Rendering New Content Types in the Lobo Java Web Browser
Many content types are already rendered by the Lobo Java web browser, to varying degrees of sophistication. How would one add a renderer for a new file type? Read on for a full explanation.
Start off by reading the Clientlets in Lobo page; it provides a lot of information, though with some significant gaps. First of all, exactly as described in that document, I created a Clientlet for rendering my content:
import java.io.IOException;
import java.io.InputStream;

import javax.swing.JScrollPane;
import javax.swing.JTextArea;

import org.lobobrowser.clientlet.Clientlet;
import org.lobobrowser.clientlet.ClientletContext;
import org.lobobrowser.clientlet.ClientletException;
import org.lobobrowser.util.io.IORoutines;

public class TextClientlet implements Clientlet {
    @Override
    public void process(ClientletContext context) throws ClientletException {
        try {
            InputStream in = context.getResponse().getInputStream();
            try {
                String text = IORoutines.loadAsText(in, "ISO-8859-1");
                JTextArea textArea = new JTextArea(text);
                textArea.setEditable(false);
                JScrollPane pane = new JScrollPane(textArea);
                context.setResultingContent(pane);
            } finally {
                in.close();
            }
        } catch (IOException ioe) {
            throw new ClientletException(ioe);
        }
    }
}
So, based on a context received from somewhere, a Swing component is created for rendering our content. Here it is very rudimentary, since it is just an example. Typically, though, you would use the class above to parse the received content and then decide how to style it, for example. Here, no styling is done at all: the received content is simply put into a text area, which is put into a scroll pane, which in turn is passed to ClientletContext.setResultingContent for display in the browser. Also note the org.lobobrowser.util.io.IORoutines class, which provides convenience methods for loading our content. If you want to use this class, you will need lobo.jar on your classpath, in addition to the lobo-pub.jar mentioned yesterday.
That's all well and good. But, how to pass content to the Clientlet? And how to determine which content to pass? Let's say, for the sake of argument, that we want to render Manifest files in the Lobo browser. By default, when you try to open a Manifest file, you will get this message:
So, we need to have Manifest files recognized and, when they are recognized, we want the Clientlet above to render the Manifest file's content. Yesterday we looked at the NavigatorExtension class, which is the "main class" for a Lobo plugin, that is, its entry point. At that point, we only looked at the NavigatorExtension.windowOpening method. This time, we need to work with the NavigatorExtension.init method. This is where the Clientlet page is pretty scant on details: "Currently only extensions (plugins) can set up clientlets in Lobo, by calling addClientletSelector when the extension is initialized."
So, after studying the related Javadoc (which is all pretty good, if you piece the bits together), I came up with this in my NavigatorExtension.init method:
private ClientletSelector selector;
private TextClientlet textClient;
@Override
public void init(final NavigatorExtensionContext ctx) {
selector = new ClientletSelector() {
public Clientlet select(ClientletRequest request, ClientletResponse response) {
if (response.matches("", new String[]{".mf"})){
textClient = new TextClientlet();
return textClient;
}
return null;
}
public Clientlet lastResortSelect(ClientletRequest arg0, ClientletResponse arg1) {
return null;
}
};
ctx.addClientletSelector(selector);
}
The most interesting line is the call to response.matches. Here we can EITHER specify a MIME type OR a (list of) file extension(s). So, in this case, I put a file extension there, namely the file extension of Manifest files. By the way, I challenge you to find the above snippet anywhere online (or anywhere else). It's the bit of magic that connects the Lobo plugin's main class to the clientlet that will render the Manifest file.
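Conceptually, addClientletSelector registers a first-match dispatcher: for each navigation, the browser asks every registered selector in turn, and the first one to return a non-null Clientlet handles the response. Here is a language-neutral sketch of that dispatch (written in Python with hypothetical names; this is not Lobo source code):

```python
# Toy model of Lobo's clientlet selection; all names here are hypothetical.
class TextRenderer:
    """Stand-in for a clientlet that renders plain text."""
    def process(self, content):
        return "rendered: " + content

def extension_selector(extensions, renderer_factory):
    """Build a selector: returns a renderer for matching URLs, else None."""
    def select(url):
        if any(url.lower().endswith(ext) for ext in extensions):
            return renderer_factory()
        return None
    return select

class Navigator:
    """Holds registered selectors and picks the first match."""
    def __init__(self):
        self._selectors = []

    def add_clientlet_selector(self, selector):
        self._selectors.append(selector)

    def select(self, url):
        # First-match wins; unmatched content falls through to the
        # browser's default handling (modeled here as None).
        for selector in self._selectors:
            renderer = selector(url)
            if renderer is not None:
                return renderer
        return None

nav = Navigator()
nav.add_clientlet_selector(extension_selector([".mf"], TextRenderer))
print(nav.select("http://example.com/MANIFEST.MF") is not None)  # True
print(nav.select("http://example.com/page.html"))                # None
```

The real API also supports lastResortSelect, which the model above omits; the essential idea is the same chain-of-responsibility lookup.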
And that's all you need. Now you're good to go. Remembering the properties file discussed previously, compile the plugin, put the JAR in the distro's ext folder, and when you restart the browser, you will be able to open Manifest files in the browser:
Of course, instead of a Manifest file, you could have anything else. Maybe JavaFX, for example.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Source: http://java.dzone.com/news/rendering-new-content-types-lo (crawl-002)
HDMI TO CSI-2
Contents
Overview
- This is a Raspberry Pi HDMI to CSI-2 module based on the Toshiba TC358743XBG chip. HDMI input supports up to 1080p at 25 fps.
According to customer feedback, this module does not support OctoPi.
Packing List
- 1 x Raspberry Pi HDMI to CSI-2 Module
- 1 x FFC Cable(14cm/5.51inch length)
Features
- Input signal: HDMI;
- Output signal: CSI;
- HDMI Input: 720p50\720p60\1080i50\1080p25
- Function: HDMI to CSI-2
- Limitation: HDMI input supports up to 1080p25fps
- Usage: Same as standard Raspberry Pi camera
- Chip: Toshiba TC358743XBG
- Compatible with: Raspberry Pi 4B/3B+/3B/2B/B+/3A+/Pi Zero/Zero W
Document
- Chip Information-EN: File:TC358743XBG datasheet en 20171026.pdf
- Chip Information-CN: File:TC358743XBG datasheet zh cn 20151218.pdf
How to check whether this module is driven correctly?
Step 1. Check whether the module is detected.
After connecting all the cables, power on the Raspberry Pi; the C779 indicator light should be solid green. Then open the Raspberry Pi terminal and enter the following command:
ls /dev
Then check whether video0 appears. If it does, the module has been successfully driven and is working normally.
If you can't find 'video0', try the following operations:
Step 2. Update and upgrade the Raspberry Pi system (you may need to change the software source for your country, and this will take a long time):
sudo apt-get update
sudo apt-get upgrade
Step 3. Enable the camera module via the raspi-config command:
sudo raspi-config
Enable the camera and reboot the Raspberry Pi.
Step 4. Repeat Step 1.
Step 5. If you still can't find 'video0' under /dev, try the following:
A. Confirm that the HDMI input device has a signal (you can test whether it displays normally by connecting it to a screen).
B. Confirm that the resolution and frame rate of the HDMI input device are at or below the maximum supported input (720p50 / 720p60 / 1080i50 / 1080p25).
C. If the problem persists, we recommend downloading the latest official Raspberry Pi image. Download URL:
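The check in Step 1 is easy to script. Below is a small illustrative Python helper (my own sketch, not from this wiki) that lists the video device nodes under /dev:

```python
import os

def find_video_devices(dev_dir="/dev"):
    """Return sorted names of video* device nodes in dev_dir, e.g. ['video0']."""
    try:
        entries = os.listdir(dev_dir)
    except OSError:
        # Directory missing or unreadable: report no devices.
        return []
    return sorted(name for name in entries if name.startswith("video"))

if __name__ == "__main__":
    devices = find_video_devices()
    if devices:
        print("Module detected:", ", ".join(devices))
    else:
        print("No video device found; work through Steps 2-5 above.")
```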
Video
Test video:
Q1: What to do if the module can't work normally?
A1:
- 1. Please use a monitor to test; DO NOT use VNC. (Customer feedback: VNC consumes some of the GPU, leaving not enough for the camera. Based on RPi 4 + Hawkeye Firefly Mini.)
- 2. First the HDMI device should be plugged in and have signal output before the Raspberry Pi is started.
- 3. Please check if there is a video related file in the /dev file.
- 4. Please provide more details so that we can confirm the issue:
a. First, please send us your order number and tell us the calling command, the input device, and the device you are using.
b. What is the HDMI input device, resolution and frequency?
c. Which version of Raspberry Pi you use?
d. What is the specific calling command?
e. What is the terminal error notification?
Q2: Some python sample code
A2: The HDMI sources supported by the Raspberry Pi with this module are 720p/50fps, 720p/60fps, 1080i/50fps, 1080p/24fps, and 1080p/25fps. Lower resolutions also work.
This is the Python code used in my video. The Pi uses an official image with no other changes.
from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.start_preview()
sleep(1000)
camera.stop_preview()
Q3: Customer feedback, for your reference
A3: If you want to use your Raspberry Pi for HDMI capture, this is the only device I'm aware of that will do it. Furthermore you can do some powerful things that would normally require equipment costing many hundreds of dollars. For example, using always-on camera preview and a few lines of Python code you can easily do image flipping, rotation, and rudimentary scaling. Note that you can't adjust color or exposure, and audio is not passed through.
I've tried this with a variety of HDMI devices; half of them work perfectly, half of them don't work at all.
Things that worked just fine:
- GoPro Hero2
- Generic no-name HDMI camera
- OREI HD-102 1x2 HDMI splitter with a Google Chromecast attached to it (but see below)
There's a downside, though. I could not get any of these to work:
- Canon 6D, which causes a "PiCameraMMALError: Failed to enable connection: Out of resources" error
- Blackmagic ATEM Mini, which produces a scrambled picture
- Google Chromecast, because I don't think this device supports HDCP; but it works fine if you strip off the HDCP.
This HDMI input module does what I care about (capturing my generic HDMI camera), but it failed at some things that thankfully I didn't need it to do. Your use case may vary so don't be surprised if some HDMI devices don't work with it.
Q4: Will this work with i2s hats such as hifiberry amp2?
A4: Since this module doesn't use any GPIO pins, we think it can work with I2S HATs.
Source: http://raspberrypiwiki.com/HDMI_TO_CSI-2 (CC-MAIN-2021-17)
The filesystem should be familiar to anyone who owns a computer. It is responsible for the hierarchical organization of files and projects, with the option to read, write, move, and delete files. In AIR for Android, files can only be created and manipulated in the application storage directory or on the SD card.
The File class belongs to the flash.filesystem package and inherits from the FileReference class. It represents the path to a directory or a file, regardless of whether that directory or file exists or has not yet been created.
You can create a File object to the path of the file you want to access, here the application storage area. The resolvePath function creates the path from the current location which is the .swf file that is currently running:
import flash.filesystem.File;
var file:File
= File.applicationStorageDirectory.resolvePath("hello.txt");
Data is read or written using FileStream. Create it, open it, and close it after the operation is complete.
The FileStream method you should use depends on the type of data you are encoding. For text files, use readUTFBytes and writeUTFBytes. For a ByteArray, use readBytes and writeBytes, and so on. For a complete list of methods, refer to the FileStream documentation.
Writing data to the file
Use the FileMode.WRITE mode to write data to the file. In the following example, we create a folder if one does not already exist, and then we create a file inside it:
import flash.filesystem.File;
import flash.filesystem.FileStream;
import flash.filesystem.FileMode;
var folder:File
= File.applicationStorageDirectory.resolvePath("greetings");
if (!folder.exists) {
folder.createDirectory();
}
var file:File = folder.resolvePath("hello.txt");
Note that although a reference to the new file has been created, the file will not exist until data is saved in it. Let’s write a simple string:
var fileStream:FileStream = new FileStream();
fileStream.open(file, FileMode.WRITE);
fileStream.writeUTFBytes("Can you hear me?");
fileStream.close();
Reading a file
Use the FileMode.READ mode to read a file. In the following example, if the file doesn't exist, we don't need to read its data:
var file:File
= File.applicationStorageDirectory.resolvePath("greetings/hello.txt");
if (!file.exists) {
return;
}
var fileStream:FileStream = new FileStream();
fileStream.open(file, FileMode.READ);
var string:String = fileStream.readUTFBytes(fileStream.bytesAvailable);
fileStream.close();
trace(string); // Can you hear me?
Deleting a file
Let’s delete the file first:
var file:File
= File.applicationStorageDirectory.resolvePath("greetings/hello.txt");
if (file.exists) {
file.deleteFile();
}
Now let's delete the directory. Note the Boolean passed to the deleteDirectory function. This is to prevent errors: if the Boolean is set to true, the directory is deleted even if it still contains files or folders:
var folder:File
= File.applicationStorageDirectory.resolvePath("greetings");
folder.deleteDirectory(true);
Choosing between synchronous and asynchronous mode
You have a choice of synchronous or asynchronous mode (the FileStream open or openAsync method). The former uses the application's main thread, so the application does not execute any other process until a command is completed. Use a try catch statement to capture any errors. The latter uses a different thread and runs in the background. You need to set listeners to be notified when the command is in progress or completed. Only one mode can be used at a time.
Asynchronous mode is often preferred for mobile development. Choosing one method over the other is also a function of how large the data set is, as well as if the saved data is needed in the current state of your application and the desired user experience.
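As a rough analogy for this distinction (sketched in Python rather than ActionScript; the names are illustrative, not AIR APIs): a synchronous read blocks the calling thread until the data is available, while an asynchronous read runs on another thread and notifies a listener when it completes.

```python
import threading

def read_sync(path):
    # Synchronous: blocks the caller until the whole file is read.
    with open(path) as f:
        return f.read()

def read_async(path, on_complete):
    # Asynchronous: reads on a background thread, then invokes the
    # listener with the result (akin to an Event.COMPLETE handler).
    def worker():
        with open(path) as f:
            on_complete(f.read())
    thread = threading.Thread(target=worker)
    thread.start()
    return thread  # the caller may join() or keep working meanwhile
```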
The write and read examples shown earlier use the synchronous method. To trace errors, use a try catch statement:
try {
var string:String = fileStream.readUTFBytes(fileStream.bytesAvailable);
fileStream.close();
} catch(error:Error) {
trace(error.message);
}
In this asynchronous example, we read a text file and store it when it has been received in its entirety:
import flash.events.Event;
import flash.events.ProgressEvent;
var fileStream:FileStream;
var file:File
= File.applicationStorageDirectory.resolvePath("hello.txt");
if (!file.exists) {
return;
}
fileStream = new FileStream();
fileStream.addEventListener(ProgressEvent.PROGRESS, onProgress);
fileStream.addEventListener(Event.COMPLETE, onComplete);
fileStream.openAsync(file, FileMode.READ);
function onProgress(event:ProgressEvent):void {
trace(fileStream.bytesAvailable);
}
function onComplete(event:Event):void {
fileStream.removeEventListener(ProgressEvent.PROGRESS, onProgress);
fileStream.removeEventListener(Event.COMPLETE, onComplete);
var bytes:uint = fileStream.bytesAvailable;
fileStream.close();
}
Writing data and saving it to temporary files
You can write and save data to a temporary directory or file. The data is saved to the application cache. This method is particularly useful if you have data you want to temporarily save during the life of the application while keeping memory free. You can create a directory or a file:
var tempDirectory:File = File.createTempDirectory();
var tempFile:File = File.createTempFile();
trace(tempDirectory.nativePath, tempDirectory.isDirectory);
trace(tempFile.nativePath, tempFile.isDirectory);
Keep a variable reference to the data while the application is running, and delete the file when the application quits. If you want to make the cache data persistent, save its path using its nativePath property, and save its name using its name property, in a SharedObject or another file in the application storage directory. Finally, another good place to save temporary files is the SD card:
tempFile.deleteFile();
tempDirectory.deleteDirectory(true);
Unlike AIR on the desktop, when files are deleted they are removed immediately because the device doesn’t have a trash folder. The following command gives the same result as the ones shown earlier:
tempFile.moveToTrash();
tempDirectory.moveToTrash();
In addition to these features, you can also append data to the end of a file; when updating the file, you can read it and write to it at the same time. Also useful is the ability to copy files and folders and move them to another location. You can use this technique to move some of the application assets to the SD card. Note, however, that you should never delete any files that came installed with your application, because that will invalidate it.
Source: https://www.blograby.com/developer/the-filesystem.html (CC-MAIN-2019-13)
Ok. Here is a suggestion; call it #9. It incorporates several ideas
floating around:
- It uses the Python/Ruby style of name resolution, as suggested
by T.Onoma and Why. That is, you check for a local (aka private)
package first, next you check built-in packages, and failing that,
an exception is raised.
- It incorporates David's suggestion of limiting built-in types to
only words (but allowing the '/'). This helps reduce the chance of
collisions; you can be sure that resolution of built-in packages
will always fail if you use names like "Perl::Package" or
"com.company.JavaPackage", etc.
- It also incorporates David's suggestion of using 'implicit-plain'
and 'implicit-not-plain' tags to make implicits easier to grok;
this happens to put some very nice makeup on an ugly wart.
- It follows T.Onoma's request that he be able to specify a
private tag that is _not_ subject to default %TAG cooking.
It makes it possible to _expressly_ disable cooking no matter
what %TAGs are present.
- It allows people to use YAML tags in most cases without problem;
but if they really want to be super-safe, they would need
to use explicit %TAG based typing.
- It provides a model for Brian's notion that the Application
is the final authority of what each node's tag is; that is,
the proposal formalizes ambiguity.
- It incorporates, for the first time, a rationalization of
how implicit typing should be done, which is still poorly
defined and explained in the specification.
First, let me review/define the types of 'serialization' tags:
- Global tags are those that are globally unique, traditionally,
these have been URIs; that is, they start with a word followed by a
colon and use only URI characters. Strictly speaking, Perl::Packages
happen to match this production, so they could also be considered
global even though they are not URIs.
- Private tags are those that have meaning only within a given
processing environment. They are convenient to use, but may conflict
with other uses. Therefore, they should be used carefully; but in
99% of cases, there just isn't a problem with collisions.
- Magical tags are those which are explicitly provided, but happen
to not be Global nor Private. It is not necessary that magic
tags be used; as a combination of global or private tags would
suffice for many purposes.
- Missing tags are those that are not provided in the YAML
syntax. These have traditionally been called "implicit" tags,
but please use "missing" instead, as it is far more clear.
Then, we define a process, called 'Cooking', which is done by the parser
and is purely a syntax-only operation on a Document's tags. The cooking
process uses the %TAG directive to change magical tags into either
Global tags, or Ambiguous tags (defined below). This is done without
any application involvement and is completely defined by the YAML
specification.
- Ambiguous tags are Magical tags which do not become 'Global' during
the cooking process. They are also Missing tags, with the following
names (provided by the Cooking process):
plain scalar -> !implicit-plain
non-plain scalar -> !implicit-scalar
mapping -> !implicit-mapping
sequence -> !implicit-sequence
Therefore, the result of the 'Cooking' process is a non-empty
tag having either Global, Private, or Ambiguous tags. While
it is not strictly necessary to give mappings and sequences
non-empty tags, it is done for consistency.
Then, we have another process, called 'Resolution', which converts Ambiguous
tags into either Global or Private tags. Unlike cooking, this is an
application-directed process; probably carried out by the YAML Processor
via given instructions. The information used by the resolution process
is restricted to that provided in the YAML Representational Model. In
particular, 'Resolution' should be viewed as a transformation of the
YAML graph, the result of resolution _is_ a different YAML document,
albeit one that will typically be directly related to the source
document plus schema information. Note that 'Resolution' does not in
any way affect Global nor Private tags. Thus, one can provide a private
or global tag, and no matter how the resolution process is defined, it
will be passed through unchanged.
The last stage of processing, 'Recognition' usually happens during
loading, where each node's tag is used to "find" an appropriate native
data type and construct the appropriate binding. If a tag is not
'recognized' during this process, it is an error.
states: { O: Original, C: Cooked, R: Resolved }
category: { G: Global, P: Private, _: Missing,
M: Magic, A: Ambiguous, '*': Depends }
In a more concrete form,
--- # OCR After-Cooking
- ! # GGG
- !Perl::Package # GGG Perl::Package
- !!private # PPP private
- # _A* implicit-plain
- '' # _A* implicit-scalar
- !int # MA* int
...
%TAG clarkevans.com,2004: #default namespace
--- # OCR After-Cooking
- ! # GGG
- !Perl::Package # GGG Perl::Package
- !!private # PPP private
- # _A* implicit-plain
- '' # _A* implicit-scalar
- !int # MGG tag:clarkevans.com,2004:int
...
%TAG clarkevans.com,2004: cce
--- # OCR After-Cooking Resolve?
- ! # GGG No
- !Perl::Package # GGG Perl::Package No
- !!private # PPP private No
- # _A* implicit-plain Yes
- '' # _A* implicit-scalar Yes
- !cce^int # MGG tag:clarkevans.com,2004:int No
- !int # MA* int Yes
...
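To make the Cooking and Resolution stages concrete, here is a toy Python sketch of the pipeline illustrated above (my own illustration for this thread, not code from any YAML processor; the helper names are invented):

```python
# Names the Cooking step assigns to Missing (implicit) tags.
IMPLICIT_TAGS = {
    "plain-scalar": "!implicit-plain",
    "scalar": "!implicit-scalar",
    "mapping": "!implicit-mapping",
    "sequence": "!implicit-sequence",
}

def cook(tag, kind, tag_prefix=None):
    """Syntax-only Cooking: name Missing tags, expand Magic tags via %TAG."""
    if tag is None:                          # Missing -> Ambiguous
        return IMPLICIT_TAGS[kind]
    if tag.startswith("!!"):                 # Private: never touched
        return tag
    if ":" in tag:                           # Global (URI-like): never touched
        return tag
    if tag_prefix is not None:               # Magic + %TAG -> Global
        return "tag:" + tag_prefix + tag.lstrip("!")
    return tag                               # Magic, no %TAG: stays Ambiguous

def resolve(tag, schema):
    """Application-directed Resolution of Ambiguous tags only."""
    if tag.startswith("tag:") or tag.startswith("!!"):
        return tag                           # Global/Private pass through
    # Unrecognized Ambiguous tags are left for Recognition to reject.
    return schema.get(tag, tag)
```

Note how this mirrors the tables above: !Perl::Package and !!private pass through both stages unchanged, while !int becomes Global only when a %TAG prefix is in effect, and is otherwise left Ambiguous for the application to resolve.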
Basically, this proposal, which we can call #9 if you wish,
is much like #8, only that the default is not private; it is
the process of:
- check for private matches, if not,
- check for any 'regex' based matches
- use matches from tag:yaml.org,2004,
namely !str, !map, !seq for implicit-s
- raise an exception.
So, it attempts to blend the 'implicit' mechanism with the
!unambiguous tags. If people use !ambiguous tags... well,
that's their choice; possibly enough rope so they can do
cool things, or perhaps enough rope to hang themselves.
But in any event, using ambiguous tags (implicit, or
non-private non-global tags) _is_ recognized as a transformation
of the YAML document and treated appropriately.
Cheers!
Clark
View entire thread
Source: http://sourceforge.net/p/yaml/mailman/message/11694854/ (CC-MAIN-2015-48)
megrok.menu 0.4
Grok extension to configure browser menus
This package allows you to register browser menus and menu items for browser views in Grok.
A menu is easily registered by creating a subclass of megrok.menu.Menu:
import grok
import megrok.menu

class Tabs(megrok.menu.Menu):
    grok.name('tabs')
    grok.title('Tabs')
    grok.description('')
A view can then placed on a menu with the menuitem directive:
class Edit(grok.View):
    grok.title('Edit')
    grok.description("Change this object's data.")
    megrok.menu.menuitem('tabs')
    ...
The title and description directives used here specify the menu item's label and description. The menuitem directive takes at least one argument: the menu that the item is registered for. This can be either an identifier string or the menu class itself. Other optional parameters include icon, filter, order and extra.
For more use cases and examples, take a look at tests/test_functional.py.
Changelog
0.4 (2010-03-06)
- Cleaned the tests module. Now, we only use the ZTK packages to test.
- The dependencies have been cleared. We no longer depend on zope.app packages.
- Updated the security grokking in the menu items grokker. We don't need the protect_getattr, as the view security grokker already does it for us.
- Fixed the dependencies in the package requires. All dependencies are now clearly declared.
- Added a LICENSE.txt file for the ZPL 2.1.
0.3 (2009-11-02)
- Added support for the grokcore.viewlet 'order' directive to reorder the menu items and sub-menus. This makes it possible to have a base class defining the basic menu while keeping the ordering possibility in the subclasses. We should probably do the same for the other arguments of the menuitem directive; that would allow more genericity and reusability. Note: this change is 100% backward compatible. Simply added tests to show the behavior. [trollfot]
- Get rid of the grok dependency. Now depends only on grokcore.* packages
- Updated the build process
0.2 (2009-03-02)
- Compatible with grok 1.0a1
- Added the SubMenuItem base class and its grokker, SubMenuItemGrokker
- Added the extra parameter to the menuitem directive
- Added tests for the new functionality
- Removed version.cfg
0.1 (2008-07-12)
Initial release.
- Author: The Grok community
- License: ZPL
- Categories
- Package Index Owner: philikon, faassen, sancho, trollfot
- Package Index Maintainer: sancho
- DOAP record: megrok.menu-0.4.xml
Source: http://pypi.python.org/pypi/megrok.menu/ (crawl-003)
Computer Science Archive: Questions from July 21, 2011
- Anonymous asked: Create class IntegerSet for which each object can hold integers in the range 0 through 100. A set is...
- home44 asked: I was needing to know how to take information from an input file and put it directly to an output file. What would the code be? See example.
The input file that I am given has:
f 3.40
f 4.00
m 3.56
m 3.80
What do I need to write in order to get it to print directly to the output file?
- Anonymous asked: Write a program that prompts the user to enter an input file name and an output file name, and saves the encrypted version of the input file to the output file. Encode the file by adding 5 to every byte in the file. Also need to decrypt the file. In Java, please.
- Anonymous asked: Display a checkerboard in which each white and black cell is a JButton. Rewrite a program that draws a checkerboard on a JPanel using the drawing methods in the Graphics class.
- Anonymous asked: Write a program that displays four icons, in four labels. Add a line border on each label (use an i...
- Anonymous asked: Any help would be appreciated!
Club Membership Linked List
A register of club members can be created in the form of a linked list. Each member is represented by an element or node in the list consisting of Identification Number, Name, Phone, Status Code and Income. The code to define the necessary elements to construct a linked list of member elements is as follows:
struct NodeType
{
long Id_Num;
string Name;
string Phone;
char Status;
float Income;
NodeType* Next;
};
This structure can be coded as part of private or it could appear above the class declaration.
Code a program that will read the 5 records in the associated input text file (Member.dat) and store the data into a dynamically allocated linked list. To open the input text file, only the filename (Member.dat) should be specified in the open statement in your source code.
After the initial creation of the linked list from the input text file, the program shall be menu driven with the menu presenting the user with the following options:
1. Add a member
2. Remove a member
3. Display a member
4. Display all members
5. Quit
Create a linked list class and include the above data items in the private section of the class and include the appropriate supporting member functions in the public section of the class. A function to search through the linked list to find (or not find) a member shall be private because it will only be called by the add, remove and display functions and will be transparent to the client. Provide appropriate error messages for each of the following conditions: (1) invalid menu option selected; (2) an attempt is made to add a member who already exists in the list; (3) an attempt is made to remove a member not contained in the list; (4) an attempt is made to display a member not contained in the list: (5) negative id entered when adding a member.
When the option is selected to display all members, format the data as a table: all fields are columns and each member record takes a row. All text fields are aligned and left justified and numeric fields are right justified. Records should be keyed on the Identification Number. All output should go to the screen; no file output is required.
- Anonymous asked: 1. Suppose we want to extend the Echo class to a class called DoubleEcho, which echoes the text file...
- Anonymous asked: Programming: This program is designed to analyze the growth of two cities. Each city has a s...
For each of the two programming problems, create a program using Visual C++.NET. Make sure to capture a sample of your program's output. The best way to do this is to click on the console window you want to capture and then press the Alt and PrintScreen keys at the same time. Then paste your captured screen image into a Word document. For each of the two programs, put the screen capture followed by a copy of your source code into your Word document.
- SparklingTea1019 asked: Write a menu-driven mini-statistics package. A user should be able to enter up to 200 i...
In C Language
Write when you are done with data input.
Item #1 : 25
Item #2 : 36
Item #3 : 27.5
Item #4 : 28
Item #5 : 32
Item #6 : 33.25
Item #7 :
This program will perform the following:
1) Enter data.
2) Display the data and the following statistics:
the number of date item, the high and low values in the data, the mean, median, mode, variance and standard deviation.
3) Quit the Program
- FamousSpoon9151 asked: Write a program that can be used by a ski resort to keep track of local snow conditions for one week. It should have a seven-element array of structures, where each structure holds a date and the number of inches of snow in the base on that date. The program should have the user input the name of the month, the starting and ending dates of the seven-day period being measured, and then the seven base snow depths. The program should then sort the data in ascending order by base depth and display the results. Here is a sample report.
Snow Report December 12 - 18
Date Base
13 42.3
12 42.5
14 42.8
15 43.1
18 43.1
16 43.4
17 43.8
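The core of this exercise, sorting the week's records in ascending order by base depth, is a one-liner in most languages. Here is a Python sketch (the assignment itself calls for an array of structs, so this is only illustrative):

```python
from collections import namedtuple

# One record per day, mirroring the structure described in the exercise.
SnowDay = namedtuple("SnowDay", ["date", "base"])

def sort_by_base(days):
    # Stable ascending sort on base depth; ties keep their input order,
    # which is why date 15 precedes date 18 at 43.1 in the sample report.
    return sorted(days, key=lambda day: day.base)

week = [SnowDay(12, 42.5), SnowDay(13, 42.3), SnowDay(14, 42.8),
        SnowDay(15, 43.1), SnowDay(16, 43.4), SnowDay(17, 43.8),
        SnowDay(18, 43.1)]

print("Date  Base")
for day in sort_by_base(week):
    print("%4d  %4.1f" % (day.date, day.base))
```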
- WhisperingFrog1202 asked:
Write a program that reads one or more sets of numbers from the user. Each set of numbers consists of an integer N followed by N integer values. These values will be in the range from -1,000,000 to 1,000,000 (guaranteed). Your task is to compute the mean, median, and mode of each set of N integers. N may range from 1 to 25.
As output for each set of numbers, print a single line containing the word "Mean" followed by the mean, the word "Median" followed by the median, and the word "Mode" followed by the mode, leaving enough spacing on the line to allow easy readability. Subsequent lines should be aligned with the first line (thus creating columns). Fractional portions should be correct to at least two decimal positions.
Definitions for purposes of this problem:
Mean is defined as the result of dividing the sum of the N numbers by N, the number of numbers.
Median is the "middle" number. That is, in an ordered list of numbers, the one in the middle of the list is the median. If N is even, then the mean of the two numbers in the middle is the median.
Mode is the value that occurs most often in the list. In case of ties, report the smallest value that meets the criteria.
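The three definitions above translate almost directly into code. Here is a short Python sketch (illustrative only; the assignment asks for C), including the smallest-value tie-break for the mode:

```python
from collections import Counter

def mean(xs):
    # Sum of the N numbers divided by N.
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    # Odd count: the middle value; even count: mean of the two middle values.
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def mode(xs):
    counts = Counter(xs)
    top = max(counts.values())
    # Ties: report the smallest value among the most frequent.
    return min(v for v, c in counts.items() if c == top)
```

On the first sample set (1, 3, 5, 4, 5) these yield mean 3.60, median 4, and mode 5.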
Sample input:
How many numbers 5
Please enter 5 numbers
1
3
5
4
5
Mean: 3.60 Median: 4 Mode: 5
Go again (y/n) y
How many numbers 4
Please enter 4 numbers
6
2
5
25
Mean: 9.50 Median: 5.50 Mode: 2
Go again (y/n)
--------------------------------------------------------------------------------
Development Details
Design your code in as modular a fashion as possible. Individual tasks should go in individual functions. You WILL be graded heavily on this.
No set of numbers will be bigger than 25. This means you will declare a static array of 25. You will fill that array with as many numbers as specified within the file. You will have to ensure you don’t use more of the array than required.
Error checking for range is a must
Name the C file that contains your main method cscd255hw3.c
main should look like this
int main(int argc, char *argv[])
{
int numbers[MAX];
int total;
double mean, median, mode;
do
{
total=readTotal();
fillArray(numbers, total);
sort(numbers, total);
mean=findMean(numbers, total);
median=findMedian(numbers, total);
mode=findMode(numbers, total);
printArray(mean, median, mode);
}while(goAgain());
2 answers
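The question defines the three statistics precisely (mean as sum over N, median as the middle of the sorted list or the mean of the two middle values, mode as the most frequent value with ties broken toward the smallest). The assignment itself asks for C, but the logic can be sketched in Java for illustration; the class and method names here are hypothetical:

```java
import java.util.Arrays;

public class SetStats {
    // Mean: sum of the N numbers divided by N.
    static double mean(int[] nums) {
        long sum = 0;
        for (int n : nums) sum += n;
        return (double) sum / nums.length;
    }

    // Median: middle of the sorted list; mean of the two middle values if N is even.
    static double median(int[] nums) {
        int[] sorted = nums.clone();
        Arrays.sort(sorted);
        int mid = sorted.length / 2;
        return sorted.length % 2 == 1
            ? sorted[mid]
            : (sorted[mid - 1] + sorted[mid]) / 2.0;
    }

    // Mode: most frequent value; on a tie, the smallest such value.
    // Sorting first groups equal values into runs, and scanning runs from the
    // smallest value upward makes the tie-break fall out for free.
    static int mode(int[] nums) {
        int[] sorted = nums.clone();
        Arrays.sort(sorted);
        int best = sorted[0], bestCount = 0;
        int i = 0;
        while (i < sorted.length) {
            int j = i;
            while (j < sorted.length && sorted[j] == sorted[i]) j++;
            if (j - i > bestCount) { bestCount = j - i; best = sorted[i]; }
            i = j;
        }
        return best;
    }
}
```

With the sample data {1, 3, 5, 4, 5} this yields mean 3.60, median 4, mode 5, matching the sample run in the question.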
- home44 asked: This program works great, but I am needing to get the input file information and print it directly to the output file. Here I have the input file info, then the program, and then what the output file should look like. The yellow portion is what I am having an issue with. This info is pulled directly from the input file.
This is in the input file
-------------------------------------------------------------------
#include <iostream>
#include <fstream>
using namespace std;
//Variables
int countMale, countFemale;
float sumMaleGPA, sumFemaleGPA;
float avgMaleGPA, avgFemaleGPA;
//Open input and output files
void openFiles (ifstream &inFile, ofstream &outFile)
{
//Get information from input file
inFile.open ("Ch7_Ex6Data.txt");
if (inFile.fail())
{
cout <<"Can not find the file input.txt";
cout <<"Exiting the program...";
system ("pause");
exit(0);
}
//Opens the output file
outFile.open ("Ch7_Ex6out.txt");
//Set precision
outFile.setf(ios::fixed,ios::floatfield);
outFile.precision (2);
//Output to console
cout.setf(ios::fixed,ios::floatfield);
cout.precision(2);
}
void initialize()//Initializing Variables
{
countMale=0;
countFemale=0;
sumMaleGPA=0.0;
sumFemaleGPA=0.0;
avgMaleGPA=0.0;
avgFemaleGPA=0.0;
}
//Sum of the student grades
void sumGrades (ifstream &inFile)
{
char gender;
double gpa;
while (!inFile.eof())
{
inFile>>gender;
inFile>>gpa;
if(gender=='M'||gender=='m')
{
sumMaleGPA += gpa;
countMale++;
}
else
{
//Initialize Variables
sumFemaleGPA += gpa;
countFemale++;
}
}
//Close input file
inFile.close();
}
//Find average student grades
void averageGrade()
{
avgMaleGPA=sumMaleGPA/countMale;
avgFemaleGPA=sumFemaleGPA/countFemale;
}
//Print
void printResults(ofstream &outFile)
{
outFile<<"Sum female GPA = "<<sumFemaleGPA<<endl;
cout<<"Sum female GPA = "<<sumFemaleGPA<<endl;
outFile<<"Sum male GPA = "<<sumMaleGPA<<endl;
cout<<"Sum male GPA = "<<sumMaleGPA<<endl;
outFile<<"Female count = "<<countFemale<<endl;
cout<<"Female count = "<<countFemale<<endl;
outFile<<"Male count = "<<countMale<<endl;
cout<<"Male count = "<<countMale<<endl;
outFile<<"Average female GPA = "<<avgFemaleGPA<<endl;
cout<<"Average female GPA = "<<avgFemaleGPA<<endl;
outFile<<"Average male GPA = "<<avgMaleGPA<<endl;
cout<<"Average male GPA = "<<avgMaleGPA<<endl;
//Close output file
outFile.close ();
}
int main()
{
//input file
ifstream inFile;
//output file
ofstream outFile;
//Calls
openFiles(inFile,outFile);
initialize();
sumGrades(inFile);
averageGrade();
printResults(outFile);
system ("pause");
return 0;
}
----------------------------------------------------------
This is what the output file should look like
Processing grades.
Sum female GPA =
Sum male GPA =
Female count =
Male count =
Average female GPA =
Average male GPA =
1 answer
- Anonymous asked: You are to develop an Air Ticket Reservation database system. Description of current business activiti... 1 answer
- Anonymous asked: What screen output does the following code segment produce?
p = 2 q = 4
while p < q
Output “Adios”
r = 1
while r < q
output “Adios”
r = r + 1
endwhile
p = p + 1
endwhile
1 answer
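The pseudocode above can be transliterated directly to check the trace: the outer loop runs for p = 2 and p = 3, and each pass prints "Adios" once plus three more times in the inner loop (r = 1, 2, 3), for 8 lines in total. A sketch counting the outputs:

```java
public class AdiosTrace {
    static int countOutputs() {
        int count = 0;
        int p = 2, q = 4;
        while (p < q) {
            count++;              // outer Output "Adios"
            int r = 1;
            while (r < q) {
                count++;          // inner output "Adios"
                r = r + 1;
            }
            p = p + 1;
        }
        return count;            // 2 outer passes * (1 + 3) = 8
    }
}
```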
- Anonymous asked: Design a program that allows a user to enter 10 numbers, then displays them in the reverse order of ... 1 answer
- Anonymous asked: Design a class named CustomerRecord that holds a customer number, name, and address. Include methods... 1 answer
- Anonymous asked: Create a new JAVA class inside your project folder. The name of your new JAVA class should be ListStats. Write a program that will read in a list of positive integers (including zero) and display some statistics regarding the integers. The user will enter the list in sorted order, starting with the smallest number and ending with the largest number. Your program must store all of the integers in an array. The maximum number of integers that you can expect is 100; however, there may not necessarily be that many. Your program should allow the user to continue entering numbers until they enter a negative number or until the array is filled - whichever occurs first. If the user enters 100 numbers, you should output a message stating that the maximum size for the list has been reached, then proceed with the calculations.
• First, ask the user to enter an integer to search for.
• Next, search the array to determine if the given integer is in the array.
• If the integer is not found, display a message stating that the integer is not in the list.
• If the integer is found, then display the position number of where you found the integer. If the integer happens to be in the array more than once, then you only need to tell the first position number where you found it.
• After performing the search, ask the user if he/she wants to search for more integers. Continue searching for numbers until the user answers "N".
Technical notes and restrictions:
• Remember that the keyboard object should be declared as a class variable (global).
• You are only allowed to use the number 100 one time in your program! You are not allowed to use the numbers 101, 99, 98, or any other number that is logically related to 100 anywhere in your program. If you need to make reference to the array size then, use the length variable to reference this rather than hard-coding numbers into your program.
• You are required to write and use at least FOUR methods in this program:
• A method to get the numbers from the user, and place them into an array. This method should return the number of integers that it actually retrieved. The array will be passed back through a parameter (remember that the method can make changes to an array's contents and the main program will automatically see the results).
• A method to calculate, and return, the median of the array
• A method to calculate, and return, the average of the numbers in the array
• A method to search for a specified value in the array. This method should return the first position number where the item is found, or return -1 if the item is not found.
• The two calculating methods, and the searching method, should not get any values from the user, and should not print anything to the screen. The main program is responsible for all of the printing.
The method which gets the numbers will need (of course) to get values from the user, but should not print anything to the screen.
• Absolutely NO global variables (class variables) are allowed except for the keyboard object.
• Remember that at this stage, COMMENTING IS A REQUIREMENT! Make sure you FULLY comment your program.
You also should include comments that explain what sections of code is doing. Notice the key word "sections"! This means that you need not comment every line in your program. Write comments that describe a few lines of code rather than "over-commenting" your program.
• You MUST write comments about every method that you create. Your comments must describe the purpose of the method, the purpose of every parameter the method needs, and the purpose of the return value (if the method has one).
• Build incrementally. Don't tackle the entire program at once. Write a little section of the program, then test it AND get it working BEFORE you go any further. Once you know that this section works, then you can add a little more to your program. Stop and test the new code. Once you get it working, then add a little bit more.
• Make sure you FULLY test your program! Make sure to run your program multiple times, inputting combinations of values that will test all possible conditions for your IF statements and loops. Also be sure to test border-line cases.
1 answer
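Three of the four required methods (median, average, and search) can be sketched in isolation, leaving out the keyboard input so the logic is testable on its own. The class and parameter names below are hypothetical; note that because the assignment guarantees the list is entered in sorted (ascending) order, the median needs no extra sorting:

```java
public class ListStatsSketch {
    // Median of the first `count` entries of an already-sorted array.
    static double median(int[] nums, int count) {
        int mid = count / 2;
        return count % 2 == 1 ? nums[mid] : (nums[mid - 1] + nums[mid]) / 2.0;
    }

    // Average of the first `count` entries.
    static double average(int[] nums, int count) {
        long sum = 0;
        for (int i = 0; i < count; i++) sum += nums[i];
        return (double) sum / count;
    }

    // First position of `target` among the first `count` entries, or -1.
    static int search(int[] nums, int count, int target) {
        for (int i = 0; i < count; i++) {
            if (nums[i] == target) return i;
        }
        return -1;
    }
}
```

Passing `count` separately mirrors the assignment's design: the array is declared at the maximum size, but only the entries actually read are meaningful.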
- Anonymous asked: Consider the following relations: R1: (ISBN, Title, PublisherName, Price, StockCnt, OrdNo, OrderDate... 2 answers
- Anonymous asked: I am in big time need of the answer to the COBOL question I have had posted for 22 hours. Provide th... 1 answer
- Anonymous asked: I have to write a code for a maze. I have already completed the core of it. However, I can't get the backtracking portion done. It checks to see if there is a '.' in the current location; if there is, then move there. If it eventually reaches a dead end, start over and try again with moves not already tried. I know I am missing a boolean variable or something. Can someone please help me?
public boolean solveMaze(int x, int y)
{
maze1.createPath(x,y);//places an x on current location
mazeSolve=maze1.copyArray();//makes a copy of current array
if(x == endx && y == endy)//base case this is the exit
return true;
else if (mazeSolve[x+1][y]=='.')//moves down
{
solveMaze(x+1,y);
}
else if (mazeSolve[x][y+1]=='.')//moves right
{
solveMaze(x,y+1);
}
else if (mazeSolve[x-1][y]=='.')//moves up
{
solveMaze(x-1, y);
}
else if (mazeSolve[x][y-1]=='.')//moves left
{
solveMaze(x,y-1);
}
else{
maze1.removePath(x,y);//replaces 'x' with '.'
mazeSolve=maze1.copyArray();
}
return false;
}//end recursion
1 answer
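The backtracking piece the poster is missing has two parts: each recursive call's result must be propagated back (the code above discards it), and the mark on the current cell must be undone when every direction fails. A hedged, self-contained sketch of that idea; the grid parameter and method name are stand-ins for the poster's maze1/mazeSolve fields:

```java
public class MazeSketch {
    static boolean solve(char[][] maze, int x, int y, int endx, int endy) {
        if (x == endx && y == endy) return true;       // base case: exit reached
        maze[x][y] = 'x';                              // mark current location
        int[][] moves = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}};  // down, right, up, left
        for (int[] m : moves) {
            int nx = x + m[0], ny = y + m[1];
            if (nx >= 0 && nx < maze.length && ny >= 0 && ny < maze[0].length
                    && maze[nx][ny] == '.') {
                if (solve(maze, nx, ny, endx, endy)) return true;  // propagate success
            }
        }
        maze[x][y] = '.';                              // dead end: undo the mark
        return false;                                  // let the caller try elsewhere
    }
}
```

Undoing the mark on failure is the "backtrack": the cell becomes available again for a different route, which is what the poster's removePath call was reaching for.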
- Anonymous asked: I have to create a random maze array. A 2D array with 'x' as the walls. The first column must contain one '.' for the entrance and the greatest column must contain '.' as the exit. Inside the maze, each cell should be either 'x' or '.', with '.' representing the path; there is a 1/3 chance it is 'x' and a 2/3 chance it is '.'.
I am stuck on this. I so far have:
System.out.print("\nCreating Random Maze.\n");
int ranRow, ranCol;//used to pick random row and col
Random rand = new Random();
ranRow = rand.nextInt(7)+1;// to be generated within range 6 to 12
ranCol = rand.nextInt(7)+1;
MazeArray = new char[ranRow][ranCol];
for (int i = 0; i < MazeArray.length; i++)
for (int j = 0; j < MazeArray[0].length; j++){
MazeArray[i][j]='X';
Can someone please help!?
1 answer
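Building on where the poster got stuck: after filling the whole grid, the interior cells can be decided with `rand.nextInt(3)`, which is 0 with probability 1/3 (wall) and 1 or 2 with probability 2/3 (path), and then one cell in the first and last columns is opened. A hedged sketch with hypothetical names; the `Random` is passed in so the result is reproducible:

```java
import java.util.Random;

public class RandomMaze {
    static char[][] create(int rows, int cols, Random rand) {
        char[][] maze = new char[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                boolean border = i == 0 || i == rows - 1 || j == 0 || j == cols - 1;
                // Border cells are always walls; interior cells are walls
                // with probability 1/3 (nextInt(3) == 0), paths otherwise.
                maze[i][j] = border || rand.nextInt(3) == 0 ? 'x' : '.';
            }
        }
        maze[1 + rand.nextInt(rows - 2)][0] = '.';          // one entrance, first column
        maze[1 + rand.nextInt(rows - 2)][cols - 1] = '.';   // one exit, last column
        return maze;
    }
}
```

Punching the entrance and exit after the fill guarantees exactly one '.' in each of the first and last columns, regardless of what the random fill produced.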
- Anonymous asked: Can a Java class inherit from two or more classes?
In Java, multiple inheritance is implemented using the concept of _________?
What Java keyword is used in a class header when a class is defined as inheriting from an interface?
1 answer
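A short sketch answering the quiz informally: a Java class can extend only one class, but it can implement any number of interfaces (introduced with the `implements` keyword), which is how Java approximates multiple inheritance. The interface and class names here are made up for illustration:

```java
// A class may implement several interfaces at once, separated by commas.
interface Swimmer { default String swim() { return "swimming"; } }
interface Runner  { default String run()  { return "running"; } }

// Triathlete "inherits" behavior from both interfaces via default methods.
class Triathlete implements Swimmer, Runner { }
```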
- Anonymous asked:
public class F
{
private String first;
protected String name;
public F( )
{ }
public F(String f, String n)
{
first = f;
name = n;
}
public String getFirst( )
{
return first;
}
public String toString( )
{
return ( "first: " + first + "\tname: " + name );
}
public boolean equals( Object f )
{
if ( ! ( f instanceof F ) )
return false;
else
{
F objF = ( F ) f;
return( first.equals( objF.first ) && name.equals( objF.name ) );
}
}
public interface I
{
public static final String TYPE = "human";
public abstract int age( );
}
The G class inherits from the F class. Code the class header of the G class.
// your code goes here
1 answer
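One plausible answer to the header exercise, shown with minimal stand-ins for F and I so the snippet compiles on its own (the real F and I from the question have more members). The line of interest is the `class G` header; the `implements I` clause is optional unless the assignment asks for it, but it shows where such a clause would go:

```java
class F { }                         // stand-in for the question's F class
interface I { int age(); }          // stand-in for the question's I interface

// The header: G inherits from F via `extends`.
class G extends F implements I {
    public int age() { return 0; }  // required because G implements I
}
```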
- Anonymous asked:
public abstract class M
{
private int n;
protected double p;
public abstract void fool( );
}
You coded the following class:
public class P extends M
{
}
When you compile, you get the following message:
P.java:1: P is not abstract and does not override abstract method
fool() in M
public class P extends M
^
1 error
1 answer
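The compiler message means exactly what it says: a concrete subclass of an abstract class must override every abstract method, here fool(), or else be declared abstract itself. A sketch of the fix, using stand-in names so it compiles alongside the question's code:

```java
abstract class M2 {                  // stand-in for the question's M
    public abstract void fool();
}

class P2 extends M2 {                // stand-in for P
    private String last = "";

    @Override
    public void fool() {             // providing a body removes the error
        last = "fooled";
    }

    String last() { return last; }   // lets a caller observe that fool() ran
}
```

The alternative fix, `public abstract class P2 extends M2 { }`, also compiles, but then P2 cannot be instantiated.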
- Anonymous asked:
2. Create a Microsoft Word table showing each layer of the OSI model in the first column and
providing a one paragraph description of that layer in the second column.
3. Add to your table a column showing which pieces of hardware most commonly manage each
layer.
4. Add to your table a column showing the protocols employed at each level.
5. Add a column to your table showing how data is built from layer to layer (Hint:-use the
landscape view page layout)
1 answer
- Anonymous asked: What is the syntax of a programming language, and what do you mean by semantics? (10 points)
Compare and contrast Structural Analysis and Design and Object Oriented Analysis and Design. (10 points)
Explain briefly what the Waterfall model, incremental model, and spiral model mean. Explain in your own words where you think each of these methodologies would be preferred and why. (30 points)
1 answer
- EagerTree4901 asked:
Directions
Points
The files must be called <YourNameProg7.java>.
Example: JimRobbinsProg7.java.
Total Points
80
1 answer
- Anonymous asked: Suppose we want to extend the Echo class to a class called DoubleEcho, which echoes the text file to the console as Echo does, but does so in double-spaced format. Thus if the file sample.txt holds these lines:
The best things in life are free
A stitch in time saves nine
Still waters run deep
the console should display:
The best things in life are free

A stitch in time saves nine

Still waters run deep
[Note: To double-space the output, insert a blank line between consecutive lines of text.]
/JavaCS1/src/echo/scanner/Echo.java
/JavaCS1/src/echo/scanner/EchoDouble.java
/JavaCS1/src/echo/scanner/EchoTester.java
/JavaCS1/src/echo/sample.txt
import java.io.*;
public class EchoDouble extends Echo
{
public EchoDouble (String datafile) throws IOException
{
super(datafile);
}
// Prints the given line and inserts a blank line
// Overrides the processLine method in Echo class
public void processLine(String line)
{
put your code here
} //end main
} //end class
2. In this example we extend the Echo class to a class called EchoNoSpace. Here the external text file is written to the console as before, but now all spaces are removed first before displaying each line. Thus a line like
The best things in life are free
becomes
Thebestthingsinlifearefree
The code for doing this uses a StringTokenizer object. The link below gives the full class, and the answer block below provides the implementation for the processLine() method in that derived class.
The EchoNoSpace class has an additional integer attribute, wordCount, which is initialized to 0. After the file has been processed, we would like this variable to hold the exact number of words (tokens) in the file. The EchoTester code at the link below reports this value as follows:
System.out.println("wordCount: " + e.getWordCount());
For this assignment, edit the processLine() code in the answer box so that the EchoTester's main() method reports the correct value for the number of words in the file.
/JavaCS1/src/echo/scanner/Echo.java
/JavaCS1/src/echo/scanner/EchoNoSpace.java
/JavaCS1/src/echo/scanner/EchoTester.java
/JavaCS1/src/echo/sample.txt
import java.io.*;
import java.util.StringTokenizer;
public class EchoNoSpace extends Echo
{
// the number of the words counted
private int wordCount;
public EchoNoSpace (String datafile) throws IOException
{
super( datafile);
wordCount=0;
}
// Prints the given line without any spaces.
// Overrides the processLine method in Echo class
public void processLine(String line){
put your code here
} //end main
} //end class
3. Now let's extend the Echo class to a class called EchoLongest, which includes a method called printLongest(). This method prints the longest line in the file. (Look at the file EchoTester.java below. It shows a call to printLongest() in main).
We've provided EchoLongest with a String attribute called longest, which will hold the longest line seen so far in the external file. Using the longest variable, implement processLine() in the EchoLongest class so that printLongest() behaves correctly when called from the EchoTester class. (Note: in case of ties, your code should return the first occurrence of a longest line.)
/JavaCS1/src/echo/scanner/Echo.java
/JavaCS1/src/echo/scanner/EchoLongest.java
/JavaCS1/src/echo/scanner/EchoTester.java
/JavaCS1/src/echo/sample.txt
import java.io.*;
public class EchoLongest extends Echo
{
// the current longest line
private String longest;
public EchoLongest (String datafile) throws IOException
{
super(datafile);
longest="";
}
// Sets the given line as the current longest if the line is
// longer than the value stored in the longest.
// Overrides the processLine method in Echo class
public void processLine(String line){
put your code here
}
public void printLongest(){
System.out.print(longest);
} //end method
} //end class
4.
Suppose an external file is made up entirely of integers. In the model we've been using in this unit, the file is actually read in, line by line, as a sequence of Strings. Using a StringTokenizer inside processLine() method can produce individual tokens that look like numbers, but are actually Strings that are composed of digits. To convert each token from String format to integer format, you need to use the parseInt() method from the Integer class. Thus
int birthYear = Integer.parseInt("1983");
correctly stores the integer 1983 into the birthYear integer variable.
For this assignment you should create a complete processLine method in the EchoSum class, so that it transforms each number in each line to integer format, and then sums the entries on the line and prints their sum on a separate line. For example, if the external text file looks like this:
1 2 3 4 3 4 7 1 112 3
your program should display this:
10265
/JavaCS1/src/echo/scanner/Echo.java
/JavaCS1/src/echo/scanner/EchoSum.java
/JavaCS1/src/echo/scanner/EchoTester.java
/JavaCS1/src/echo/number.txt
import java.io.*;
import java.util.*;
public class EchoSum extends Echo
{
public EchoSum (String datafile) throws IOException
{
super(datafile);
}
// Prints the sum of each line.
public void processLine(String line){
put your code here
} //end method
} //end class
Echo class:
import java.util.Scanner;
import java.io.*;
public class Echo{
String fileName; // external file name
Scanner scan; // Scanner object for reading from external file
public Echo(String f) throws IOException
{
fileName = f;
scan = new Scanner(new FileReader(fileName));
}
// reads lines, hands each to processLine
public void readLines(){
while(scan.hasNext()){
processLine(scan.nextLine());
}
scan.close();
}
// does the real processing work
public void processLine(String line){
System.out.println(line);
}
}
1 answer
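The EchoNoSpace exercise above hinges on one idea: a StringTokenizer splits a line on whitespace, so appending the tokens back-to-back removes the spaces, and counting the tokens as you go gives the word count. A hedged sketch of that logic in isolation, without the file-reading scaffolding (the class name here is made up; in the real exercise this body goes inside processLine()):

```java
import java.util.StringTokenizer;

public class NoSpaceSketch {
    static int wordCount = 0;   // mirrors the wordCount attribute in EchoNoSpace

    static String processLine(String line) {
        StringBuilder out = new StringBuilder();
        StringTokenizer tokens = new StringTokenizer(line);
        while (tokens.hasMoreTokens()) {
            out.append(tokens.nextToken());   // no separator appended: spaces vanish
            wordCount++;                      // one token = one word
        }
        return out.toString();
    }
}
```

The same pattern adapts to the other variants: EchoDouble just prints the line followed by a blank line, and EchoSum would call Integer.parseInt() on each token and accumulate a sum instead of appending.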
- Anonymous asked: I am currently attempting to write a method for a maze. I pass the starting point to the method.
1. I check to see it is within bounds if it isn't then it doesn't work
2. I check to see if it is the end point. If it is, then exit the method. If it isn't:
3. I check up down left and right
My code is:
public boolean solveMaze(int x, int y)
{
boolean solve=false;
if ( x<0 || x>=mazeSolve.length || y<0 || y>=mazeSolve[0].length )
solve=false;
if (x == endx && y==endy)//base case
{
solve = true; //the maze is solved
}
if(!solve){
if(mazeSolve[x+1][y]=='.')//moves down
solve=solveMaze(x+1,y);
if(mazeSolve[x][y+1]=='.')//moves right
solve=solveMaze(x,y+1);
if(mazeSolve[x-1][y]=='.')//moves up
solve=solveMaze(x-1,y);
if(mazeSolve[x][y-1]=='.')//moves left
solve=solveMaze(x,y-1);
}
return solve;
}//end solveMaze
It compiles but when I run it I get:
lines 88 and 92 are:
Line 88:
solve=solveMaze(x+1,y);
Line 92:
solve=solveMaze(x-1,y);
I can't get this right... please someone help!!
1 answer
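The crash at the two recursive calls is the key symptom: the code sets `solve=false` on out-of-bounds coordinates but keeps going, so `mazeSolve[x+1][y]` itself can throw, and because cells are never marked as visited, the recursion can also ping-pong between two cells forever. A hedged sketch of the fixes (the grid and end coordinates are passed in here as stand-ins for the poster's fields, to keep it testable):

```java
public class SolveMazeFixed {
    static boolean solveMaze(char[][] mazeSolve, int x, int y, int endx, int endy) {
        if (x < 0 || x >= mazeSolve.length || y < 0 || y >= mazeSolve[0].length)
            return false;                  // out of bounds: give up this path NOW
        if (x == endx && y == endy)
            return true;                   // base case: the maze is solved
        if (mazeSolve[x][y] != '.')
            return false;                  // wall, or a cell we already visited
        mazeSolve[x][y] = 'x';             // mark visited so we never loop forever
        return solveMaze(mazeSolve, x + 1, y, endx, endy)   // || short-circuits:
            || solveMaze(mazeSolve, x, y + 1, endx, endy)   // stop at first success
            || solveMaze(mazeSolve, x - 1, y, endx, endy)
            || solveMaze(mazeSolve, x, y - 1, endx, endy);
    }
}
```

Returning immediately on the bounds check, rather than just setting a flag, is what prevents the exception the poster is seeing at lines 88 and 92.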
- Anonymous asked:
main()
{
int i = 0;
if (1 % 2)
i += 2;
else
i++;
}
What is the final value of i?
It doesn't make any sense to me without conditional operators included.
1 answer
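The confusion is that in C, `if (1 % 2)` needs no comparison operator: any nonzero value counts as true. Since 1 % 2 is 1, the condition holds, `i += 2` runs, and i ends up as 2. Java requires an actual boolean, so the sketch below spells the comparison out:

```java
public class RemainderTrace {
    static int finalValueOfI() {
        int i = 0;
        if (1 % 2 != 0)   // 1 % 2 == 1, which C treats as "true" directly
            i += 2;       // this branch runs
        else
            i++;          // never reached
        return i;
    }
}
```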
- alienbob21 asked:
Write this in C++ code and compile to make sure it has no warnings. I'd really appreciate it!
Using the PersonType class, extend it to keep track of when the person is created in the constructor.
Add a constructor allowing specification of the time (year, month, day, hour, minute)
Use the time.cpp for code which gets the current time to allow the current constructor to default to the present time.
Person type class files
//personType.h
#include <string>
using namespace std;

class personType
{
public:
    void print() const;
    //Outputs firstName and lastName
    void setName(string first, string last);
    //Sets firstName and lastName according to the parameters
    string getFirstName() const;
    //Returns firstName
    string getLastName() const;
    //Returns lastName
personType(string first = "", string last = "");
//Constructor
//Sets firstName and lastName according to the parameters.
//The default values of the parameters are null strings.
//Postcondition: firstName = first; lastName = last
private:
string firstName; //variable to store the first name
string lastName; //variable to store the last name
};
//personTypeImp.cpp
#include <iostream>
#include <string>
#include "personType.h"
using namespace std;
void personType::print() const
{
cout << firstName << " " << lastName;
}
void personType::setName(string first, string last)
{
firstName = first;
lastName = last;
}
string personType::getFirstName() const
{
return firstName;
}
string personType::getLastName() const
{
return lastName;
}
//constructor
personType::personType(string first, string last)
{
firstName = first;
lastName = last;
}
#include <iostream>
#include <ctime>
using namespace std;
time.cpp files
int main()
{
time_t rawtime;
time ( &rawtime );
struct tm * timeinfo = localtime ( &rawtime );
cout << " Hour: " << timeinfo->tm_hour << endl
<< " Minute: " << timeinfo->tm_min << endl
<< " Second: " << timeinfo->tm_sec << endl
<< " Day of the month: " << timeinfo->tm_mday << endl
<< " Month: " << timeinfo->tm_mon << endl
<< " Year: " << timeinfo->tm_year << endl
<< " Day of the week: " << timeinfo->tm_wday << endl
<< " Day of the year: " << timeinfo->tm_yday << endl
<< " Is daylight saving time: " << timeinfo->tm_isdst << endl;
system("pause");
return 0;
}
//Test Program personType
#include <iostream>
#include <string>
#include "personType.h"
using namespace std;
int main()
{
personType student("Lisa", "Regan");
student.print();
cout << endl;
return 0;
}
1 answer
http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2011-july-21
Many of you will have encountered scenarios that require sending multiple attachments within a single mail message. By default, however, the Mail Adapter names every attachment "Untitled.xml", which is not acceptable.
In some cases we also need to send data to other mail servers, such as Domino (Lotus Notes), which may interpret attachments differently from the widely used Microsoft Outlook.
There are many blogs on SDN covering this with Java mappings and Java UDFs, including one by my good friend Praveen Gujjeti.
Now let's see how we can achieve this without any coding at all, using only standard modules.
Scenario: let's say there are 5 files. One of them acts as the main payload and contains the From and To email addresses, the subject, and the body of the mail. The rest are text files that need to be transmitted to the recipients as mail attachments. The files are as below.
File Names: Email.xml ,Email_TextFile1.txt,Email_TextFile2.txt,Email_TextFile3.txt,Email_TextFile4.txt
In here “Email.xml” will act as the main payload file which will contain the Email address, subject and body of the mail and using Mail packaging we will extract the information to the receiver Mail adapter.
And Email_TextFile1.txt, Email_TextFile2.txt, Email_TextFile3.txt, and Email_TextFile4.txt are the files that need to be attached to the email.
For Information on Mail Packaging ref: Receiver Mail Adapter
Below I will show you 4 steps:
1. How to configure the sender adapter to pick up additional files as attachments.
2. Message mapping for mail packaging.
3. How to get the attachment names from the payload in the Integration Engine.
4. Receiver mail adapter configuration for getting multiple attachments in the proper format.
Sender Adapter Configuration for picking multiple files:
Things to remember:
- Additional Files only work in NFS mode, so make sure you find a way to temporarily place the files in the NFS of PI.
- The "*.optional" property ensures that the main payload "Email.xml" is not processed until all attachments are available for pickup from the location. If set to YES, the main payload will be processed by the channel even if the additional files do not exist.
Message Mapping Configuration for Mail packaging:
Things to remember:
- The receiver mail structure is standard, so you have to define the Message Type "Mail" under namespace “”; we cannot use any other namespace or Message Type when sending parameters to the mail adapter using mail packaging.
- Make sure from and To address are specified with valid email id’s.
- Specify Content_Type as “text/plain” without quotes.
- Make sure email addresses are separated with a semicolon, like: abc@xyz.com;fgh@xyz.com
- The Subject and Content fields contain the intended subject and content of the mail, respectively.
How to Get Attachment Names from SXI_MONITOR
This is important. Here we will find the Attachment names of the files which were picked up from the source. (these will be same as what we specified in File List in sender file adapter).
You can confirm the same from SXMB_MONI–> Open the Message –> Inbound Message –>Soap Body –> Manifest
Here in the XML you will be able to see the attachment Names.
If you study it carefully, the ApplicationAttachment (additional files) names have come through without any change (exactly as specified in the File List of the sender communication channel).
The main XML (Application) file Email.xml has been renamed to “MainDocument” as this is the main payload of the message.
Hence in this scenario, we will use MainDocument (to select “Email.xml” file), TextFile1 (to select “TextFile1.txt” file),TextFile2 (to select “TextFile2.txt” file ),TextFIle3 (to Select “TextFile3.txt” file) and TextFile4(to select “TextFile4.txt” file) from the payload to bring them to required format.
Receiver Mail Adapter Configuration:
Things to remember:
- Here make sure the Message Protocol is "XIPAYLOAD", and that Use Mail Package and Keep Attachments are checked, as we are using mail packaging and we do have attachments with the payload.
Now comes the task of preserving the format and content of the text file:
What we are actually doing:
- Select each attachment payload one by one using the PayloadSwapBean.
- Convert the attachment payload selected in step 1 into the main payload (inline) using the MessageTransformationBean module, and explicitly name the file with its appropriate name using the "filename" property.
- Perform steps 1 and 2 for all the remaining attachments in the payload, as shown in the screenshot above.
- swap5 and trans5 are used to make the original "MainDocument" payload the main payload again at the end, as it must be passed to the Mail adapter module: it contains the email address, subject, and content of the mail, which the adapter needs (since we are using mail packaging). This step is important.
Mail Result:
Alternate Scenario:
What if the attachment file names are dynamic and carry a timestamp or similar? In that case:
1. In the sender adapter, use the PayloadZipBean to zip all the files into one zip file.
2. Use a UDF to read the zip file in the mapping, if using mail packaging.
3. Perform the same operation in the receiver mail adapter mentioned above to rename the zip file attachment.
In this way the zip file will contain multiple files within it unchanged.
Senthilrpakash Selvaraj
https://blogs.sap.com/2013/12/19/sending-multiple-attachments-within-a-single-receiver-mail-adapter-in-required-format-txtbinjpgetc/
The extern keyword is used when you are using global variables. You never actually use it, though, because you never actually use global variables.
static has two uses - if you see a static function and it is not part of a class, then it was probably written before the use of static like this was deprecated. It means the same thing as writing the function inside an anonymous namespace; it is only visible to that source file and no others.
A static member function or member variable in a class means it is associated with the class and not any one instance. You can access it with NameOfClass::NameOfMember.
void Calc1::Sub1(){}? And the same for
void Calc1::Sub2(int Num){}
Calc1::Sub2, see above.
extern keyword in headers. Example:
http://www.cplusplus.com/forum/beginner/93429/
.5: Objects Everywhere
About This Page
Questions Answered: What can I do with a
String object? How do
I parse string data — a star catalog, say? So,
Strings are
objects — what else is an object?
Topics: Several earlier topics are revisited from an object-oriented perspective. Secondary topics include operator notation, string interpolation, and package objects.
What Will I Do? Read and work on short assignments.
Rough Estimate of Workload: Two hours?
Points Available: A55.
Related Projects: Miscellaneous, Stars.
Notes: This chapter makes occasional use of sound, so speakers or headphones are recommended. They aren’t strictly necessary, though.
Introduction
In Chapter 2.1, we undertook to learn object-oriented programming. Since then, you’ve
created many classes and objects and used classes and objects created by others. Along
the way, it has turned out that many of the basic building blocks that we use in Scala
are objects and have useful methods; these objects include vectors and buffers (Chapter 4.1) and Options (Chapter 4.2).
Scala is a notably pure object-oriented language in the sense that all values are objects and the operations associated with those values are methods.
How is that? After all, we’ve also used “standalone” functions such as max and min
and written such functions ourselves. They weren’t methods of any object, right? And we have Ints and Booleans and the like, which work with operators rather than methods, right?
We’ll see.
Numbers as Objects
The data type Int represents integer values. This type, too, is in fact a class, and individual integers are Int objects, even though we haven’t so far viewed them from that perspective in O1.
It’s true that Int isn’t quite a regular class. One unusual thing about this type is that you don’t need to instantiate it with new; instead, you get new values of type Int from literals. The Int class is defined as part of the core package scala, which makes it an inseparable part of the language itself. Some of the classes in this package, Int being one of them, are associated with “unusual” features such as a literal notation.
That being said, Int is a perfectly normal class in the sense that it defines a number of methods that you can invoke on any Int object.
val number = 17
number: Int = 17

number.toBinaryString
res0: String = 10001
What we did there is call toBinaryString, a method that returns a description of the integer in the binary (base 2) number system. The base-ten integer 17 happens to be 10001 in binary format.
You can even invoke a method on an integer literal:
123.toBinaryString
res1: String = 1111011
The
to method
Int objects have a method called
to, which returns an object that represents all the
numbers between the target object (
2 in the example below) and the parameter object (
6).
val numbersFromTwoToSix = 2.to(6)
numbersFromTwoToSix: Range.Inclusive = Range 2 to 6
numbersFromTwoToSix.toVector
res2: Vector[Int] = Vector(2, 3, 4, 5, 6)
The Range in our example contains the numbers 2, 3, 4, 5, and 6 in ascending order, which is easier to see by copying the range’s contents into a vector. (The vector takes up more space in the computer’s memory than the Range, because it stores each element separately whereas a Range stores only the first and last elements.)
We’ll return to
to a bit further down. And we’ll put it to good use in Chapter 5.4.
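Besides to, Int objects have closely related methods such as until (which excludes the end point) and by (which sets the step between consecutive numbers). Here is a small sketch of the three side by side:

```scala
val inclusive = 2.to(6)         // contains 2, 3, 4, 5, 6
val exclusive = 2.until(6)      // contains 2, 3, 4, 5; the end point is excluded
val everyOther = 2.to(6).by(2)  // contains 2, 4, 6; by sets the step

println(inclusive.toVector)   // Vector(2, 3, 4, 5, 6)
println(exclusive.toVector)   // Vector(2, 3, 4, 5)
println(everyOther.toVector)  // Vector(2, 4, 6)
```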
The
toDouble method
Here is an example of
toDouble, a frequently used method on
Int objects. It
returns the
Double that corresponds to the
Int.
val dividend = 100
dividend: Int = 100
val divisor = 7
divisor: Int = 7
val integerResult = dividend / divisor
integerResult: Int = 14
val morePrecisely = dividend.toDouble / divisor
morePrecisely: Double = 14.285714285714286
val resultAsDouble = (dividend / divisor).toDouble
resultAsDouble: Double = 14.0
As you know, we don’t strictly need
toDouble to do all that: we could have written
1.0 * dividend / divisor, for example. But
toDouble says straight out what we’re
after.
abs,
min, etc.
Int objects also have methods for many familiar mathematical operations. For example,
we can obtain a number’s absolute value or select the smaller of two integers as shown
below.
-100.abs
res3: Int = 100
val number = 17
number: Int = 17
number.min(100)
res4: Int = 17
Operators as Methods
So, numbers are objects and have methods. How do operators fit in?
Look at the documentation of class
Int in the Scala API:
Browsing the documentation, you’ll find the methods introduced above:
toDouble,
to,
abs, and so forth. In the same list, you’ll find methods whose symbolic names look
very familiar:
!=,
*,
+, etc. (You don’t need to otherwise understand the document
at this point.)
Let’s try something:
val number = 10
number: Int = 10
number.+(5)
res5: Int = 15
(1).+(1)
res6: Int = 2
10.!=(number)
res7: Boolean = false
These, too, are methods of class Int. We can tell an Int object to execute these methods as illustrated. The output is familiar; it’s just that now we used this somewhat more awkward notation to produce it.
Does that mean that we have an operator
+, as in
number + another, and, separately,
a method
+, as in
number.+(another)?
No. What we have is a single method and two ways to write the method call. As it turns
out, the Scala language permits us to leave out the dot and the brackets when calling a
method that takes exactly one parameter. The expressions
number + another and
number.+(another) mean precisely the same thing. Operators are actually methods.
The concept of operator doesn’t even officially exist in Scala. (In many other languages, it does.) But it can be convenient to think of certain Scala methods — such as the arithmetic methods on numbers — as operators and omit the extra punctuation when we invoke them.
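The equivalence can be checked directly: each pair below contains two spellings of the very same method call.

```scala
val number = 6
val viaOperator = number + 4    // operator notation
val viaDot = number.+(4)        // dot notation
println(viaOperator == viaDot)  // true

val rangeOp = 1 to 3            // to in operator notation
val rangeDot = 1.to(3)          // to in dot notation
println(rangeOp == rangeDot)    // true
```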
Dot notation vs. operator notation
You saw that we can omit the dot and the brackets when calling methods such as
+.
The same also goes for other single-parameter methods. For example, our
Category
class could contain either one of these expressions:
newExperience.chooseBetter(this.fave)
newExperience chooseBetter this.fave
The former is known as dot notation (pistenotaatio) and the latter as operator notation or infix operator notation (operaattorinotaatio).
Each notation has its benefits. Choosing between them is in part a matter of taste.
In this ebook, we use the operator notation very sparingly. This is because dot notation works consistently for all parameter counts and because it highlights the target object and parameter expressions better, which is nice for learning.
We’ll mostly use operator notation for arithmetic, comparisons, and logic operators; we’ll
also use it in a handful of other places where it’s particularly natural. For example, most
of us would probably agree that it’s nicer to call
to on an
Int by writing
1 to 10
rather than
1.to(10).
In your own programs, you’re free to adopt either notation. We recommend that beginners, especially, follow the ebook’s convention. In any case, you should be aware of both notations so that you can read Scala programs written by others. Outside O1, the operator notation is somewhat more common than it is in this ebook.
“Operators” on Booleans and buffers
Logical operators (Chapter 4.4) are methods on
Boolean objects.
The buffer “operators”
+= and
-= are methods, too. You can use either operator or
dot notation to invoke them. The former is probably preferred by all and sundry.
val myBuffer = Buffer("first", "second", "third")
res8: Buffer[String] = ArrayBuffer(first, second, third)
myBuffer += "fourth"
myBuffer.+=("fifth")
GoodStuff revisited
In earlier chapters, you’ve seen interactive animations where GoodStuff’s objects communicate with each other via messages. In those diagrams, buffers or integers didn’t feature as objects.
As is clear by now, they too are objects: for example, an
Experience object calls the
> method on an
Int object to
find out which of two integers is bigger (so as to find out which
experience is better). Communication between objects thus plays
an even greater part in the program than previously discussed.
String Objects and Their Methods
At this point, it barely registers as news that
String is a class, string values are
its instances, and string operators are its methods.
"cat".+("fish")res9: String = catfish
A vector contains elements of some type, each at its own index. A string contains
characters each at its own index. There is an obvious analogy between the two concepts,
and it makes sense that strings have many of the same methods that vectors and buffers
do. Strings also come with additional methods that are specifically designed for
working on character data. The examples and mini-assignments below will introduce you to
some of the methods defined on Scala’s
String objects.
String manipulation is very common across different sorts of programs. You’ll find the methods useful in many future chapters, and in this one, too. Once again, the point is not that you memorize the details of every method; with practice, you’ll learn to remember the common ones. The important thing is for you to get an overview of what sorts of tools are available to you.
The
length of a string
The
length method returns the number of characters in a string.
"llama".lengthres10: Int = 5
Getting a number from a string:
toInt and
toDouble
Often, you’ll need to interpret a string that contains numerical characters as a number.
You might, say, receive a textual input from the user and turn it into a value that you
can use in numerical calculations. For example, given a string that consists of the
characters
'1', '0', and
'5', you’d like to produce the
Int value
105.
The methods
toInt and
toDouble do the trick:
val digitsInText = "105"
digitsInText: String = 105
digitsInText.toInt
res11: Int = 105
digitsInText.toDouble
res12: Double = 105.0
"a hundred and five".toInt
java.lang.NumberFormatException: For input string: "a hundred and five" (etc.)
Removing whitespace:
trim
The
trim method is also handy when you need to process data that you’ve obtained
from an external source. It returns a string in which space characters and other
whitespace have been removed from both ends.
var text = "  hello there  "
text: String = "  hello there  "
println("The text is: " + text + ".")
The text is:   hello there  .
println("The text is: " + text.trim + ".")
The text is: hello there.
If there is no leading or trailing whitespace,
trim returns an untouched string:
text = "poodle"text: String = poodle println("The text is: " + text.trim + ".")The text is: poodle.
trim can be summarized as “removing empty characters from a string”. To be more
specific, however, the method does not actually change the original string but
produces a new string that contains the same characters as the original except for the
omitted whitespace. All methods on
String objects are effect-free,
trim included.
Every
String object is immutable.
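This immutability is easy to demonstrate: calling trim produces a new string and leaves the original untouched.

```scala
val original = "  padded  "
val trimmed = original.trim  // a new String with the whitespace removed
println(trimmed)             // prints: padded
println(original)            // still "  padded  " -- trim changed nothing
```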
Picking out a character:
apply and
lift
The
apply method takes an index as a parameter and retrieves the corresponding character
from the string. The numbering starts at zero, so the following code checks the fourth and
fifth letters in "llama":
"llama".apply(3)res13: Char = m "llama".apply(4)res14: Char = a
Each character is an object of the Char class. A String is a sequence of zero or more characters; a Char is exactly one character.
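The distinction shows up in the literal notation, too: single quotes produce a Char, double quotes a String, and picking out a character from a string yields a Char.

```scala
val letter: Char = 'a'      // a Char literal uses single quotes
val word: String = "a"      // a one-character String is still a String, not a Char
val picked: Char = word(0)  // picking out a character yields a Char
println(letter == picked)   // true
```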
There’s also a more succinct way to access a character at an index. Try
"llama"(3), for
instance.
And there is a safer way: the
lift method you know from other collections.
"llama".lift(3)res15: Option[Char] = Some(m) "llama".lift(4) res12: Option[Char] = Some(a) "llama".lift(123)res16: Option[Char] = None
Splitting a string: split
The
split method divides a string into parts at the given separator. In this example,
the separator is a space:
val quotation = "A class is where we teach objects how to behave. ---Richard Pattis"
quotation: String = A class is where we teach objects how to behave. ---Richard Pattis
val words = quotation.split(" ")
words: Array[String] = Array(A, class, is, where, we, teach, objects, how, to, behave., ---Richard, Pattis)
words(1)
res18: String = class
split returns an Array, a collection of elements similar to a vector or a buffer.
Any string can serve as a separator:
quotation.split("ch")res18: Array[String] = Array(A class is where we tea, " objects how to behave. ---Ri", ard Pattis)
Letter case:
toUpperCase and
toLowerCase
val sevenNationArmy = "[29]<<e--------e--g--. e--. d--. c-----------<h----------->e--------e--g--. e--. d--. c---d---c---<h-----------/360"
sevenNationArmy: String = [29]<<e--------e--g--. e--. d--. c-----------<h----------->e--------e--g--. e--. d--. c---d---c---<h-----------/360
o1.play(sevenNationArmy)
That needs to be played louder.
The function
o1.play plays upper-case notes louder than lower-case ones. There is an
easy way to produce upper-case letters:
o1.play(sevenNationArmy.toUpperCase)
Here’s an additional example of the method and its sibling
toLowerCase:
"Little BIG Man".toUpperCaseres19: String = LITTLE BIG MAN "Little BIG Man".toLowerCaseres20: String = little big man
Comparing strings:
compareTo
Searching for characters:
indexOf and
contains
The methods
indexOf and
contains work much like the corresponding methods on vectors
and buffers (from Chapter 4.1). You can use them to find out if a given substring occurs
within a longer string:
"fragrant".contains("grant")res21: Boolean = true "fragrant".contains("llama")res22: Boolean = false "fragrant".indexOf("grant")res23: Int = 3 "fragrant".indexOf("llama")res24: Int = -1
take,
drop, and company
take,
drop,
head, and
tail are available on strings, too. These methods and
their variants are analogous to the collection methods from Chapter 4.1. Here are a few
examples:
val fromTheLeft = "fragrant".take(4)
fromTheLeft: String = frag
val fromTheRight = "fragrant".drop(5)
fromTheRight: String = ant
"fragrant".take(0)
res26: String = ""
"fragrant".take(100)
res27: String = fragrant
"fragrant".takeRight(5)
res28: String = grant
"fragrant".dropRight(5)
res29: String = fra
val first = "fragrant".head
first: Char = f
val rest = "fragrant".tail
rest: String = ragrant
val firstIfItExists = "fragrant".headOption
firstIfItExists: Option[Char] = Some(f)
val firstOfEmptyString = "fragrant".drop(10).headOption
firstOfEmptyString: Option[Char] = None
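These methods can also be chained: because each returns a new string, the result of one call can serve as the target of the next. A small sketch:

```scala
val word = "fragrant"
val tailEnd = word.drop(3)         // drop "fra", leaving "grant"
val middle = word.drop(3).take(2)  // drop "fra", then keep two characters: "gr"
println(tailEnd)  // grant
println(middle)   // gr
```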
Optional assignment: string insertion
In
misc.scala of project Miscellaneous, write an
insert function that adds a
string into a specified location within a target string. It should work as illustrated
below.
val target = "viking"target: String = viking" import o1.misc._import o1.misc._ insert("olinma", target, 2)res29: String = violinmaking insert("!!!", target, 0)res30: String = !!!viking insert("!!!", target, 100)res31: String = viking!!! insert("!!!", target, -100)res32: String = !!!viking
A+ presents the exercise submission form here.
Some Special Features of Strings
Special Characters within a String
Chapter 4.1 mentioned in passing that you can include a newline character (a line break)
in a string by writing
\n. There’s a similar notation for a number of other special
characters. Here are a few of them:
For example:
println("When I beheld him in the " + "desert vast,\n\"Have pity " + "on me,\" unto him I cried,\n" + "\"Whiche'er thou art, " + "shade or real man!\"")
That produces the following output:
When I beheld him in the desert vast,
"Have pity on me," unto him I cried,
"Whiche'er thou art, shade or real man!"
As you can see, the line breaks appear where the newline characters are.
An alternative is to mark the string’s beginning and end with three double-quote characters. This tells Scala to interpret everything in between, special characters and all, as individual characters, so that there’s no need to use the backslash as an “escape”. The following code produces the same output as the one above.
println("""When I beheld him in the desert vast, "Have pity on me," unto him I cried, "Whiche'er thou art, shade or real man!"""")
Embedding values in a string
Suppose we have a variable
number that stores the integer 10. The expression below
uses a string representation of the variable’s value as part of a longer string. There
is nothing new about this yet.
"The variable stores " + number + ", which is slightly less than " + (number + 1) + "."res33: String = The variable stores 10, which is slightly less than 11.
Sooner or later, you’ll run into Scala code that doesn’t use the plus operator to embed
values within a string but an
s character and dollar signs instead. For instance,
here’s another way to produce the same string as above:
s"The variable stores $number, which is slightly less than ${number + 1}."res34: String = The variable stores 10, which is slightly less than 11.
Note the s just before the leading quotation mark. It indicates that we’re embedding values within a string literal; this is known as string interpolation.
In this ebook, we’ll mostly stick with the plus operator.
Assignment:
tempo
Introduction
The example below uses a function that takes in a similar string of music as
o1.play does
and returns the tempo of that music as an integer.
import o1.misc._
import o1.misc._
tempo("cccedddfeeddc---/180")
res36: Int = 180
tempo(s"""[72]${" "*96}d${"-"*39}e---f---d${"-"*39}e---f---d${"-"*15}e---f---d${"-"*15}e---f--- f#-----------g${"-"*17}&[62]${" "*104}(>c<afd)--(>c<afd)--------(afdc)--(>c<afd)-------(afdc)${"-"*17} (>c<abfd)--(>c<abfd)--------(abfdc)--(>c<abfd)-------(abfdc)${"-"*17} (hbgfd)${"-"*17}(hbg>fd<)${"-"*22}(>ce<hbg)---- (>d<hbge)---------- (>db<hbge)---------- (>c<hbg#e)----- &[29]${" "*96}<<<${"c-----------"*11}cb-----------<${"hb-----------"*3}hb-----&P:a-----------${"a--------a--a-----------"*11}/480""")
res37: Int = 480
The function completely ignores the string’s actual musical content; it just extracts the tempo from the end. If there’s no tempo recorded in the string, the function returns the integer 120:
tempo("cccedddfeeddc---")res37: Int = 120
Task description
Implement
tempo in
misc.scala of project Miscellaneous.
Instructions and hints
- The function must return an
Int, not a
String that contains the digits. The function needs to divide the string, select one of the parts, and interpret it as an integer.
- There are a great many ways to implement the function. Just the
String methods introduced in this chapter suggest multiple solutions. Can you find more than one?
- You may assume that the given string has at most a single slash character
/, no more. You may also assume that in case there is a slash in the string, it’s followed by one or more digits and nothing else. In other words, you don’t have to worry about invalid inputs.
Submission form
A+ presents the exercise submission form here.
A better representation for songs?
In this ebook, you’ve seen some fairly long strings being passed
as parameters to
o1.play. Writing and editing strings such as
these is laborious and attracts bugs. Fortunately, you don’t have to.
If you want, you can reflect on what functions you might create to help you notate songs in O1’s string format.
You may also: 1) consider what other formats we could use to represent musical notes in a program; 2) envision an application where an end user can easily write and save songs; and 3) find out what solutions are already out there.
Assignment: Star Maps (Part 3 of 4: Star Catalogues)
We concluded Chapter 4.3’s Stars assignment with this observation:
The folders
test and
northern of the Stars project each contain files named
stars.csv.
Take a look.
- The file under
test contains a semicolon-delimited description of a few imaginary stars. You’ll learn what each number means in just a moment.
- The file under
northern contains a much longer list of actual stars (originally from the VizieR service).
For now, feel free to ignore the other files under
northern.
We’ll now improve our program so that it 1) reads in the star data from the files
as strings, 2) creates
Star objects to represent the information encoded in those
strings, and 3) displays all the stars as graphics. The last of the three subproblems
you already solved in Chapter 4.3. The second subproblem of interpreting the string
data you’ll solve now. The first subproblem of actually handling the file has been
solved for you in the given code. (Further materials on file handling are available in
Chapter 12.2.)
Task description
Run
StarryApp. Be disappointed by its flat black output. Exit the app.
Actually, the app is very close to working nicely, but a crucial method of the
SkyFiles
singleton object is missing. That method,
parseStarInfo, does what was just discussed:
takes in a string that describes a star (as the lines of
stars.csv do) and creates a
corresponding
Star object.
Read the documentation for
parseStarInfo. It explains what each part of the input
string is expected to contain and which of those parts are significant for your task.
Then implement the method.
Instructions and hints
- The method has been documented as part of the
SkyFiles object in package
o1.stars.io.
- The folder that the app loads its star data from is named at the top of
StarryApp. As given, the app uses the
test folder, which is a good choice until you feel confident that your method works. Later, you can change the folder to
northern to produce a more impressive star map.
- You do not have to consider what happens if the input string doesn’t consist of six or seven semicolon-separated parts or otherwise fails to adhere to the specification. Assume that the given data string is valid.
- The program will crash with a runtime error if the selected
stars.csv contains invalid data. This would obviously be unacceptable in a proper real-world application, but it will do for present purposes.
- You can find helpful methods in this very chapter.
- Note that some of the star data (the z coordinate and one of the ID numbers) are irrelevant to this assignment.
Practice on string interpolation
If you wish, you can try string interpolation (see above) as you
implement
toString in class
Star.
Note that if you want to embed the value of
this.myVariable,
you’ll need to use curly brackets or omit
this: write either
s"the value is: ${this.myVariable}'" or
s"the value is: $myVariable".
To further explore this technique, take a look at
toString in class
StarCoords, which rounds the coordinates to two decimal points.
That method uses an
f notation that you can read more about in
an article on different forms of string interpolation.
Submission form
A+ presents the exercise submission form here.
On Package Objects and
import Statements
The remaining topics in this chapter aren’t particularly important for O1 or indeed for learning to program in general, but they can be nice to know for programming in Scala.
Objects as packages
What about the “separate functions” that you have used and written,
such as
sqrt (Chapter 1.6) or
toMeters (Chapter 1.7) or
tempo
(just now)? Those functions weren’t attached to any object, so do
they count as object-oriented programming?
In a way, yes.
Technically, even those functions were methods, even though it didn’t seem like it. This fact has its basis in two things.
One: in Scala, we can import methods from an object. For
example,
import mysingleton._ imports all the methods of the
mysingleton object. That means that we can then omit the dot
and the object’s name as we call the method, and our method
calls won’t look like method calls.
Two: we can define a singleton object that is meant to be
imported from and that contains an assortment of methods that
are more or less related to each other. Such an object is termed a
package object (pakkausolio).
In this chapter, for example, you defined some functions in an object defined as follows.
package o1

object misc {
  // Your code went here.
}
What you did, then, was define the functions as methods of a
package object named
misc. After importing those methods with
import o1.misc._, you could invoke them as if no target object
was involved at all.
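The same mechanism works for any singleton, as this REPL-style sketch shows. The object name helpers and its methods are made up for illustration:

```scala
// A hypothetical package-like singleton:
object helpers {
  def square(n: Int) = n * n
  def cube(n: Int) = n * n * n
}

import helpers._  // import the object's methods

// After the import, the methods read like standalone functions:
println(square(5))  // 25
println(cube(3))    // 27
```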
The Scala API features various package objects. One of them is
math, which defines
sqrt and other familiar functions.
So, in a technical sense, these functions, too, are methods on singleton objects. This is consistent with Scala’s pure object-oriented design.
However, in practice, we commonly don’t think about the methods on package objects as being methods. In a sense, that’s exactly what a package object is: an object whose object-ness we can “overlook”.
On package objects and object-orientation
If we were to design an entire piece of software so that we use only package objects and their methods, our design won’t capture the spirit of object-oriented programming. When using Scala in an object-oriented fashion, it’s customary to make limited use of methods in package objects.
For example, the package object
scala.math is more the exception
than the rule; most of the Scala API relies on classes that we
instantiate. And indeed even many of that package object’s contents
are also available as methods on instances: for example, you can
get the absolute value of a number either “function-style” with
abs(a) or “object-style” with
a.abs. Take your pick.
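The two styles really do coexist for the same operation; here is the absolute-value example spelled out both ways:

```scala
import scala.math.abs  // import the function-style version from the package object

val a = -42
println(abs(a))  // "function-style": 42
println(a.abs)   // "object-style": 42
```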
println,
readLine, and package objects
The familiar
println function is actually a method on a singleton
object named
Predef. This one particular object has an elevated
status in Scala: its methods can be called in any Scala program
without an
import and without an explicit reference to the object.
As for
readLine, you learned to use it in Chapter 2.7 by first
importing scala.io.StdIn._. As you did that, you essentially
used a singleton object
StdIn as a package object.
importing just about anywhere
In the ebook’s examples, we have generally placed any
import
statements at the top of the Scala file. This is a common practice.
However, Scala also lets you
import locally within a particular
class or object or even an individual method.
import myPackage1._

class X {
  import myPackage2._

  def myMethodA = {
    // Here, you can use myPackage1 and myPackage2.
  }

  def myMethodB = {
    import myPackage3._
    // Here, you can use myPackage1, myPackage2, and myPackage3.
  }
}

class Y {
  // Here, you can use only myPackage1.
}
Such local
imports sometimes make code easier to read.
importing from an instance
As noted above, you can use a singleton object like a package and
import its methods. As a matter of fact, you can even do the same to
an instance of a class, if you’re so minded.
class Human(val name: String) {
  val isMortal = true
  def greeting = "Hi, I'm " + this.name
}
defined class Human
val soccy = new Human("Socrates")
soccy: Human = Human@1bd0b5e
import soccy._
import soccy._
greeting
res39: String = Hi, I'm Socrates
The last command issued above is actually a shorthand for
soccy.greeting.
However, in most cases
importing from instances like this is just
liable to make your code harder to read.
Summary of Key Points
- As an object-oriented language, Scala is pure: all values are objects.
Int,
Double, and
String, for example, are classes and values of those types are objects. These objects have a variety of frequently useful methods.
- The so-called operators are actually methods. In Scala, there are two ways to call a method: dot notation and operator notation.
- Links to the glossary: object-oriented programming; dot notation, operator notation; string interpolation; package object.
Does that go for other languages, too?
Scala is certainly not the only language where “everything is an object”. However, it does differ in this respect from some other object-oriented languages in common use. In the Java programming language, for example, objects and classes are used for a variety of data, but some fundamental data such as integers and Booleans are instead represented as so-called primitive types. Scala’s all-encompassing object system eliminates special cases and gives the language a more consistent feel.
Range. A range is a collection of numbers; like a vector, it is immutable.
IM_ARRAY, IM_NEW, IM_NUMBER - memory allocation macros
#include <vips/vips.h>

type-name *IM_NEW( IMAGE *im, type-name )
type-name *IM_ARRAY( IMAGE *im, int number, type-name )
int IM_NUMBER( array )
IM_NEW, IM_NUMBER and IM_ARRAY are macros built on im_malloc(3) which make memory allocation slightly easier. Given a type name, IM_NEW returns a pointer to a piece of memory large enough to hold an object of that type. IM_ARRAY works as IM_NEW, but allocates space for a number of objects. Given an array, IM_NUMBER returns the number of elements in that array.

#define IM_NEW(IM,A) ((A *)im_malloc((IM),sizeof(A)))
#define IM_NUMBER(R) (sizeof(R)/sizeof(R[0]))
#define IM_ARRAY(IM,N,T) ((T *)im_malloc((IM),(N) * sizeof(T)))

Both IM_ARRAY and IM_NEW take an image descriptor as their first parameter. Memory is allocated local to this descriptor; that is, when the descriptor is closed, the memory is automatically freed for you. If you pass NULL instead of an image descriptor, memory is allocated globally and is not automatically freed. (NOTE: in versions of VIPS before 7.3, IM_NEW(3) and IM_ARRAY(3) did not have the initial IMAGE parameter. If you are converting an old VIPS 7.2 program, you will need to add a NULL parameter to the start of all IM_NEW(3) and IM_ARRAY(3) parameter lists.) Both macros return NULL on error, setting im_errorstring.

Example:

#include <vips/vips.h>

/* A structure we want to carry about. */
typedef struct { ... } Wombat;

/* A static array of them. */
static Wombat swarm[] = { { ... }, { ... }, { ... } };
static int swarm_size = IM_NUMBER( swarm );

int transform_wombat( IMAGE *in, IMAGE *out )
{
	/* Allocate space for a Wombat. */
	Wombat *mar = IM_NEW( out, Wombat );

	/* Allocate space for a copy of swarm. */
	Wombat *herd = IM_ARRAY( out, swarm_size, Wombat );

	....
}
National Gallery, 1993
im_malloc(3), im_open_local(3).
J. Cupitt - 23/7/93 11 April 1993 IM_ARRAY(3)
Types
C and C++ share a set of common base types and constructs that can easily be used, and others that have subtle differences:
- Integer types such as
char,
short,
int,
long, … share representation and semantics. But beware that
bool (or
_Bool) and enumeration types need special care, see below.
- All floating point types have the same representation, and the syntax for float, double, and perhaps long double is the same. But complex types have different syntax: C adds a specifier to the real base type, whereas C++ has them as template types.
- Array types have the same syntax and representation, but C allows the array bounds to be dynamic, so-called variable length arrays, VLA.
- Structure and union types have the same representation, as long as they don’t declare function members. (C++ calls these POD, plain old data structures.)
- Atomic types have the same representation and semantics, but different syntax. C++ has them as templates. C has two alternative forms, as a type qualifier or as a type specifier.
Booleans
The Boolean type in C is “officially” called _Bool, but a convenience macro exists in <stdbool.h> that defines bool to refer to this same type. In fact, this construct was only invented to ensure backwards compatibility for code that had been written before the introduction of the Boolean type. It might eventually be removed from a future version of the C standard. For C++, using _Bool makes little sense: it is ugly, and introducing into C++ a C feature that is just meant as temporary (though for a looooong time) will not happen.
As a consequence the easiest way to use Booleans is to use
bool throughout, and to ensure that C sees the corresponding header include. This can easily be achieved:
#ifndef __cplusplus
# include <stdbool.h>
#endif

extern bool weAreHappy;
As said, you should not try to include this header in C++; it just makes no sense.
Enumeration types
Plain enumeration types themselves should be compatible between C and C++. Enumeration constants have the same values, but different types: in C they are of type int; in C++ they are of the enumeration type itself.
As long as you use the constants for their values, all should work out perfectly. But if you try to use variables or function parameters of enumeration types, the difficulties start. C and C++ have different rules for implicit conversion from and to these types, so you had better avoid using them here.
Atomics
In C++ an atomic type of some base type is specified as a
template:
extern std::atomic< unsigned > flags;
In C there are two equivalent writings for this:
extern unsigned _Atomic flags;  // an atomic qualifier
extern _Atomic(unsigned) flags; // an atomic specifier
...
In common code between C and C++ the latter can be used to accommodate both languages:
#ifdef __cplusplus
# include <atomic>
# define _Atomic(T) std::atomic< T >
#else
# include <stdatomic.h>
#endif

extern _Atomic(unsigned) flags;
...
Complex types
Again, different complex types are specified as
template types in C++:
extern std::complex< double > angle;
In C the equivalent writing for this is:
extern complex double angle;
Both languages guarantee that these types are laid out as two consecutive objects of their base, the first for the real part the second for the imaginary part.
Unfortunately there is no syntax similar to the atomic specifier above that would allow us to use a simple macro. On the other hand, there are not so many complex types, and just using in-place names for them can save us:

#ifdef __cplusplus
# include <complex>
typedef std::complex< float > cfloat;
typedef std::complex< double > cdouble;
typedef std::complex< long double > cldouble;
# define I (cfloat({ 0, 1 }))
#else
# include <complex.h>
typedef complex float cfloat;
typedef complex double cdouble;
typedef complex long double cldouble;
#endif

extern cdouble angle;
...
cdouble angle = 4.0 + 3.0*I;
Also beware that common code that uses C and C++ complex types should not use the identifier
I for other purposes than the complex root of
-1.
Objects
The ABI compatibility between C and C++ implies that objects that have any of the shared types have the same object representation, that is that their layout of their bytes in memory and the interpretation of these bytes are the same. Syntactically, named objects, that is variables and function parameters, are declared the same. Otherwise the whole idea of a common interface specification would be hopeless.
Temporaries
But C and C++ differ much in their notion of unnamed, temporary objects. In C there are two different sorts of temporary objects:
- Functions whose return type contains an array type return a temporary object such that the array elements can be addressed through pointer arithmetic. Such temporary objects are immutable and cease to exist as soon as the expression that contains the function call is terminated.
- Compound literals are temporaries that are explicitly created. They act as if a variable had been declared in the current scope, and generally all other rules for such variables apply.
In C++, there is no equivalent for the latter. Even temporaries that are explicitly constructed by calling a constructor will in general only be alive during the evaluation of the expression where they appear. Taking references to such objects can extend their lifetime, but the rules here are quite complicated, and references are out of scope for common C and C++ interfaces anyhow.
So using temporaries in interfaces is basically to be ruled out whenever this would be used in C to provide the address of an object to a function that will return that address for further use in the current scope.
See also variable argument lists and macros, below.
Const qualified objects
In C++ you can place
const qualified objects in header files such that they act as constants for the underlying type.
constexpr unsigned const fortytwo = 42u;
This construct (even without
constexpr) is not allowed in C since it will result in the definition of a
fortytwo object in all .o files, and thus violate the one definition rule. You could, kind of, emulate that feature by declaring the object
static, but
- That would result in multiple copies residing at different addresses. Programs that compare pointers to such objects could get confused.
- In C, static objects cannot be used from inline functions.
For the common interface we are bound to macros. For types that have literals this is simple:
#define fortytwo 42u
For structure types, there is no common solution to this. For C you would use a compound literal inside a macro, for C++ you would use a
const qualified global object.
Functions
For the common types that we listed above, the function call ABI on a given platform should be the same. That is, regardless of the language through which we access the interface, the same rules for representing function parameters and return values in hardware registers or on the stack will apply. The first important difference between C and C++ that applies is that C++ has function overloading and therefore has to mangle the types of the arguments into the external name, unless you ask it not to do this. The common idiom to ensure this is:
#ifdef __cplusplus
extern "C" {
#endif
int toto(void);
double hui(char*);
#ifdef __cplusplus
}
#endif
That is we have two C++ zones surrounding the common interface specification, declaring the functions to be “extern” with language interface “C”. The macro
__cplusplus is guaranteed to be undefined by any C compiler and is guaranteed to be defined by any C++ compiler.
Functions without parameters
In C, a function interface that is declared syntactically with an empty parameter list in fact has no prototype. Such a function could take any number and (almost) any type of parameters. There are special rules for how arguments to such functions are converted on the calling side.
A function that is known to receive no arguments must therefore be specified differently, namely with
void as specification for the parameters, such as function
toto, above.
Functions that have VLA parameters
A common C idiom to specify a function that receives a 2-dimensional matrix would look like
void initialize(size_t n, size_t m, double A[n][m]);
There is no syntax in C++ to support this feature, so we cannot use it as such in any common interface specification.
Theoretically C++ could use such a function, too, because the ABI underneath is just two
size_t and a pointer to
double. The C type information for the matrix is only assembled at the beginning of the function execution, the caller has nothing to provide in addition to the arguments.
We could be tempted to present a “fake” interface to C++, that only requests a
double*, but such a cheat can easily backfire, if we use it wrong, accidentally. Better create a small wrapper that receives an appropriate
vector template.
Multidimensional array parameters with
const qualification
The rules for compatibility between types with different qualifications differ between the two languages. In C, a function with a
const qualified 2D matrix can not easily be called with an argument that is not
const qualified. Avoid that in shared interfaces.
const qualified pointer targets are fine, though.
restrict qualification and aliasing
Aliasing rules are different in C and C++, and they must be because of C++’s reference concept. Therefore you must be really careful in what properties you assume for pointers. C has the possibility to specify with
restrict that the object behind a pointer can only be accessed through that pointer alone. This is a powerful contract, and places an important constraint on the caller of a function.
C++ has no equivalent of this. It is often advocated that for C++ one should just define
restrict as a macro that is replaced by nothing. That misses the point to educate the caller of the function to be careful to pass distinct arguments to all the parameters.
Avoid this feature if you can for cross-language interfaces. If you can’t, document the fact that arguments must be distinct thoroughly.
inline functions
Inline has slightly differing semantics between C and C++, but that can usually be mended with some care. The difference lies in the instantiation of such a function, in case the linker needs a copy of it. C++ magically takes care of that, but in C it is up to the programmer to provide exactly one such instantiation in one of the object files that are linked together. So we have to ensure that the library that implements our interface provides such an instantiation.
But, inline functions are first of all functions, so all that is said above about the slippery slope of language differences applies. If you use this feature, keep the functions to a minimum and delegate most of the implementation of the underlying feature to one of the languages.
Variadic functions
Variadic functions are complicated beasts: they have complicated rules for argument conversion (promotion), and they have no inherent mechanism that would help to know the number of arguments that a call has received. Creating new interfaces with that functionality should be avoided. That is, their use should be restricted to the few well established functions of the C library such as
printf or
scanf.
Both languages have features that can replace these, most of the time. Obviously, C++’s variadic templates are out of the scope for a common interface with C. For the common use case of a list of arguments that all have the same
const qualified type, a temporary array parameter can be used through an intermediate macro call, see below.
Type generic interfaces
C and C++ have diametrically opposed strategies to implement type generic function interfaces. C++ has function overloading and default arguments, C has
_Generic primary expressions and uses macros to implement the latter. C’s mechanism is not easily extendable, that is you can only glue together functions for a known list of types. If you want to support a new type, you’d have to change your
_Generic expression or macro.
There is not much common ground, here, so if you want to provide such features for both languages, you would have to implement them differently for both languages.
A simple mechanism would be to create the different functions for the type list in C and use a suitable naming convention, say something like
hu_flt and
hu_dbl. The C code would then use a macro:
#define hu(X) \
  _Generic((X), \
    float: hu_flt, \
    double: hu_dbl) \
  (X)
C++ would just interface this with some
template specializations:
template< typename T > inline auto hu(T x);
template<> inline auto hu< float >(float x) { return hu_flt(x); }
template<> inline auto hu< double >(double x) { return hu_dbl(x); }
Both should result in equally efficient executable code, namely in the compiled program calls to
hu should be directly replaced by the call to the corresponding C function.
If your type generic interface provides compile time constants such as for C with
#define needed(X) \
  _Generic((X), \
    float: 37, \
    double: 51)
you can use a similar mechanism for C++ that uses
constexpr instead of
inline:
template< typename T > constexpr auto needed(T x);
template<> constexpr auto needed< float >(float x) { return 37; }
template<> constexpr auto needed< double >(double x) { return 51; }
Macros
The preprocessor for C and C++ should nowadays produce equivalent text replacements. The intent of both standard committees, C and C++, is to keep them completely in sync. We already have seen some examples above where the preprocessor comes to our rescue when declaring common interfaces.
In particular, relatively recently C++’s preprocessor also has been equipped with
variadic macros. These are macros that can receive a varying number of arguments. To conclude, let's look into a more sophisticated example that provides a variadic interface, but with just the same type for all parameters. Suppose we have a simple function that takes an array of
double:
size_t median_vec(size_t len, double arr[]);
The idea is that we want to provide a similar interface for both languages that just takes a fixed list of values, constructs an array and passes it to the function.
# define ASIZE(...) /* some macro magic that determines the length of the argument list */
#ifndef __cplusplus
# define ARRAY(T, ...) ((T const[]){ __VA_ARGS__ })  // compound literal
#else
# define ARRAY(T, ...) (std::initializer_list< T >({ __VA_ARGS__ }).begin())  // standard initializer array
#endif
#define median(...) median_vec(ASIZE(__VA_ARGS__), ARRAY(double, __VA_ARGS__))
...
size_t med = median(0, 7, a, 33, b, c);
Thank you for a really interesting and concrete post!
“It makes no sense to compile C code with a C++ compiler, and you should look with suspicion at any code or programmer that claims to do so.” – I can see that sentence getting explosive in some contexts :D. And I can also see already the counter “of course you have to be careful, but it can be done and it’s actually Good™”. So it would be great if you could provide some concrete example of how things can go badly that way.
Comment by hmijail — August 8, 2017 @ 09:27
I am happy that you like it.
Honestly, the post is already more complicated and longer than I initially thought 🙂
And in fact it already has some examples for the “careful” part, so I think I will not add on that.
Comment by Jens Gustedt — August 8, 2017 @ 09:39
you can’t pass pointer to const double to median_vec in c++
Comment by Serge Pavlovsky — February 15, 2018 @ 18:06
89 Posts Tagged 'Ruby'
Ruby singleton classes
I was writing some Ruby and I must've had my head stuck in Java, because I wrote something like this:
class Test1
  @x = 1
end
That is very different from this:
class Test2
  def initialize
    @x = 1
  end
end
In the latter case I made a normal instance variable which any object that is_a? Test2 can access. In the former case I made an instance variable of Test1 itself. Classes in Ruby are Objects like any other; I gave the Test1 object itself an instance variable. So:
Test1.new.instance_variables # => []
Test2.new.instance_variables # => ["@x"]
Chapter 24 of the Pickaxe book talks about singleton classes. You can edit objects (add methods and instance variables etc.) on a per-object basis rather than a per-class basis. You can do this because Ruby makes a "virtual" singleton class, makes that singleton class the direct class of your object, and makes your object's original class the superclass of this new singleton class. That way the object's original class is untouched and your meddling doesn't affect any other objects of the same class.
In Test1 above this is what I did. Ruby made a new singleton class, which the Pickaxe calls
Test1, and made an instance variable
@x in it. That instance variable is then accessible as a part of the Test1 object itself. If it were theoretically possible to instantiate another object of type Test1', it would presumably have a
@x too, but this isn't possible.
So say I want to access this
@x that belongs to Test1. The Pickaxe tells you how. In general, if you have an object and you want to access an instance variable of it, you define a "getter" method in your object's class that returns the variable's value. You can use the shortcut
attr_reader to do this.
So if you want to access an instance variable of Test1 itself, you need to define a method in Test1's class (i.e. the singleton class, Test1'). The Pickaxe says to do this:
class Test1
  class << self
    attr_reader :x
  end
end

Test1.x # => 1
It makes perfect sense that you can also do it this way:
class << Test1
  attr_reader :x
end

Test1.x # => 1
The Pickaxe gives a couple reasons why you might want to do this kind of thing. But I'm going to go ahead and label it "black magic" and stay away from it, I think.
Vim regexes
One thing I very much miss from Gentoo is controlling what is compiled into my Vim. You need to enable
perldo and
rubydo support at compiletime. Gentoo had USE flags to do it. In Ubuntu I get perldo but no rubydo by default, which is annoying.
The reason I need
perldo/
rubydo is because Vim's regexes are so inconsistent.
* is special when not escaped, but
+ is special when escaped. You can use
\{x,y} (escape only opening bracket), but you have to use
\( \) (escape both parens), and with
[] it's special when you don't escape either. I simply can't remember these, especially when coding at full speed, and speed is one of the reasons to use Vim in the first place.
Then Vim has
magic and
nomagic. And you can use
\v to set "very magic". Very magic is almost what I want, but you can't set it in your
.vimrc and even if you could, the Vim manual tells you to leave the setting of
magic alone if you know what's good for you.
PCREs are so much more consistent and easier to remember (excepting Perlisms like "dot matches newline" inconsistencies). The special characters are always special, and you escape them all to make them non-special. But perldo and rubydo in Vim can't do everything Vim regexes can do; they can't properly span lines, is the major thing. They don't highlight text like Vim does with its builtin regexes if you have
hls set.
I read somewhere that Vim regexes are set up to let you match C code easily, and that's why for example
{} are non-special by default. I don't remember where I read it or if it's true. Doesn't help a lot when writing non-C code though.
Taking screenshots of a single window
I am running a game and I need to take many screenshots of the game window. There are lots of Linux tools that take screenshots: scrot, import (part of ImageMagick), gnome-screenshot, ksnapshot, and the GIMP does it too.
It turns out none of them does EXACTLY what I want, on its own. My requirements are:
- Take a snapshot via a single configurable keystroke.
- Save it in a directory of my choosing.
- Take a snapshot of a SINGLE window. And crop off the window borders. I need to take way too many snapshots to have time to go around editing them afterwards.
- Look at what files already exist in the directory I pick, and give the new file a filename that is next in ascending order after the files that already exist. It should pad the filename out to 5 digits.
- Do all of this non-interactively. It shouldn't ask me to confirm.
The way I finally ended up doing this is using import; I wrote a script to use it, and I assigned that script to a keystroke in my window manager. And once again, it's Ruby to the rescue:
#!/usr/bin/ruby
Dir.chdir('/SOME/DIRECTORY') do
  begin
    num = sprintf '%05d', Dir.glob('*').select{ |x| x.match(/^\d+\.png/) }.sort[-1].match(/^(\d+)/)[1].to_i + 1
  rescue Exception => e
    puts e
    num = '00001'
  end
  raise "How'd that happen?" if File.exist? "#{num}.png"
  `import -window 'NameOfWindow' #{num}.png`
end
That this kind of thing is possible is why I love Linux. I can do so much more. I can have it save in multiple file formats. I can have it generate thumbnails as it saves new snapshots. (It's so easy to generate all the thumbnails later using
convert that I'm not going to bother.) I could timestamp the filenames rather than using incrementing integers. (There is a race condition in this script that would be fixed if I did this, but I don't care enough to do it that way.)
Mmmm rubygems
I put together my first ruby gem today (for internal use at work). Here is a good tutorial on how to do it. Or else look at chapter 17 in the Pickaxe book. Simply make a gemspec file, define a few attributes listing and describing your files and then compile it. It was astoundingly easy and such a clean way to handle versioning and distributing Ruby code. I always had some weird impression that rubygems was clunkier than Perl's CPAN, but I think I had it wrong. Not at all hard to use.
Setting the RUBYOPT environment variable to 'rubygems' lets you get away with doing a
require on gem-provided modules without doing
require 'rubygems' in every program you write. I'm sure I must've set RUBYOPT myself a year ago on this machine and forgotten. I was sort of wondering one day, hey, how is Ruby finding those modules? This kind of thing is good to know when you need to set up a Ruby environment on someone else's computer in the near future, I imagine. This is one bad thing about having a computer that doesn't need to be reformatted every six months. Who knows how many other tweaks I've done on this machine and forgot about and now silently depend on.
ncftp, wget
ncftp simply doesn't like to do recursive GETs. It'll churn along happily until it hits some random file and then timeout. If I restart the same command it'll sometimes get further and timeout, sometimes timeout sooner. Oh how I wish I could find a linux FTP client that doesn't suck half the time. I still entertain notions of writing my own someday. I know pretty well how I want the UI to work, it's the backend that I always get hung up on. I'm not going to re-implement the FTP protocol. Ruby has a nice FTP library but it seems somewhat limited in certain ways. Or else I haven't delved into it enough.
Either way. I'm going to use
wget for my recursive FTP downloads from now on. Probably should've done that to begin with.
wget -r -x --ftp-user USERNAME --ftp-password PASSWORD '*'
I was at first upset about having my FTP password lying around on the command line / process list, but then I realized that I'm the only person who uses this computer now or ever will. I'll delete it from
~/.bash_history after I'm done. If someone is
psing my process list right now, I have much bigger problems than having them gank my FTP password.
Unique-ify lines in Vim
If I want to get rid of duplicate lines in a text file, in Linux I can simply pipe it through
uniq. Windows doesn't have an equivalent (or else I don't know about it). There may be a Vim equivalent also, but I don't know it if there is. Instead you can use
perldo to do it like this:
:perldo BEGIN{%x={}} $_ = ' ' if $x{$_}++
The neat thing about
perldo (and
rubydo) is that whatever you do is persistent across runs of
perldo. That means if you run this:
:perldo $_ = ' ' if $x{$_}++
it'll work the first time you run it, but the second time,
$x will have retained its values from the first run, and it will instead delete every line. There are few limits to what you can do with a one-liner
perldo command in Vim. It's great, if you know Perl better than you know Vim.
EDIT TWO YEARS LATER: How about
:sort u.
Perl vs. Ruby breaking from nested loops
Breaking out of nested blocks is apparently done much differently in Perl and Ruby. In Perl if you run this:
my $counter = 0;
foreach my $outer (1..5) {
    foreach my $inner ('A'..'E') {
        print "$outer: $inner\n";
        last if ++$counter > 4;
    }
}
The
last is going to break out of the inner-most foreach loop by default. If you want to break out of the OUTER foreach loop you can use line labels:
my $counter = 0;
OUTER: foreach my $outer (1..5) {
    foreach my $inner ('A'..'E') {
        print "$outer: $inner\n";
        last OUTER if ++$counter > 4;
    }
}
In Ruby to do the same thing, you have to use a
throw/catch block. This is a completely separate mechanism from Ruby's
begin/rescue Exception-handling mechanism.
throw/catch rather appears to be simply a flow-control mechanism. You
raise Exceptions, but you
throw Strings or Symbols. The equivalent of the above Perl in Ruby:
counter = 0
catch :OUTER do
  1.upto(5) do |outer|
    'A'.upto 'E' do |inner|
      puts "#{outer}: #{inner}"
      throw :OUTER if (counter += 1) > 4
    end
  end
end
EDIT: Originally I thought that Perl's
last LABEL; would not find labels except in the same static scope. It must've been a bug in my test code however, because this does work. My mistake! So this does work in Perl:
sub test {
    HERE: foreach my $really_outer (1..5) {
        test2();
    }
}

sub test2 {
    my $counter = 0;
    foreach my $outer (1..5) {
        foreach my $inner ('A'..'E') {
            print "$outer: $inner\n";
            last HERE;
        }
    }
}

test
You can do this in Ruby also. The other nice thing about Ruby is you can return a value from the throw:
def test
  result = catch :HERE do
    1.upto(5) do |really_outer|
      test2
    end
  end
  puts "RESULT: #{result}"
end

def test2
  counter = 0
  1.upto(5) do |outer|
    'A'.upto 'E' do |inner|
      puts "#{outer}: #{inner}"
      throw :HERE, "YOINK" if (counter += 1) > 4
    end
  end
end

test
In Perl you also have the option of using Error.pm, but it looks horrendously hackish to me, as Perl code often does.
The only thing I can think of that's bad about Ruby is that it has two things which are really similar, one for Exceptions and one not. People coming from a different language may think that
throw/catch is the Exception-handling mechanism when it's not. But Ruby still wins here.
Hello! I need a little more help. I have managed to convert all my chars input from a text file into digits.
Example:
Input from file:
$1,9,56#%34,9
!4.23#$4,983
Output:
1956
349
423
4983
Now, I need to take those individual digits (the 1 9 5 6) and make them read as one whole number. The output would look the same, but the values would actually be whole numbers. Make sense? I have to do this in my outer loop. It also has to be an EOF loop. So I know I need to take the first digit, multiply it by 10 and add the next digit, then multiply all that by 10 again until I reach the last digit. How can I write that in an efficient, non-crashing way?
I attached my input .txt file if you need to see it.
This is what I have so far...
THANK YOU SOOOO MUCH!!!
/* */
//Character Processing Algorithm
#include <fstream>
#include <iostream>
#include <cctype>
using namespace std;

char const nwln = '\n';

int main ()
{
    ifstream data;
    ofstream out;
    char ch;
    char lastch;
    int sum;

    data.open ("lincoln.txt"); //file for input
    if (!data)
    {
        cout << "Error!!! Failure to Open lincoln.txt" << endl;
        system ("pause");
        return 1;
    }
    out.open ("out.txt"); //file for output
    if (!out)
    {
        cout << "Error!!! Failure to Open out.txt" << endl;
        system ("pause");
        return 1;
    }
    data.get (ch); // priming read for end-of-file loop
    while (data)
    {
        sum = 0;
        while ((ch != nwln) && data)
        {
            if (isdigit(ch))
                out << ch;
            if (ch == '#')
                out << endl;
            { ; }
            lastch = ch;
            data.get (ch); // update for inner loop
        } // inner loop
        if (lastch != '#')
            out << endl;
        data.get (ch); // update for outer loop
    } //outer loop
    cout << "The End..." << endl;
    data.close ();
    out.close ();
    system ("pause");
    return 0;
} //main
Re: Reading a file.
- From: r_z_aret@xxxxxxxxxxxx
- Date: Sat, 28 Jun 2008 11:07:51 -0400
On Fri, 27 Jun 2008 11:07:01 -0700, ashishedn
<ashishedn@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Problem is when i am reading a file through "readfile" function,it is being
read in character format.
Variation on earlier replies:
ReadFile reads bytes, and reads them exactly as it finds them, with
_no_ translation. How your program interprets those bytes depends on
how it stores them in memory. So if you read the bytes into a char
(ASCII) array, your program will interpret them as ASCII. But if you
read them into an int array, your program will interpret the exact
same bytes as integers.
Suppose a value "9" is read,then it would get stored
in memory as "0x39" and i would be getting "0011 1001" instead of "0000 1001".
so the question is whether with "readfile" function a value could be read in
decimal format?
"Bruce Eitman [eMVP]" wrote:
You may have said something before, just not clear enough for the rest of to
understand what your problem is, and you still haven't done a very good job
of it.
Could it be that you saved the file with something like "%X", data? And so
now you need to convert it?
As I stated before: Then explain better what you expect and what is
actually happening.
--
Bruce Eitman (eMVP)
Senior Engineer
Bruce.Eitman AT EuroTech DOT com
My BLOG
EuroTech Inc.
"ashishedn" <ashishedn@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:50323539-6D7B-45E0-B096-7DAB20E80E65@xxxxxxxxxxxxxxxx
That i have already tried.what is happening,if i fill data to the buffer
as
integers then it is showing correct value.However,if data filled to the
buffer is character values, then in memory ASCII values are displayed.
But,for our main application, in both the cases(defining buffer as set of
integers/characters) ASCII values are coming.Therefore,i can't get correct
bitwise information as told earlier.
"Bruce Eitman [eMVP]" wrote:
Huh?
What is the problem? Reduce this a little, I suspect that it has nothing
to
do with "reading a file".
What happens if you take a array and fill it with hard coded data, then
do
the operation? Doing so might help you understand what is going on, and
certainly would help me understand.
Then explain better what you expect and what is actually happening.
--
Bruce Eitman (eMVP)
Senior Engineer
Bruce.Eitman AT EuroTech DOT com
My BLOG
EuroTech Inc.
"ashishedn" <ashishedn@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:C59D4C87-F2BA-405A-AC4A-2FE26532C29D@xxxxxxxxxxxxxxxx
Hi,
I am trying to read data from a file to a buffer.Then, i want to store
bitwise information of a particular byte to a different array.
Problem in doing so is, in memory, ASCII value of a character is
getting
stored and right shifting that value would't give the correct bitwise
information.
Pl. help.
Code written for that is.
#include "stdafx.h"
bool hw[4][8][8];
int WINAPI WinMain( HINSTANCE hInstance,
HINSTANCE hPrevInstance,
LPTSTR lpCmdLine,
int nCmdShow)
{
unsigned char inBuffer[256];
unsigned char i,j,k;
DWORD noofbytes = 0x00000001,nBytesRead;
HANDLE hservice = CreateFile(L"\\hi.txt",GENERIC_READ,0,NULL,
OPEN_EXISTING,0,NULL);
DWORD d = GetLastError();
if(hservice != INVALID_HANDLE_VALUE)
{
DWORD dwPtr = SetFilePointer(hservice,0x00,NULL,FILE_CURRENT);
if(dwPtr == 0xFFFFFFFF)
{
printf("Pointer set error = %ul\n",GetLastError());
// return 0;
}
else
{
BOOL bResult =
ReadFile(hservice,&(inBuffer[i]),noofbytes,&nBytesRead,NULL);
DWORD d = GetLastError();
if(bResult && (nBytesRead == 0x01))
printf("Read Value = %x\n",inBuffer[i]);
else
printf("Error in Reading = %ul\n",d);
}
}
else
printf("Error in opening the file = %ul",d);
for(k=0;k<8;k++)
{
// printf("value of in Buffer = %d\n",inBuffer[j]);
if(inBuffer[0]>>k & 0x01)
hw[0][0][k] = TRUE;
else
hw[0][0][k] = FALSE;
printf("Value of hw[0][0][%d] = %d\n",k,hw[0][0][k]);
}
return 0;
}
-----------------------------------------
To reply to me, remove the underscores (_) from my email address (and please indicate which newsgroup and message).
Robert E. Zaret, eMVP
PenFact, Inc.
20 Park Plaza, Suite 478
Boston, MA 02116
.
- References:
- Reading a file.
- From: ashishedn
- Re: Reading a file.
- From: Bruce Eitman [eMVP]
- Re: Reading a file.
- From: ashishedn
- Re: Reading a file.
- From: Bruce Eitman [eMVP]
- Re: Reading a file.
- From: ashishedn
Not a Standard. (Score:5, Insightful)
If you never finalize it's not a standard. This sounds like a Microsoft move to me.
Re:Not a Standard. (Score:4, Informative)
The real reason is that the committee which decides what's in the standard couldn't reach a consensus, so HTML 5 took forever and it's still not a W3C recommendation.
Look at
HTML 4.01 became a W3C Recommendation 24. December 1999
XHTML 1.0 became a W3C Recommendation 20. January 2000
On January 22nd, 2008, W3C published a working draft for HTML 5.
So 10 years after XHTML 1.0 we still don't have a W3C recommendation for HTML 5; if HTML 6 carried on business as usual, then we would probably be looking at 2025 for it...
Re: (Score:3, Interesting)
You missed all those XHTML 1.1 Modules. That's how W3C wasted all its time.
Original HTML5 draft came from WHATWG, not W3C.
The standards are too complex (Score:4, Insightful)
HTML 2/3/4, XHTML/1, and CSS/1 were all small, simple, understandable standards. Then the web got popular - in part because web standards and technology were so simple. Once the web had exploded, every damn company wanted to stick its oar in. CSS 2 took years, is overly complicated, but still just barely manageable. Look at CSS 3 - everybody's special wishes are in there - the thing is immensely complex and as a standard, frankly, it is therefore nearly useless. HTML 5 is much the same - too many special wishes and fancy features. One needs to take a weed-whacker to it and to CSS, to restore some degree of simplicity.
Think of it this way: why is there a competition to see how well browsers score on the ACID tests? The standards ought to be simple enough that any decent browser scores 100%. The fact that this is not the case is proof that the standards are far, far too complex.
Re: (Score:2)
Sure it is. The standard is what is currently on their page. If you are using deprecated methods, you are non-standard.
You could be standard compliant one day and be out of compliance the next.
Maybe it's better if I put it this way. There's a new standard every day. It's just the nobody puts a version on it.
Re: (Score:3)
That will truly help the industry, when contracts calling for levels of compliance become impossible and designers can never get paid because their work is never compliant.
Moving Targets (Score:3)
Re: (Score:3)
never mind that the standards are developed in cooperation between the browser vendors, and at least two vendors must implement something before it is viable for standardization.
That was the old way of doing it, when Microsoft and Netscape kept adding more and more features in the hope that it would make it into the standard. Think tables, frames, blink - oh wait: thank god it didn't work for blink!
These days it is frowned on to add features that are outside the standards, with a few exceptions like styles with prepended browser identifiers and things hacked into meta tags (like iPhone's ones to control the layout on the small screen which I think should have been a custom style).
no more numbers! (Score:5, Funny)
Re: (Score:2)
it just seems appropriate (Score:5, Funny)
GET!
Re: (Score:3)
REST!
Re: (Score:2)
SOAP!
Re: (Score:3)
terrible idea (Score:5, Insightful)
You'll get pages that become invalid over time despite having been valid before. That sounds like a very stupid idea.
Until you name the revisions by dates, which is basically the same thing as giving version numbers...
(Score:5, Funny)
They'll start naming them the way sequels get named!
Version numbers not related to issue (Score:5, Insightful)
You'll get pages that become invalid over time despite having been valid before.
That is a result of backward-incompatible changes, not the absence of version numbers.
Re:Version numbers not related to issue (Score:5, Insightful)
You'll get pages that become invalid over time despite having been valid before.
That is a result of backward-incompatible changes, not the absence of version numbers.
Quite true, but what I think the poster was saying is that without version numbers it would be impossible to claim they were "standards" compliant at any one time. So even if you wrote very good code that was compatible across 99% of all browsers out there, a few years go by and you look like lazy morons that just don't care.
Next thing you know the browsers will go versionless too and then at that point all you can do is drink heavily.
Re: (Score:3)
So even if you wrote very good code that was compatible across 99% of all browsers out there, a few years go by and you look like lazy morons that just don't care.
That doesn't happen unless the standard is accepting backwards-incompatible changes to widely-established features, which they've committed not to.
Which is what you have to do with features in the real world anyway.
Re: (Score:3)
Provided you only use "widely-established" features. Which ones are those, specifically? Because they certainly have not made any commitment to reject backwards-incompatible features in general. Quite the opposite: they make it very clear that if they decide something is "broken", they will change it without warning. Hope you weren't relying on that "broken"
Re: (Score:2)
Numbers are so square. Introducing HTML version "Fred"!
Re: (Score:3)
An upcoming revision will make HTML Multi-Freded.
Re: (Score:2)
This is the same thing that happens with Operating Systems' need for versioning. What does the board plan, "reflection" APIs where each dev must ask the browser what it can do and assume the missing features in the ethereal standard are enough for the page to render 3 years from now? 10 years from now?
Just like an OS, the standard can drop features at any time; the point of numbers is to tell the dev from an easy test what stylesheets to junk and what error messages to give.
Re: (Score:2)
Validity rot isn't the issue here, it's that compliance is now a constantly moving target.
Browser vendors (some more than others) have always had a tough enough time getting their products to comply when the spec was finished and became static. With a dynamic spec chaos will surely ensue and developers will pay the price, because the lowest common denominator won't be as much of a known value as it is now.
Also, didn't Daniel Glazman lament a few years ago about how one of CSS's biggest flaws was the lack o
Thanks google (Score:5, Funny)
Re:Thanks google (Score:5, Funny)
That's The Google Way, eternal beta.
Re:Thanks google (Score:4, Insightful)
Re: (Score:2)
I know. At this point I would seriously consider junking the whole steaming pile and writing my own. Not that it would ever be used by anyone. The steaming pile has already reached critical mass.
Without versions... (Score:5, Insightful)
...I can always render the latest HTML in Netscape Navigator. Right?
Um... (Score:5, Insightful)
People will still need to differentiate between implementations of HTML that have different features...do they expect us all to just use the latest and hope nothing breaks?!
Re:Um... (Score:5, Insightful)
Yeah, I look forward to the "this site is compliant with some of HTML standards and not others because they're too new. We can't really define that for you because there is no version, so best of luck to you" badges.
Re: (Score:2)
Whee!
Time to break out those, "This page best viewed in..." badges again!
Re:Um... (Score:5, Insightful)
It's still ok. I'll mail him a money order. Unfortunately it's for a higher amount, but he can just deposit it then send me the difference.
All will be well.
Re: (Score:2)
And this is a classic example of having two replies open and responding in the wrong textbox.
I either need way more or way less Monster right now.
Re: (Score:2)
It is possible that individual features of HTML will have versions instead of the entire standard. Maybe each tag will have a version?
:P
Re: (Score:2)
Or as time moves on, various tags, or sub-features of a tag, gets defined as deprecated.
As long as the basic definition of a tag, and its layout have some kind of default (hello, xml) then this can be done.
Re: (Score:2)
People will still need to differentiate between implementations of HTML that have different features...
This presumes that future versions of HTML from the point of adoption forward have backward-incompatible changes. The solution to that is to minimize backward-incompatible changes.
Slow Browsers (Score:4, Insightful)
Wow, so now my browser has to interrogate every single element on a page to determine what's supported BEFORE going to plugins etc.
Yikes...
Re: (Score:2)
As a practical matter, that's pretty much what you have to do now. Modernizr and the like make it easier for you, but ultimately that's what they're doing. You can't really trust what each browser claims to support anyway.
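To sketch the idea (the probed property names here are only illustrative), feature detection asks the environment whether an API actually exists, instead of trusting a version number or a user-agent string:

```javascript
// Minimal feature-detection sketch in the spirit of Modernizr:
// probe for the API itself rather than for a claimed "version".
function detectFeatures(global) {
  return {
    // HTML5 audio: is an Audio constructor exposed?
    audio: typeof global.Audio !== 'undefined',
    // Web Storage: is localStorage available?
    localStorage: typeof global.localStorage !== 'undefined',
    // canvas: can we create a <canvas> element with a drawing context?
    canvas: typeof global.document !== 'undefined' &&
      typeof global.document.createElement === 'function' &&
      !!global.document.createElement('canvas').getContext
  };
}

// In a real page you would pass `window`; a stubbed environment
// shows the detection logic in isolation.
var fakeBrowser = { Audio: function () {}, localStorage: {} };
var caps = detectFeatures(fakeBrowser);
// caps.audio and caps.localStorage are true; caps.canvas is false,
// since the stub exposes no document object.
```

As the parent comment notes, even this only tells you an API is present, not that it behaves correctly, which is why testing in actual browsers remains necessary.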
Re: (Score:2)
Just have the browser "not render" the element that's not supported anymore. That's pretty much what they do now.
Translation (Score:5, Insightful)
Microsoft got tired of people asking when they were going to fully support HTML 4....
Now everyone will be able to say "We support HTML" even though nobody fully supports all aspects of the spec. Just like today, only nobody will be able to point their finger at any sort of milestone that they missed, so companies that drag their heels in standards compliance end up looking better.
How is this a benefit again? It seems to me that we need smaller, more frequent milestones, not elimination of those milestones.
Re: (Score:2)
Re: (Score:3)
Now everyone will be able to say "We support HTML" even though nobody fully supports all aspects of the spec. Just like today,
...
Yeah, and the major benefit will be that developers of web sites will no longer waste their time trying to figure out what numbered version to declare in their DOCTYPE line, and pointing their fingers accusingly at browsers that don't support exactly that standard. They'll go right to testing against a flock of more-or-less current browsers (plus IE6 ;-), and making sure their HTML works somewhat sensibly in all of them.
Fact is, it has never worked very well to study any particular HTML standard and code s
Re:Translation (Score:5, Funny)
Then again, that kind of system wouldn't be rational.
Living Standard? (Score:5, Insightful)
So, in the future it's impossible to figure out what browser supports what? Because, after all, browser support is dragging behind years even now. Or is that the very goal of Google? Make Chrome the de facto standard, and force everyone else to play the catch-up game?
Seriously, don't do this "living standard" crap. At the very least, use minor version numbers to identify a given set of standards. Don't force me to guesstimate how a web page I write today is going to behave in browsers 5 years from now; let me specify what behaviour I want.
Re: (Score:2)
So, in the future it's impossible to figure out what browser supports what?
Why would you expect the future to be different from the past?
Re: (Score:3)
Instead of doing this stupid thing a mechanism similar to OpenGL extensions would make far more sense. It has worked well for OpenGL, allowing vendors to innovate while providing application programmers a clear definition of what facilities are/are not available on a given platform. Yes, it requires the application programmer to implement fallbacks in some cases but this is not worse than the html situation today.
It is really, really important to be able to evaluate objectively the degree to which a brows
Problem (Score:5, Insightful)
There will be no way to pressure browser developers to be compliant with "NGHTML 4.7" if we can't even talk about it because it lacks a name. It'll also be hard to enumerate features of releases, to decide what version of the standard we're talking about and have programmatic support for that, etc.
This eliminates most of the benefits of having standards to begin with.
Re: (Score:2)
"Don't you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it." [george-orwell.org]
Fun with Names (Score:2)
if we can't even talk about it because it lacks a name.
Hey, we've seen this before - no numbers, but we can have HTML Pro, HTML Extreme, HTML on Acid, HTML JC, etc.
Seriously though, if there is a written standard, no matter what they don't call it, people will label it. HTML 2012, or whatever, will be what was in effect as of January 1st 2012.
Maybe what they're trying to do here isn't to keep browser writers from having excuses for not keeping up, but to keep the standards body from feeling like
Re: (Score:3)
We'll have exactly what we have now. Browser vendors were already adding draft features into their product before the specification was finalized. Just look at how many browsers support HTML5 features, even though HTML5 does not exist as a standard yet.
Re: (Score:3)
I was going to mod, but I cannot let this pass. Browsers have always added features, that's true. And in some ways it is required: you cannot make progress by mindlessly adding to the standard, you will have to try out new features first.
But any *mainstream* site should be able to choose a HTML version to support, maybe taking into account some features that are badly supported by the browsers. This way you can be reasonably sure that most browsers (and with the internet appliances market soaring, there are
Re: (Score:2)
we'll have a big vector of flags
No. There will be a /htmlcaps.xml in the root of every website, enumerating the features necessary to render it.
Privacy dies in this move (Score:2)
I think the "vector of flags" idea has merit, but it introduces worse issues than those it solves. Consider privacy and user-tracking issues; this vector would make it trivial to uniquely identify users because it contains that much more information (see also the EFF's Panopticlick [eff.org]).
We still need "milestones" which can be marked, even if they are years, quarters, or months instead of versions. In this manner, we can still determine compatibility without introducing millions of different combinations of
Linked blog article is fluff with no insight (Score:5, Informative)
That link gives it away ... (Score:2)
The subheading on that link seems particularly appropriate:
> Please leave your sense of logic at the door, thanks!
Sigh.
Re:Linked blog article is fluff with no insight (Score:5, Informative)
Indeed, and as that article points out, this change in naming applies ONLY to what the WHATWG was calling "HTML5", not to be confused with what the W3C calls "HTML 5." As anyone who's been following this, or has read Zeldman's HTML5 book, knows, "HTML5" and "HTML 5" can refer to entirely different sets of standards.
The W3C, as far as I can tell, is still taking "snapshots" of WhatWG's "HTML" spec and numbering them, and the W3C is still the primary authority when it comes to official web specifications.
This change really isn't as big of a deal as people here seem to think, and the original article does confuse the issue.
Re:Linked blog article is fluff with no insight (Score:4, Insightful)
Just like Chrome? (Score:4, Insightful)
Do they mean the browser Chrome? As in Google Chrome 8.0.552.237?
Is 8.0.552.237 not the version?
Re: (Score:2, Funny)
Re:Just like Chrome? (Score:5, Funny)
Re: (Score:2)
It is for n00bz like you, I'm running 9.0.597.47 beta.
What a n00b, I'm running 10.0.634.0 dev
I can finally paste into Slashdot comments. The future is wonderful.
Internet/server backed "Apps" are the web 3.0 (Score:2)
And there is no need to further develop a bi-plane when you fly a jet already.
Re: (Score:2)
Really? That's weird, because I thought browsers were warring over who could get HTML 5 features out first, who had the most, and showing them off.
But I might be totally crazy.
Re: (Score:2)
...The various Javascript APIs, web sockets, IndexedDB, SVG, etc, are not HTML 5.
Re: (Score:3)
From what I can tell, they're opting to keep the aspect of HTML which is more or less the most broken under the justification that they've always done it that way, regardless of the fact that HTML5 was mostly supposed to be changing that. A new set of standards that both modernized and theoretically was implemented by all the browsers.
It's more or less inevitable
Re:Internet/server backed "Apps" are the web 3.0 (Score:4, Funny)
And what is this jet you speak of? Javascript, CSS, DOM? If these are jets, then someone put the turbines in backwards, pasted the wings on with glue sticks, and is using banana peels as fuel.
Re: (Score:2)
Since when did Google become the keepers of the HTML spec?
Since Ian Hickson moved to Google?
Re: (Score:2)
Re: (Score:2)
Browsers were already adding support for draft features as they happened. I believe his point is that there is no use in waiting until the spec is finalized; add the features as they become available and let people start using them now.
Re:Huh? (Score:4, Informative)
Since when did Google become the keepers of the HTML spec?
Google is not "the keepers of the HTML spec". Ian Hickson, who happens to work for Google, is the editor of the HTML5 spec. Usually, spec maintainers work for a firm involved in the area the spec addresses.
I think a randomly changing feature-set sounds like a bad idea.
In none of the discussion of this change has there been any indication that the WHATWG process for HTML will involve random changes.
HTML is supposed to be a standard, not something which just changes without any real control behind that.
There is a process, which is discussed in the WHATWG FAQ [whatwg.org]. The process just doesn't involve version numbers anymore.
Re: (Score:3)
Version numbers? Where we are going, we don't *need* version numbers!
Have to distinguish somehow! (Score:2)
Re: (Score:2)
Their justification FAQ: (Score:5, Informative)
Their justifications for the decision are here:
I can't up-moderate, so i'll just say it (Score:2)
+1 to everyone who thinks this is stupid. In particular, given how significant HTML is to the web-as-we-know-it, surely there must've been some consultation before making a call like this? With a cacophony of "NO" coming through here (and very little, if any, support) one has to wonder....
Re: (Score:2)
If this is Hickson's idea, it certainly tops all the other terrible or poorly reasoned ideas he's come up with.
Bad engineering (Score:3, Insightful)
Re: (Score:2)
A constantly evolving standard is bad news for everybody
The standard was, in fact, constantly evolving anyway, and all the browser vendors (and other heavily interested parties) were engaged in the process and doing pretty much exactly what they would do under a version-number-less process.
The only difference is that people who weren't deeply involved were dealing with snapshots that the major players were treating as outdated.
They'll use model-years (Score:2)
Say hello to HTML 2011.
Re:Er, Why use Version Numbers At All? (Score:5, Informative)
It doesn't, it's a fucking disaster. I'll give a concrete example. I used HTML 5 audio on a site with a Flash fallback for browsers that didn't support it. All was good and well. One day, I started getting complaints that the audio was broken. Turns out that a) the HTML 5 spec had changed and b) Firefox had changed to match in a minor point release. Firefox 3.5.1 worked, Firefox 3.5.2 didn't, as I recall. The new API was indistinguishable from the old API in as much as all the same objects and functions were there, but a return value had changed. So, even with the best practice method of feature detection, anybody writing to the old API was screwed.
So I fixed it up by removing the HTML 5 audio and made the decision to wait until HTML 5 was published in its final form. Something that I should have done to begin with really, it's madness to use HTML 5 at the moment as it's just not finished yet. You don't know what is going to change.
And now they want to do away with a "final" version altogether? Gee thanks, guys! How am I going to be able to trust it to be stable enough to rely on ever again? What's going to stop the same thing from happening over and over again?
Strange (Score:2)
Funny distribution of approval in the linked blog's comments there (I hope not to have violated the rule of not RTFA; I just read the comments there, I swear).
First 10 comments, 2 negative. Last 10 (around n.50 as I write) 9 negative.
Shocking... (Score:2)
Color me "not surprised". Engineers, much like artists, have a hard time knowing when something is done and want to "tweak and tweak" everything to death.
Solution? Rather than *finish* something, just remove the versions! It'll be in development for perpetuity - an engineer's dream come true.
Next up! (Score:2)
HTML ME, closely followed by HTML XP
Bad interpretation (Score:5, Informative)
What was said is that the moving spec in development is now called HTML; when a snapshot is taken it will be called HTML5, the next one HTMLX.X.X or any other name. The WHATWG spec is not a finalized document; HTML5 will be snapshotted at some point.
Forever in beta. (Score:4, Insightful)
The HTML standards committee takes eternity and a day to finalize anything.
Exactly. Ten years passed between HTML 3 and 4. Another ten have now passed from 4 to 5, and 5 is still not an official standard.
Meantime, requests for features and API tweaks flow in, and all browsers are going ahead and building them (even IE!). If you froze the spec after so many features, you would be drafting HTML 8 before HTML 5 became standard.
Second, I don't know about you, but I write web apps for a living, and I've used HTML since 1997, and never once did version numbers help me. By the time I got serious, it was HTML 4. But none of the browsers posted anything like "we are an HTML 4 browser," and if they did, they lied, especially Internet Explorer. To know what worked, you tested and read about tests other people did on each of the browsers.
Finally, the term "HTML 5" has already been stretched so much as to be meaningless. I'm not even talking about the journalists who use it to mean things that you could do with 4. HTML 5 is huge: Canvas, video, audio, three persistent storage APIs and one session storage API, various APIs to do with the web address, geolocation, and others. Does a browser need to call itself HTML 4 until it fully implements all of these? How would that help me?
The only thing I have ever really looked for are those comparison tables with the red and green squares, like we've always done, to figure out what to use in my web page next.
Re: (Score:3)
Yes, it'll matter, because the back end is still HTML. And not everything that creates and renders HTML is Dreamweaver, Firefox or Internet Explorer. And while management practices do not matter, specifications and implementations DO matter. Most especially for those that rely on accuracy, product comparisons, and compatibility.
Re: (Score:2)
Ignoring 14% of your customers is a pretty stupid move.
Re: (Score:2)
maybe there is a reason you don't get customers from those browsers.
like the page doesn't fucking work!
Re: (Score:2)
Howard Moskowitz - Spaghetti Sauce!
Re: (Score:2)
It'll matter to web browsers, which will have to spend a lot more effort trying to figure out exactly how a page is supposed to be displayed, without version numbers.
Re: (Score:2)
I wish they would create a mashup of Idiocracy and 1984.
Oh wait, I'm living in it.
Re: (Score:2)
I'm sorry. The feature you are looking for is similar to XML namespaces (though such namespaces were much more verbose). A feature found in the XHTML2 standard that was rejected by the standards body because html5 went all the way to 5.
Thanks for pointing that out - I've republished Carousel under the Ext.ux.layout namespace.
Documentation will be updated within 2 hours at most.
Has anyone used this extension with 2.2? I have a very simple layout and it presents a few problems:
1. I'm not able to resize the panels
2. When collapsing the panels, I'm not able to uncollapse them again.
Thoughts?

Code:
Ext.onReady(function(){
    var extra = '<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>...';
    new Ext.Viewport({
        layout: 'row-fit',
        id: 'container',
        items: [
            {
                xtype: 'panel',
                title: "I'll take some too...",
                html: "...if you don't mind. I don't have a <tt>height</tt> in the config.<br>Btw, you can resize me!" + extra,
                id: 'panel3',
                autoScroll: true,
                collapsible: true
            },
            {
                xtype: 'panel',
                id: 'slider',
                height: 5
            },
            {
                xtype: 'panel',
                title: "Let me settle too",
                html: "Since there are two of us without <tt>height</tt> given, each will take 1/2 of the unallocated space (which is 50%), that's why my height initially is 25%." + extra,
                autoScroll: true,
                collapsible: true
            }
        ]
    });

    var split = new Ext.SplitBar("slider", "panel3", Ext.SplitBar.VERTICAL, Ext.SplitBar.TOP);
    split.setAdapter(new Ext.ux.layout.RowFitLayout.SplitAdapter(split));
});
bluefox,
I am having the same issue as you with not being able to expand the panels once they have been collapsed. I can resize mine, but something is "weird" with the splitter. Hopefully, someone more familiar with this code can resolve the issue with Ext 2.2.
Mike V.
I see the same 2.2 resize issue with IE, but not Firefox. Shame, it was really good with 2.1. I will have to remove it from my app if it is not resolved.
I've recently made some fixes and code clean-up to this ux - you can try the latest version from the repository:
If the bug still persists please upload the test case to - I'll try to fix it.
Fixed bug with collapsing/expanding children.
Latest version in repository:
Thank you! It works great now.
Regards,
Don McClean
Hi, thanks for your extension...
I'm using Ext.ux.layout.RowFitLayout.SplitAdapter, but it doesn't look like Ext.SplitBar (used in a border layout panel).
Using firebug I discovered that:
- Ext.SplitBar is a simple DIV that is a sibling of its resizable DIVs and styles itself using the x-layout-split CSS class
- Ext.ux.layout.RowFitLayout.SplitAdapter is composed of 3 DIVs nested inside each other and styles itself using the x-panel CSS class
Is there a reason for all of this?
thanks
|
world where most developers and organizations have come to accept the constant churn of new tools, architectures and paradigms, containerized applications are slowly becoming a lingua franca for developing, deploying and operating applications. In no small part, this is thanks to the popularity and adoption of tools like Docker and Kubernetes.
It was back in 2014 when Google first announced Kubernetes, an open source project aiming to automate the deployment, scaling and management of containerized applications. With its version 1.0 released back in July 2015, Kubernetes has since experienced a sharp increase in adoption and interest shared by both organizations and developers.
Refreshingly, both organizations and developers seem to share a growing interest in Kubernetes. Surveys note how interest and adoption across the enterprise continue to rise. At the same time, the latest Stack Overflow developer survey rates Kubernetes as the 3rd most loved and most wanted technology (with Docker rating even better!).
But none of this information will make it any easier for newcomers!
In addition to the learning curve for developing and packaging applications using containers, Kubernetes introduces its own specific concepts for deploying and operating applications.
The aim of this Kubernetes tutorial is to guide you through the basic and most useful Kubernetes concepts that you will need as a developer. For those who want to go beyond the basics, I will include pointers for various subjects like persistence or observability. I hope you find it useful and enjoy the article!
You are not alone if you are still asking yourself What is Kubernetes?
Let’s begin the article with a high-level overview of the purpose and architecture of Kubernetes.
If you check the official documentation, you will see the following definition:
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation
In essence, this means Kubernetes is a container orchestration engine, a platform designed to host and run containers across a number of nodes.
To do so, Kubernetes abstracts the nodes where you want to host your containers as a pool of cpu/memory resources. When you want to host a container, you just declare to the Kubernetes API the details of the container (like its image and tag names and the necessary cpu/memory resources) and Kubernetes will transparently host it somewhere across the available nodes.
In order to do so, the architecture of Kubernetes is broken down into several components that track the desired state (ie, the containers that users wanted to deploy) and apply the necessary changes to the nodes in order to achieve that desired state (ie, adds/removes containers and other elements).
The nodes are typically divided into two main sets, each hosting different elements of the Kubernetes architecture depending on the role they play: control plane nodes, which run the components that track the desired state and orchestrate changes across the cluster, and worker nodes, which host the actual containers.
The following picture shows a typical cluster, composed of several control plane nodes and several worker nodes.
Figure 1, High level architecture of a Kubernetes cluster with multiple nodes
As a developer, you declare which containers you want to host by using YAML manifests that follow the Kubernetes API. The simplest way to host the hello-world container from Docker would be declaring a Pod (more on this as soon as we are done with the introduction).
This is something that looks like:
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod
spec:
  containers:
  - image: hello-world
    name: hello
The first step is getting the container you want to host, described in YAML using the Kubernetes objects like Pods.
Then you will need to interact with the cluster via its API and ask it to apply the objects you just described. While there are several clients you could use, I would recommend getting used to kubectl, the official CLI tool. We will see how to get started in a minute, but one option would be saving the YAML as hello.yaml and running the command
kubectl apply -f hello.yaml
This concludes a brief and simplified overview of Kubernetes. You can read more about the Kubernetes architecture and main components in the official docs.
The best way to learn about something is by trying for yourself.
Luckily with Kubernetes, there are several options you can choose to get your own small cluster! You don’t even need to install anything on your machine. If you want to, you could just use an online learning environment like Katacoda.
The main thing you will need is a way to create a Kubernetes cluster in your local machine. While Docker for Windows/Mac comes with built-in support for Kubernetes, I recommend using minikube to setup your local environment. The addons and extra features minikube provides simplifies many of the scenarios you might want to test locally.
Minikube will create a very simple cluster with a single Virtual Machine where all the Kubernetes components will be deployed. This means you will need some container/virtualization solution installed on your machine, of which minikube supports a long list. Check the prerequisites and installation instructions for your OS in the official minikube docs.
In addition, you will also need kubectl installed on your local machine. You have various options:
Once you have both minikube and kubectl up and running in your machine, you are ready to start. Ask minikube to create your local cluster with:
$ minikube start
...
Done! kubectl is now configured to use "minikube" by default
Check that everything is configured correctly by ensuring your local kubectl can effectively communicate with the minikube cluster
$ kubectl get node
NAME STATUS ROLES AGE VERSION
minikube Ready master 9m58s v1.19.2
Finally, let’s enable the Kubernetes dashboard addon.
This addon deploys the official dashboard to the minikube cluster, which provides a nice UX for interacting with the cluster and visualizing the currently hosted apps (even though we will continue to use kubectl throughout the article).
$ minikube addons enable dashboard
...
The 'dashboard' addon is enabled
Finally open the dashboard by running the following command on a second terminal
$ minikube dashboard
The browser will automatically launch with the dashboard:
Figure 2, Kubernetes dashboard running in your minikube environment
If you don’t want to or you can’t install the tools locally, you can choose a free online environment by using Katacoda.
Complete the four steps of the following scenario in order to get your own minikube instance hosted by Katacoda:
At the time of writing, the scenario is free and does not require you to create an account.
By completing all the scenario steps, you will create a Kubernetes cluster using minikube, enable the dashboard addon and launch the dashboard on another tab of your browser. By the end of the four simple steps, you will have a shell where you can run kubectl commands, and the dashboard opened in a separated tab.
Figure 3, setting up minikube online with Katacoda
I will try to note through the article when you will experience different behaviour and/or limitations when using the Katacoda online environment. This might be particularly the case when going through the networking and persistence sections.
Now that we have a working Kubernetes environment, we should put it to good use. Since one of the best ways to demystify Kubernetes is by using it, the rest of the article is going to be very hands on.
In the initial architecture section, we saw how in order to deploy a container, you need to create a Pod. That is because the minimum unit of deployment in a Kubernetes cluster is a Pod, rather than a container.
If this seems confusing, you can assume for the rest of the article that Pod == 1 container, as we won't use the more advanced features. As you get more comfortable with Kubernetes, the difference between Pod and container will become easier to understand.
In order to deploy a Pod, we will describe it using YAML. Create a file hello.yaml with the following contents:
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod
spec:
  containers:
  - image: hello-world
    name: hello
These files are called manifests. A manifest is just YAML that describes one or more Kubernetes objects. In the previous case, the manifest file we created contains a single object: the Pod named my-first-pod.
Let’s ask the cluster to host this container. Simply run the kubectl command
$ kubectl apply -f hello.yaml
pod/my-first-pod created
In many terminals you can use a heredoc multiline string and directly pass the manifest without the need to create a file. For example, on mac/linux you can use:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod
spec:
  containers:
  - image: hello-world
    name: hello
EOF
Get comfortable writing YAML and applying it to the cluster. We will be doing that a lot throughout the article! If you want to keep editing and applying the same file, you just need to separate the different objects with a line that contains only three dashes: ---
Once created, we can then use kubectl to inspect the list of existing Pods in the cluster. Run the following command:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
my-first-pod 0/1 CrashLoopBackOff 4 2m45s
Note that it shows as not ready and has a status of CrashLoopBackOff. That is because Kubernetes expects Pod containers to keep running. However, the hello-world container we have used simply prints a message to the console and terminates. Kubernetes sees that as a problem with the Pod and tries to restart it.
There are specialized Kubernetes objects such as Jobs or Cronjobs that can be used for cases where containers are not expected to run forever.
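As a sketch of that idea (the Job name is my own choice; the fields follow the standard batch/v1 Job API), a minimal Job wrapping the same hello-world container could look like this:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-first-job
spec:
  template:
    spec:
      containers:
      - image: hello-world
        name: hello
      restartPolicy: Never   # Jobs require Never or OnFailure
```

Unlike a bare Pod, a completed Job is considered successful rather than crashed, so Kubernetes will not keep restarting the container.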
Let’s verify that it indeed runs as expected by inspecting the logs of the pod’s container (i.e., what the container wrote to the standard output). Given we have just run the hello-world container, we expect to see a greeting message written to the console:
$ kubectl logs my-first-pod
Hello from Docker!
This message shows that your installation appears to be working correctly.
... omitted ...
For more examples and ideas, visit:
You can also see the Pod in the dashboard, as well as its container logs:
Figure 4, Inspecting the container logs in the dashboard
If you want to cleanup and remove the Pod, you can use either of these commands:
kubectl delete -f hello.yaml
kubectl delete pod my-first-pod
Now that we know what a Pod is and how to create one, let's introduce the Namespace object. The purpose of the Namespace is simply to organize and group the objects created in a cluster, like the Pod we created before.
If you imagine a real production cluster, there will be many Pods and other objects created. By grouping them in Namespaces, developers and administrators can define policies, permissions and other settings that affect all objects in a given Namespace, without necessarily having to list each individual object. For example, an administrator could create a production Namespace and assign different permissions and network policies to any object that belongs to that namespace.
If you list the existing namespaces with the following command, you will see the cluster already contains quite a few Namespaces. That’s due to the system components needed by Kubernetes and the addons enabled by minikube:
$ kubectl get namespace
NAME STATUS AGE
default Active 125m
kube-node-lease Active 125m
kube-public Active 125m
kube-system Active 125m
kubernetes-dashboard Active 114m
You can see the objects in any of these namespaces by adding the -n {namespace} parameter to a kubectl command. For example, you can see the dashboard pods with:
kubectl get pod -n kubernetes-dashboard
This means that in Kubernetes, every object you create has at least two metadata values which are used for identification and management: its name and its namespace.
The Pod we created earlier used the default namespace since no specific one was provided. Let’s create a couple of namespaces and a Pod inside each namespace:
apiVersion: v1
kind: Namespace
metadata:
name: my-namespace1
---
apiVersion: v1
kind: Namespace
metadata:
name: my-namespace2
---
apiVersion: v1
kind: Pod
metadata:
name: my-first-pod
namespace: my-namespace1
spec:
containers:
- image: hello-world
name: hello
---
apiVersion: v1
kind: Pod
metadata:
name: my-first-pod
namespace: my-namespace2
spec:
containers:
- image: hello-world
name: hello
There are a few interesting bits in the manifest above: several objects are defined in a single file, separated by --- lines; the namespace field in each Pod's metadata declares which Namespace the Pod belongs to; and the two Pods can share the same name because they live in different Namespaces.
During the rest of the tutorial, we will keep things simple and use the default namespace.
Kubernetes would hardly achieve its stated goal to automate the deployment, scaling and management of containerized applications if you had to manually manage every single container.
While Pods are the foundation and the basic unit for deploying containers, you will hardly use them directly. Instead you would use a higher level abstraction like the Deployment object. The purpose of the Deployment is to abstract the creation of multiple replicas of a given Pod.
With a Deployment, rather than directly creating a Pod, you give Kubernetes a template for creating Pods and specify how many replicas you want. Let's create one using an example ASP.NET Core container:
apiVersion: apps/v1
kind: Deployment
metadata:
name: aspnet-sample-deployment
spec:
replicas: 1
selector:
matchLabels:
app: aspnet-sample
template:
metadata:
labels:
app: aspnet-sample
spec:
containers:
- image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
name: app
This might seem complicated, but it's a template for creating Pods, as mentioned above. The Deployment spec contains: the desired number of replicas; a selector telling the Deployment which Pods belong to it (by matching their labels); and a template describing the Pods to create, including the labels to apply and the containers to run.
After applying the manifest, run the following two commands. Note how the name of the deployment matches the one in the manifest, while the name of the Pod is pseudo-random:
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
aspnet-sample-deployment 1/1 1 1 99s
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
aspnet-sample-deployment-868f89659c-cw2np 1/1 Running 0 102s
This makes sense since the number of Pods is completely driven by the replicas field of the Deployment, which can be increased/decreased as needed.
Let's verify that Kubernetes will indeed try to keep the number of replicas we defined. Delete the current Pod with the following command (use the Pod name generated on your system, from the output of the command above):
kubectl delete pod aspnet-sample-deployment-868f89659c-cw2np
Then notice how Kubernetes has immediately created a new Pod to replace the one we just removed:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
aspnet-sample-deployment-868f89659c-7qwkz 1/1 Running 0 45s
Change the number of replicas to two and apply the manifest again. Notice how if you get the pods, this time there are two created for the Deployment:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
aspnet-sample-deployment-868f89659c-7bfxz 1/1 Running 0 1s
aspnet-sample-deployment-868f89659c-7qwkz 1/1 Running 0 2m8s
Now set the replicas to 0 and apply the manifest. This time there will be no pods! (might take a couple of seconds to cleanup)
$ kubectl get pod
No resources found in default namespace.
As you can see, Kubernetes tries to make sure there are always as many Pods as the desired number of replicas.
While the Deployment is one of the most common ways of scaling containers, it is not the only way to manage multiple replicas. StatefulSets are designed for running stateful applications like databases; DaemonSets simplify the use case of running one replica on each node; CronJobs run a container on a given schedule.
These cover more advanced use cases and although outside of the scope of the article, you should take a look at them before planning any serious Kubernetes usage. Also make sure to check the section Beyond the Basics at the end of the article and explore concepts like resource limits, probes or secrets.
We have seen how to create Pods and Deployments that ultimately host your containers. Apart from running the containers, it’s likely you want to be able to talk to them. This might be either from outside the cluster or from other containers inside the cluster.
In the previous section, we deployed a sample ASP.NET Core application. Let’s begin to see if we can send an HTTP request to it. (Make sure to change the number of replicas back to 1).
Using the following command, we can retrieve the internal IP of the Pod:
$ kubectl describe pod aspnet-sample-deployment
Name: aspnet-sample-deployment-868f89659c-k7bvr
Namespace: default
... omitted ...
IP: 172.17.0.6
... omitted ...
In this case, the Pod has an internal IP of 172.17.0.6. Other Pods can send traffic by using this IP. Let’s try this by adding a busybox container with curl installed, so we can send an HTTP request. The following command adds the busybox container and opens a shell inside the container:
$ kubectl run -it --rm --restart=Never busybox --image=yauritux/busybox-curl sh
If you dont see a command prompt, try pressing enter.
/ #
The command asks Kubernetes to run the command "sh" in interactive mode (-it, so you can attach your terminal) in a new container named busybox, using the specific image yauritux/busybox-curl that comes with curl preinstalled. This gives you a terminal running inside the cluster, which has access to internal addresses like that of the Pod. The container is removed as soon as you exit the terminal, due to the --rm --restart=Never parameters.
Once inside, you can run curl http://172.17.0.6 (the Pod IP we found above) and you will see the HTML document returned by the sample ASP.NET Core application.
/home # curl http://172.17.0.6
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Home page - aspnetapp</title>
<link rel="stylesheet" href="/lib/bootstrap/dist/css/bootstrap.min.css" />
<link rel="stylesheet" href="/css/site.css" />
</head>
<body>
... omitted ...
It is great that we can simply send requests using the IP of the Pod. However, depending on the Pod IP is not a great idea: Pods come and go, getting a new IP every time they are recreated, and with multiple replicas there is no single IP to target.
To solve this problem, Kubernetes provides the Service object. This type of object provides a new abstraction that lets you:
· Provide a host name that you can use instead of the Pod IPs
· Load balance the requests across multiple Pods
Figure 5, simplified view of a default Kubernetes service
As you might suspect, a service is defined via its own manifest. Create a service for the deployment created before by applying the following YAML manifest:
apiVersion: v1
kind: Service
metadata:
name: aspnet-sample-service
spec:
selector:
app: aspnet-sample
ports:
- port: 80
targetPort: 80
The service manifest contains: a selector that picks the Pods to route traffic to (here, those labeled app: aspnet-sample), and a ports section mapping the port exposed by the service (port) to the port the container listens on (targetPort).
After you have created the service, you should see it when running the command kubectl get service.
Let's now verify that it is indeed allowing traffic to the aspnet-sample deployment. Run another busybox container with curl. Note how this time you can test the service with curl http://aspnet-sample-service (the host name matches the service name):
$ kubectl run -it --rm --restart=Never busybox --image=yauritux/busybox-curl sh
If you dont see a command prompt, try pressing enter.
/ # curl http://aspnet-sample-service
<!DOCTYPE html>
... omitted ...
If the service was located in a different namespace than the Pod sending the request, you can use serviceName.namespace as the host name. That would be aspnet-sample-service.default in the previous example.
Having a way for Pods to talk to other Pods is pretty handy. But I am sure you will eventually need to expose certain applications/containers to the world outside the cluster!
That's why Kubernetes lets you define other types of services (the default one we used in this section is technically a ClusterIP service). In addition to services, you can also use an Ingress to expose applications outside of the cluster.
We will see one of these services (the NodePort) and the Ingress in the next sections.
The simplest way to expose Pods to traffic coming from outside the cluster is by using a Service of type NodePort. This is a special type of Service object that maps a random port (by default in the range 30000-32767) on every one of the cluster nodes (hence the name NodePort). That means you can then use the IP of any of the nodes, together with the assigned port, to access your Pods.
Figure 6, NodePort service in a single node cluster like minikube
It is a very common use case to define these in an ad-hoc way, particularly during development. For this reason, let’s see how to create it using a kubectl shortcut rather than the YAML manifest. Given the deployment aspnet-sample-deployment we created earlier, you can create a NodePort service using the command:
kubectl expose deployment aspnet-sample-deployment --port=80 --type=NodePort
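The same service can also be declared in YAML instead of using the kubectl shortcut. A sketch (the service name is my own choice; the selector and ports follow the deployment above; pinning nodePort is optional):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: aspnet-sample-nodeport
spec:
  type: NodePort
  selector:
    app: aspnet-sample
  ports:
  - port: 80
    targetPort: 80
    # nodePort: 30080   # optionally pin a port in the 30000-32767 range
```

Declaring it in YAML lets you keep the service definition under version control alongside the deployment.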
Once created, we need to find out which node port the service was assigned:
$ kubectl get service aspnet-sample-deployment
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aspnet-sample-deployment NodePort 10.110.202.168 <none> 80:30738/TCP 91s
You can see it was assigned port 30738 (this might vary in your machine).
Now in order to open the service in the browser, you would need to find the IP of any of the cluster nodes, for example using kubectl describe node. Then you would navigate to port 30738 in any of those node IPs.
However, when using minikube, we need to ask minikube to create a tunnel between the VM of our local cluster and our local machine. This is as simple as running the following command:
minikube service aspnet-sample-deployment
And as you can see, the sample ASP.NET Core application is up and running as expected!
Figure 7, the ASP.NET Core application exposed as a NodePort service
If you are running in Katacoda, you won't be able to open the service in the browser using the minikube service command. However, once you get the port assigned to the NodePort service, you can open that port by clicking on the + icon at the top of the tabs, then clicking "Select port to view on Host 1".
Figure 8, opening the NodePort service when using Katacoda
NodePort services are great for development and debugging, but not something you really want to depend on for your deployed applications. Every time the service is created, a random port is assigned, which could quickly become a nightmare to keep in sync with your configuration.
That is why Kubernetes provides another abstraction designed for exposing primarily HTTP/S services outside the cluster: the Ingress object. The Ingress provides a map between a specific host name and a regular Kubernetes service. For example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webtool
spec:
rules:
- host: aspnet-sample-deployment.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: aspnet-sample-service
port:
number: 80
Note that for versions prior to Kubernetes 1.19 (you can check the server version returned by kubectl version), the schema of the Ingress object was different. You would create the same Ingress as:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: webtool
spec:
rules:
- host: aspnet-sample-deployment.io
http:
paths:
- backend:
serviceName: aspnet-sample-service
servicePort: 80
We are essentially mapping the host aspnet-sample-deployment.io to the very same regular service we created earlier in the article, when we first introduced the Service object.
The Ingress works via an Ingress controller deployed on every node of the cluster. This controller listens on ports 80/443 and redirects requests to internal services based on the mappings from all the Ingress objects defined.
Figure 9, service exposed through an Ingress in a single node cluster
Now all you need is to create a DNS entry that maps the host name to the IP of the node. In clusters with multiple nodes, this is typically combined with some form of load balancer in front of all the cluster nodes, so you create DNS entries that point to the load balancer.
Let’s try this locally. Begin by enabling the ingress addon in minikube as in:
minikube addons enable ingress
Note that on Mac you might need to recreate your minikube environment using minikube start --vm=true
Then apply the Ingress manifest defined above. Once the Ingress is created, we are ready to send a request to the cluster node on port 80! All we need is a DNS entry, for which we will simply update our hosts file.
Get the IP of the machine hosting your local minikube environment:
$ minikube ip
192.168.64.13
Then update your hosts file to manually map the host name aspnet-sample-deployment.io to the minikube IP returned in the previous command (The hosts file is located at /etc/hosts in Mac/Linux and C:\Windows\System32\Drivers\etc\hosts in Windows). Add the following line to the hosts file:
192.168.64.13 aspnet-sample-deployment.io
After editing the file, open http://aspnet-sample-deployment.io in your browser and you will reach the sample ASP.NET Core application through your Ingress.
Figure 10, the ASP.NET Core application exposed with an Ingress
The objects we have seen so far are the core of Kubernetes from a developer point of view. Once you feel comfortable with these objects, the next steps would feel much more natural.
I would like to use this section as a brief summary with directions for more advanced contents that you might want (or need) to explore next.
You can inject configuration settings as environment variables into your Pod’s containers. See the official docs. Additionally, you can read the settings from ConfigMap and Secret objects, which you can directly inject as environment variables or even mount as files inside the container.
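As a minimal sketch of that idea (the ConfigMap name, key and Pod name here are invented for illustration), injecting a ConfigMap value as an environment variable looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  GREETING: "hello"
---
apiVersion: v1
kind: Pod
metadata:
  name: configured-pod
spec:
  containers:
  - image: hello-world
    name: hello
    env:
    - name: GREETING
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: GREETING
```

The container sees GREETING as a regular environment variable, while the value itself stays in the ConfigMap and can be changed without rebuilding the image.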
We briefly mentioned at the beginning of the article that Pods can contain more than one container. You can explore patterns like init containers and sidecars to understand how and when you can take advantage of this.
You can also make both your cluster and Pods more robust by taking advantage of:
Any data stored in a container is ephemeral. As soon as the container is terminated, that data will be gone. Therefore, if you need to permanently keep some data, you need to persist it somewhere outside the container. This is why Kubernetes provides the PersistentVolume and PersistentVolumeClaim abstractions.
While the volumes can be backed by folders in the cluster nodes (using emptyDir volume type), these are typically backed by cloud storage such as AWS EBS or Azure Disks.
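A minimal PersistentVolumeClaim sketch (the name and size are my own illustrative choices; the claim is later referenced from a Pod's volumes section):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The cluster (or its cloud storage driver) is responsible for binding this claim to an actual PersistentVolume that satisfies the requested size and access mode.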
Also remember that we mentioned StatefulSets as the recommended workload (rather than Deployments) for stateful applications such as databases.
In the article, we have only used containers that were publicly available in Docker Hub. However, it is likely that you will be building and deploying your own application containers, which you might not want to upload to a public docker registry like Docker Hub. In those situations:
Another important aspect will be describing your application as a set of YAML files with the various objects. I would suggest getting familiar with Helm and considering helm charts for that purpose.
Finally, this might be a good time to think about CI/CD.
Many different public clouds provide Kubernetes services. You can get Kubernetes as a service in Azure, AWS, Google cloud, Digital Ocean and more.
In addition, Kubernetes has drivers which implement features such as persistent volumes or load balancers using specific cloud services.
External-dns and cert-manager are great ways to automatically generate both DNS entries and SSL certificates directly from your application manifest.
Velero is an awesome tool for backing up and restoring your cluster, including data in persistent volumes.
The Prometheus and Grafana operators provide the basis for monitoring your cluster. With the fluentd operator, you can collect all logs and send them to a persistent destination.
Network policies, RBAC and resource quotas are the first stops when sharing a cluster between multiple apps and/or teams.
If you want to secure your cluster, trivy and anchore can scan your containers for vulnerabilities, falco can provide runtime security and kube-bench runs an audit of the cluster configuration.
..and many, many more than I can remember or list here.
Kubernetes can be quite intimidating to get started with. There are plenty of new concepts and tools to get used to, which can make running a single container for the first time a daunting task.
However, it is possible to focus on the most basic functionality and elements that let you, a developer, deploy an application and interact with it.
I have spent the last year helping my company transition to Kubernetes, with dozens of developers having to ramp up on Kubernetes in order to achieve their goals. In my experience, getting these fundamentals right is key. Without them, I have seen individuals and teams get blocked or go down the wrong path, both ending in frustration and wasted time.
Regardless of whether you are simply curious about Kubernetes or embracing it at work, I hope this article helped you grasp these basic concepts and sparked your interest. In the end, the contents here are but a tiny fraction of everything you could learn about Kubernetes!
The code submitted presents two managed .NET C# classes, MessageBeep() and Beep(), which emulate the Win32 platform calls of the same names. Installation of Microsoft DirectX 9.0 is required. The code has been tested on Windows XP and Win2K using standard AC97 onboard sound cards.
The .NET Window Forms test application, MessageBeepTest, plays the six MessageBeep sounds using both managed DirectX and unmanaged PInvoke versions. If successful, the six sounds will sound the same using the two techniques (the sounds can be changed or turned off using the Windows Control Panel).
Additionally, the Beep function will play a sine or square wave of a specified frequency and duration using both methods.
Ever since the Beta 2 days, one of the questions asked most frequently in the C# newsgroups is "Where is the MessageBeep?", "Where is the MessageBeep?", and again "Where is the MessageBeep?" Until the DirectX 9.0 SDK was released last month, the answer has always been either:
The former solution is of course unmanaged, and the latter solution is, for a C# programmer, an abomination: tacit admission that your chosen .NET language is somehow incomplete. With the release of DirectX 9.0 and some time made free over the recent holiday season, I was able to investigate whether the MessageBeep question could be answered once and for all.
If I've done it correctly, the MessageBeep code was actually very easy. Some research on the web was required to find where in the Windows Registry the sounds are mapped to the six sound types: System Asterisk, System Exclamation, System Hand, System Question, System Default and Simple Beep.
// From winuser.h
#define MB_OK 0x00000000L
#define MB_ICONHAND 0x00000010L
#define MB_ICONQUESTION 0x00000020L
#define MB_ICONEXCLAMATION 0x00000030L
#define MB_ICONASTERISK 0x00000040L
// Simple Beep uses 0xffffffffL (there is no named constant in winuser.h)
becomes:
public enum BeepTypes : long
{
MB_OK = 0x0,
MB_ICONHAND = 0x00000010,
MB_ICONQUESTION = 0x00000020,
MB_ICONEXCLAMATION = 0x00000030,
MB_ICONASTERISK = 0x00000040,
SIMPLE_BEEP = 0xffffffff
};
Then just cause the DirectX DirectSound function to be run on a background thread and as quick as "Bob's your Uncle**" you're finished. One difference between the techniques is that if no sound card is available, the SimpleBeep sound is generated using an internal speaker in the unmanaged version. I found no way to access the internal speaker using DirectX. An AC97 hardware independent way of accessing the speaker using unsafe pointers and a managed wrapper might be possible but I'm not sure how widespread the use of the AC97 standard is.
**This CP phrase is becoming popular in general use and I have no idea what it means.
Implementing the Beep() class was more difficult. The concept is simple enough: use a DirectX secondary buffer in "looping" mode and write the tone samples to an area in the buffer DirectX isn't using at the moment.
The problem was with playing the sound the second time. A small portion of the last tone is heard at the beginning of the new tone. But it is also heard when a different secondary buffer is played! For instance when a buffer created for a MessageBeep sound is played after Beep. The problem seems to be fixed by filling the Beep buffer with zeros when finished and playing the buffer one last time with looping disabled.
I've limited the sample rate to 8K samples per second (sps) with a maximum tone of 4KHz. But I've run the code at 32K sps on a 1 GHz PIII without problems. A sixteen bit PCM word is written out in stereo.
But as with MessageBeep the implementation isn't quite the same as the unmanaged version. The unmanaged Beep puts out a square wave. I create a sine wave and drive the signal to the rails when zero is crossed to generate a square wave. It doesn't sound the same at all frequencies. Some work needs to be done to create the same sound but I like the sine wave myself.
A necessary tool for any development work with pc sound is Cool Edit 2000 from Syntrillium (and for this plug, I fully expect my free version of Cool Edit Pro to be in the mail). An improvement in the sine wave could be made by stopping the sound when it is at zero or by using a low pass filter. The little bit at the end, as shown below, can be heard as a pop.
The only method in the MessageBeep() class is MessageBeep(BeepTypes mbType). Beep() is called as void Beep() or void Beep(double frequency, System.Int32 duration); other methods set the frequency, volume and tone duration, and turn the square wave mode off and on.

The MessageBeep() function returns immediately and the Beep() function returns when the tone is complete, as do the Win32 functions.
The two classes are in the Win32Emu namespace (Emu for emulation).
I'm not sure if I've answered the question "Where is the MessageBeep?" to everyone's satisfaction but it's a start.
Version 1.1.0.0/030128 -- Initial release. Oops, I may have left the crypto key on in the build. Turn it off in your assembly; I'll update later. The frequency for C# in the key of C is 277Hz. I'll put that as the default.
In this article, we are going to talk about string formatting. The standard way of doing this in C is the old sprintf function. It has various flaws and is showing its age. C++ and the STL introduce iostreams and the << operator. While convenient for simple tasks, its formatting features are clunky and underpowered.

On the other hand, we have the .NET Framework with its String class, which has the formatting function String.Format. It is safer and easier to use than sprintf, but can only be used from managed code. This article will show the main problems of sprintf and will offer an alternative that can be used from native C++ code.
There are different versions of sprintf that provide different degrees of buffer overflow protection. The basic flavor of sprintf provides none: it will happily write past the end of the given buffer and will probably crash the program. The _snprintf function will not write past the end of the buffer, but will also not put a zero at the end if there is no space; the program will not crash immediately, but will most likely crash later. The new sprintf_s function fixes the buffer overflow problems, but it is only available in Visual Studio 2005 and up.
String.Format allocates the output buffer itself from the managed heap and can make it as big as it needs to.
The sprintf function uses the ellipsis syntax (...) to accept a variable number of arguments. The downside is that the function has no direct information about the arguments' types and can't perform any validation. It assumes that the argument count and types match the formatting string. This can lead to hard-to-spot bugs. For example:

std::string userName("user1");
int userData = 0;

// These will compile and often run, but will produce wrong results:
// the types of the arguments don't match the format
sprintf(buf, "user %d, data %s", userName.c_str(), userData);
// the string is missing .c_str()
sprintf(buf, "user %s, data %d", userName, userData);
In String.Format the formats of the arguments are optional. If the argument is a string it will be printed as a string; if it is a number it will be printed as a number.

// The .NET equivalent:
String.Format("user {0}, data {1}", userName, userData);
The sprintf function requires that the order of the arguments is exactly the same as the order of the format specifiers. The bad news is that different languages have different word order, so the program needs to provide the arguments in a different order to accommodate different languages. For example:

// English
sprintf(buf, "The population of %s is %d people.", "New York", 20000000);
// But maybe in some other language it has to be:
sprintf(buf, "%d people live in %s.", 20000000, "New York"); // the order is different
String.Format wins in this case too. Its format items explicitly specify which argument to use, and can do that in any order.

// The .NET equivalent - same code can be used for both languages,
// just the formatting string needs to change:
String.Format("The population of {0} is {1} people.", "New York", 20000000);
String.Format("{1} people live in {0}.", "New York", 20000000);
The FormatString function is a smart and type-safe alternative to sprintf that can be used by native C++ code. It is used like this:

FormatString(buffer, buffer_size_in_characters, format, arguments...);
The function has two versions: a char version and a wchar_t version.
The format string contains items similar to String.Format:

{index[,width][:format][@comment]}
index is the zero-based index in the argument list. If the index is past the last argument, FormatString will assert.

width is the optional width of the result. If width is less than zero, the result will be left-aligned. The width can be in the format '*<index>'; then <index> must be the index of another argument in the list that provides the width value.

format is the optional format of the result. The available formats depend on the argument type. If the format is not supported for the given argument, FormatString will assert.

comment is ignored. It can be a hint that describes the meaning of the argument, or provide examples to aid the localization of the formatting string.
The result of FormatString always fits in the provided buffer and is always zero-terminated. Special cases like the buffer ending in the middle of a double-byte character or in the middle of a surrogate pair are also handled.

Since the { and } characters are used to define format items, they need to be escaped in the format string as {{ and }}.
For integer arguments, the supported formats include a localized number format (like GetNumberFormat but with no fractional digits), byte sizes (StrFormatByteSize), kilobyte sizes (StrFormatKBSize), and time intervals (StrFromTimeInterval, with an optional number of significant digits between 1 and 6). The default format for signed integers is 'd' and for unsigned integers is 'u'.

For floating-point arguments, '*<index>' is an index of another argument that provides the number of fractional digits. Currency (GetCurrencyFormat) and localized number (GetNumberFormat with an optional number of fractional digits) formats are also available. The default format for floats and doubles is 'f'.
The char version of FormatString doesn't support any formats for ANSI strings. The wchar_t version supports converting them to UNICODE, optionally with an explicit code page; if a code page is not given, the default (CP_ACP) is used.

Conversely, the wchar_t version of FormatString doesn't support any formats for UNICODE strings, while the char version supports converting them to ANSI; again, if a code page is not given, the default (CP_ACP) is used.
For SYSTEMTIME arguments, the date format prints the date (GetDateFormat) and the time format prints the time (GetTimeFormat). 'l' converts the time from UTC to local; 'f' is the same as 'l' but uses the file system rules*; an optional format string can be passed on to GetDateFormat.

* 'l' uses SystemTimeToTzSpecificLocalTime to convert from UTC to local time; 'f' uses FileTimeToLocalFileTime instead. The difference is that FileTimeToLocalFileTime uses the current daylight savings settings instead of the settings at the given date. This is incorrect, but is more consistent with the way Windows displays local file times. If STR_USE_WIN32_TIME is not defined, then the localtime function is used no matter whether 'l' or 'f' is specified; localtime produces results consistent with the file system (and FileTimeToLocalFileTime). You can read why the file system behaves this way here: The Old New Thing: Why Daylight Savings Time is nonintuitive.

The default format for SYSTEMTIME is 'd'.
char buf[100];

// The order of the arguments can change
FormatString(buf,100,"{1} people live in {0}.","New York",20000000);
-> 20000000 people live in New York.

// Signed values are printed as signed
FormatString(buf,100,"{0}",-1);
-> -1

// Unsigned values are printed as unsigned
FormatString(buf,100,"{0}",(unsigned int)-1);
-> 4294967295

// The same argument can be used more than once
FormatString(buf,100,"{0}, 0x{0,8:X0}",1);
-> 1, 0x00000001

// UNICODE text can be converted to ANSI
FormatString(buf,100,"{0}",L"test");
-> test

// Localized integer number
FormatString(buf,100,"{0:n}",12345678);
-> 12,345,678

// Time interval
FormatString(buf,100,"{0:t3}",12345678);
-> 3 hr, 25 min

// Floating point number
FormatString(buf,100,"{0}",12345.678);
-> 12345.678000

// Localized floating point number
FormatString(buf,100,"{0:n*1}",12345.678,2);
-> 12,345.68

// Show current time
SYSTEMTIME st;
GetSystemTime(&st);
FormatString(buf,100,"{0:dl} {0:tl}",st);
-> 11/25/2006 1:26 PM

// Use custom date format
FormatString(buf,100,"{0:ddddd',' MMM dd yy}",st);
-> Saturday, Nov 25 06
The FormatString function has 10 optional arguments arg1, ... arg10 of type const CFormatArg &, like this:
class CFormatArg {
public:
    CFormatArg( void );
    CFormatArg( char x );
    CFormatArg( unsigned char x );
    CFormatArg( short x );
    CFormatArg( unsigned short x );
    ..........

    enum {
        TYPE_NONE=0,
        TYPE_INT=1,
        TYPE_UINT=2,
        .....
    };

    union {
        int i;
        __int64 i64;
        double d;
        const char *s;
        const wchar_t *ws;
        const SYSTEMTIME *t;
    };
    int type;

    static CFormatArg s_Null;
};

int FormatString( char *string, int len, const char *format,
                  const CFormatArg &arg1=CFormatArg::s_Null, ...,
                  const CFormatArg &arg10=CFormatArg::s_Null );
The
CFormatArg class contains constructors for each of the supported types. Each constructor sets the
type member depending on the type of its argument. When the
FormatString function is called with an actual argument, a temporary
CFormatArg object is created that stores the value and the type of the argument. The
FormatString function can then determine the number of arguments that are provided and has access to their types and values.
Often you don't want to use a buffer of a fixed size, but one that is dynamically allocated. Use the
FormatStringAlloc function instead:
char *string=FormatStringAlloc(allocator, format, arguments );
The first parameter is an object with a virtual member function responsible for allocating and growing the
string buffer:
class CFormatStringAllocator {
public:
    virtual bool Realloc( void *&ptr, int size );
    static CFormatStringAllocator g_DefaultAllocator;
};

bool CFormatStringAllocator::Realloc( void *&ptr, int size )
{
    void *res=realloc(ptr,size);
    if (ptr && !res) free(ptr);
    ptr=res;
    return res!=NULL;
}
The Realloc member function must reallocate the buffer pointed to by ptr to the given size (in bytes) and set ptr to the new address. The allocator will be called every 256 characters (approximately) to enlarge the buffer. The first time, Realloc is called with ptr=NULL. If an error occurs, Realloc must free the memory pointed to by ptr and return false, or throw an exception. If Realloc returns false, then FormatStringAlloc terminates and returns NULL.
The default allocator uses the
realloc function from the C run-time heap. To free the returned
string, you need to call
free(string). You can write your own allocator that uses a different heap or some other means of allocating memory. See further below for one example.
Often you don't want to output the formatted
string to a buffer, but to a file, to a text console, to the Visual Studio's debug window, etc. Use the
FormatStringOut function instead:
bool success=FormatStringOut(output, format, arguments );
The first parameter is an object with a virtual member function responsible for outputting portions of the result. There are separate classes for
char and
wchar_t:
// char version
class CFormatStringOutA {
public:
    virtual bool Output( const char *text, int len );
    static CFormatStringOutA g_DefaultOut;
};

bool CFormatStringOutA::Output( const char *text, int len )
{
    for (int i=0;i<len;i++)
        if (putchar(text[i])==EOF) return false;
    return true;
}

// wchar_t version
class CFormatStringOutW {
public:
    virtual bool Output( const wchar_t *text, int len );
    static CFormatStringOutW g_DefaultOut;
};

bool CFormatStringOutW::Output( const wchar_t *text, int len )
{
    for (int i=0;i<len;i++)
        if (putwchar(text[i])==WEOF) return false;
    return true;
}
The
Output member function will be called with each portion of the result. The
len parameter is the number of characters. Note that the text is not guaranteed to be zero-terminated.
Output must return
false or throw an exception if there is an error. If
Output returns
false then
FormatStringOut terminates and returns
false.
The default implementations just use
putchar/
putwchar to send the text to the console. You can write your own output class for
iostream,
FILE*, Win32
HANDLE, etc.
The
CFormatTime class derives from
CFormatArg and allows you to use different date/time formats. You use it like this:
time_t t=time(NULL);
FormatString(buf, 100, "local time: {0:dl} {0:tl}", CFormatTime(t));
-> local time: 11/25/2006 1:26 PM
You can create your own classes that derive from
CFormatArg to support more data types or add more formatting options.
FormatString.h defines 3 macros to be used with the argument list: FORMAT_STRING_ARGS_H, FORMAT_STRING_ARGS_CPP and FORMAT_STRING_ARGS_PASS.
You can use them to create other functions that have variable argument list and call
FormatString. For example, let's create a
MessageBox function that can format the message:
// in your header file
int MessageBox( HWND parent, UINT type, LPCTSTR caption, LPCTSTR format, FORMAT_STRING_ARGS_H );

// in your cpp file
int MessageBox( HWND parent, UINT type, LPCTSTR caption, LPCTSTR format, FORMAT_STRING_ARGS_CPP )
{
    TCHAR *text=FormatStringAlloc(CFormatStringAllocator::g_DefaultAllocator, format, FORMAT_STRING_ARGS_PASS);
    int res=MessageBox(parent,text,caption,type);
    free(text);
    return res;
}
If
FormatString and its siblings are called with no variable arguments, the format
string is directly copied to the output. In the example above, you can call
MessageBox(parent, type, caption, text) and the text will be displayed in the message box directly without being parsed for any format items.
The sample sources provide simple
string container classes
CStringA and
CStringW. The
strings stored in them have a reference count in the 4 bytes directly preceding the first character. When such a class is copied, the
string is not duplicated, just the reference count is incremented (so called copy-on-write with reference counting). When the
string is destroyed, the reference count is decremented and if it reaches 0, the memory is freed. The reference count is modified with
InterlockedIncrement and
InterlockedDecrement to be thread-safe.
The
CString type is set to
CStringA in ANSI configurations and to
CStringW in UNICODE configurations. This allows you to use the configuration-dependent
CString, while still being able to mix the ANSI and UNICODE types as needed.
The
CString classes have a
Format member function that formats a
string and assigns the result to the object. This is done by calling
FormatStringAlloc with a special allocator that allocates 4 bytes more than requested to store the reference count. The
CString classes also define a cast
operator CFormatArg, so they can be used directly as arguments to
FormatString:
CString s;
s.Format(_T("{0}"),"test");
FormatStringOut(CFormatStringOutA::g_DefaultOut,"s=\"{0}\"\n",s);
-> s="test"
The behavior of
CString is very similar to the ATL/MFC
strings and is provided here merely to demonstrate the use of custom memory allocators for
FormatStringAlloc and the use of the
CFormatArg cast operator. To use them in a real application, you may wish to add more functionality, like comparison operators, conversion operators/constructors between
CStringA and
CStringW,
string manipulation functionality, etc. Or simply use the existing classes
std::string or
ATL::CString.
The source files contain a set of
string utilities that can be used independently from
FormatString. Most of them are wrappers for the system
string functions. The functions come in pairs - one for ANSI and one for UNICODE, like this:
inline int Strlen( const char *str ) { return (int)strlen(str); }
inline int Strlen( const wchar_t *str ) { return (int)wcslen(str); }

int Strcpy( char *dst, int size, const char *src );
int Strcpy( wchar_t *dst, int size, const wchar_t *src );
The advantage of this approach over
_tcslen and
_tcscpy is that you can easily mix ANSI and UNICODE code and always use the same function name.
Other wrappers provide safe versions of
strncpy,
sprintf,
strcat, etc. that don't write past the provided buffer and always leave the result zero-terminated. They all compile cleanly under VC 6.0, VS 2003 and VS 2005.
These functions output the formatted result to an STL
string:
std::string FormatStdString( const char *format, ... );
std::wstring FormatStdString( const wchar_t *format, ... );
void FormatStdString( std::string &string, const char *format, ... );
void FormatStdString( std::wstring &string, const wchar_t *format, ... );
You can output formatted
string to STL
streams like this:
stream << StdStreamOut(format, parameters) << ...;
To use the source code, just drop the .h and .cpp files into your project:
- StringUtils - string helper functions. They can be used on their own.
- FormatString - string formatting functionality. Requires StringUtils.
- The string container classes. Require StringUtils and FormatString.
StringUtils.h defines several macros that can be used to enable or disable parts of the functionality:
- When the corresponding macro is defined, the Win32 functions WideCharToMultiByte and MultiByteToWideChar will be used to convert between char and wchar_t strings. Otherwise, wcstombs and mbstowcs will be used. The advantage of using the Win32 functions is that they support conversions between Unicode and different code pages, including UTF8.
- When the corresponding macro is defined, the FormatString functions will use the Win32 functionality for formatting numbers, dates and times. Otherwise they will try to simulate that functionality to some extent.
- When the corresponding macro is defined, the FormatString functions will support the time types time_t, SYSTEMTIME, FILETIME and DATE. Otherwise only time_t will be supported.
- When the corresponding macro is defined, IsDBCSLeadByte will be used to handle DBCS characters. Otherwise isleadbyte will be used.
- When the corresponding macro is defined, the FormatString functions will support std::string and std::wstring as input parameters. Also, FormatStdString and StdStreamOut will be defined, which output to std::string, std::wstring, std::ostream and std::wostream.
With these macros, you can selectively enable only the functionality you need and is supported by your compiler or platform.
Downloads: FormatString implementation for char and wchar_t strings, time formats, and streams.
Source: http://www.codeproject.com/KB/string/FormatString.aspx
"Lars J. Aas" <address@hidden> writes: > I have a problem with the AS_ESCAPE macro. I figured the AS_* namespace > was reserved for shell-time (configure-time) macros, but AS_ESCAPE is > clearly an m4 autoconf-time macro. I haven't looked over the other > AS_* macros, so this may be true for more than AS_ESCAPE, but I suggest > keeping AS_* for shell-time macros like AS_MKDIR_P and AS_DIRNAME, and > move AS_ESCAPE to m4_escape/m4_string_escape/m4[_]sh_escape or something > similar. The AS_* macros should work with shell-variable arguments the > way I see it... > My idea is that AS is related to the shell, in anyway. But I agree with your concern. But it's a valid point. We could have m4_escape, indeed, m4_dirname as presented by Derek, and import AC_INDIR_IF into sh.m4, and have AS_DIRNAME and AS_ESCAPE polymorph depending whether the computation can be done at m4 time or sh. Any patch would be accepted for m4_escape :)
Source: http://lists.gnu.org/archive/html/autoconf/2001-02/msg00047.html
I am writing a C++ program. I am using the function system("PAUSE");. I was wondering how I could make it do the same thing, just without the words "Press any key to continue....".
I don't know if that's possible, but try this:
Set the program into a loop, until whatever condition is met. (ie while x do y, if z do whatever.)
This way, you will have full control over both the message (if any) and the action to resume the program.
Why use the system("PAUSE"); command? If the only thing you want is to wait for a keystroke, then do it like this:
Code:
#include <stdio.h>
int main(void)
{
char c;
c=getch();
return 0;
}
the getch() function works only on windows (with the standard libraries at least) and otherwise you could use the getchar() function.
this way the system is waiting for a keystroke before the program continues...
#include <stdio.h>
int main(void)
{
char c;
c=getch();
return 0;
}
you also need the conio.h header file included in order to use getch()
Yes, that's right (if you're programming for Linux). Otherwise you could use the getchar() function for it...
Source: http://www.antionline.com/showthread.php?259589-Ofstream-help!!!&goto=nextoldest
In this lecture, we consider an extension of the previously studied job search model of McCall [McC70].
In the McCall model, an unemployed worker decides when to accept a permanent position at a specified wage, given
- his or her discount factor
- the level of unemployment compensation
- the distribution from which wage offers are drawn
In the version considered below, the wage distribution is unknown and must be learned.
Let’s start with some imports
from numba import njit, prange, vectorize
from interpolation import mlinterp, interp
from math import gamma
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import cm
The Basic McCall Model¶
Recall that, in the baseline model, an unemployed worker is presented in each period with a permanent job offer at wage $ W_t $.
At time $ t $, our worker either
- accepts the offer and works permanently at constant wage $ W_t $
- rejects the offer, receives unemployment compensation $ c $ and reconsiders next period
The wage sequence $ \{W_t\} $ is IID and generated from known density $ q $.
The worker aims to maximize the expected discounted sum of earnings $ \mathbb{E} \sum_{t=0}^{\infty} \beta^t y_t $. The value function $ v $ satisfies the recursion
$$ v(w) = \max \left\{ \frac{w}{1 - \beta}, \, c + \beta \int v(w')q(w') dw' \right\} \tag{1} $$
The optimal policy has the form $ \mathbf{1}\{w \geq \bar w\} $, where $ \bar w $ is a constant known as the reservation wage.
Offer Distribution Unknown¶
Now let’s extend the model by considering the variation presented in [LS18], section 6.6.
The model is as above, apart from the fact that
- the density $ q $ is unknown
- the worker learns about $ q $ by starting with a prior and updating based on wage offers that he/she observes
The worker knows there are two possible distributions $ F $ and $ G $ — with densities $ f $ and $ g $.
At the start of time, “nature” selects $ q $ to be either $ f $ or $ g $. The worker does not observe this draw; instead, he or she holds a belief $ \pi_t $, the probability assigned to $ q = f $ at time $ t $, and updates it after observing the wage offer $ w_{t+1} $ via
$$ \pi_{t+1} = \frac{\pi_t f(w_{t+1})}{\pi_t f(w_{t+1}) + (1 - \pi_t) g(w_{t+1})} \tag{2} $$
This last expression follows from Bayes’ rule, which tells us that

$$ \mathbb{P}\{q = f \,|\, W = w\} = \frac{\mathbb{P}\{W = w \,|\, q = f\}\mathbb{P}\{q = f\}} {\mathbb{P}\{W = w\}} \quad \text{and} \quad \mathbb{P}\{W = w\} = \sum_{\omega \in \{f, g\}} \mathbb{P}\{W = w \,|\, q = \omega\} \mathbb{P}\{q = \omega\} $$
The fact that (2) is recursive allows us to progress to a recursive solution method.
Letting

$$ q_{\pi}(w) := \pi f(w) + (1 - \pi) g(w) \quad \text{and} \quad \kappa(w, \pi) := \frac{\pi f(w)}{\pi f(w) + (1 - \pi) g(w)} $$
we can express the value function for the unemployed worker recursively as follows
$$ v(w, \pi) = \max \left\{ \frac{w}{1 - \beta}, \, c + \beta \int v(w', \pi') \, q_{\pi}(w') \, dw' \right\} \quad \text{where} \quad \pi' = \kappa(w', \pi) \tag{3} $$
Notice that the current guess $ \pi $ is a state variable, since it affects the worker’s perception of probabilities for future rewards.
def beta_function_factory(a, b):

    @vectorize
    def p(x):
        r = gamma(a + b) / (gamma(a) * gamma(b))
        return r * x**(a-1) * (1 - x)**(b-1)

    return p

x_grid = np.linspace(0, 1, 100)
f = beta_function_factory(1, 1)
g = beta_function_factory(3, 1.2)

fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(x_grid, f(x_grid), label='$f$', lw=2)
ax.plot(x_grid, g(x_grid), label='$g$', lw=2)
ax.legend()
plt.show()
The class
SearchProblem is used to store parameters and methods needed to compute optimal actions.
class SearchProblem:
    """
    A class to store a given parameterization of the
    "offer distribution unknown" model.
    """

    def __init__(self,
                 β=0.95,            # Discount factor
                 c=0.3,             # Unemployment compensation
                 F_a=1,
                 F_b=1,
                 G_a=3,
                 G_b=1.2,
                 w_max=1,           # Maximum wage possible
                 w_grid_size=100,
                 π_grid_size=100,
                 mc_size=500):

        self.β, self.c, self.w_max = β, c, w_max
        self.f = beta_function_factory(F_a, F_b)
        self.g = beta_function_factory(G_a, G_b)

        self.π_min, self.π_max = 1e-3, 1-1e-3    # Avoids instability
        self.w_grid = np.linspace(0, w_max, w_grid_size)
        self.π_grid = np.linspace(self.π_min, self.π_max, π_grid_size)

        self.mc_size = mc_size

        self.w_f = np.random.beta(F_a, F_b, mc_size)
        self.w_g = np.random.beta(G_a, G_b, mc_size)
The following function takes an instance of this class and returns jitted versions
of the Bellman operator
T, and a
get_greedy() function to compute the approximate
optimal policy from a guess
v of the value function
def operator_factory(sp, parallel_flag=True):

    ...

    @njit(parallel=parallel_flag)
    def T(v):
        """
        The Bellman operator.
        """
        v_func = lambda x, y: mlinterp((w_grid, π_grid), v, (x, y))
        v_new = np.empty_like(v)
        ...
        v_new[i, j] = max(v_1, v_2)
        return v_new

    @njit(parallel=parallel_flag)
    def get_greedy(v):
        """
        Compute optimal actions taking v as the value function.
        """
        v_func = lambda x, y: mlinterp((w_grid, π_grid), v, (x, y))
        σ = np.empty_like(v)
        ...
        σ[i, j] = v_1 > v_2    # Evaluates to 1 or 0
        return σ

    return T, get_greedy
We will omit a detailed discussion of the code because there is a more efficient solution method that we will use later.
To solve the model we will use the following function that iterates using T to find a fixed point
def solve_model(sp,
                use_parallel=True,
                tol=1e-4,
                max_iter=1000,
                verbose=True,
                print_skip=5):
    """
    Solves for the value function

    * sp is an instance of SearchProblem
    """
    T, _ = operator_factory(sp, use_parallel)

    # Set up loop
    i = 0
    error = tol + 1
    m, n = len(sp.w_grid), len(sp.π_grid)

    # Initialize v
    v = np.zeros((m, n)) + sp.c / (1 - sp.β)

    # Iterate with T until the error falls below tol
    while i < max_iter and error > tol:
        v_new = T(v)
        error = np.max(np.abs(v - v_new))
        i += 1
        if verbose and i % print_skip == 0:
            print(f"Error at iteration {i} is {error}.")
        v = v_new

    if i == max_iter:
        print("Failed to converge!")

    if verbose and i < max_iter:
        print(f"\nConverged in {i} iterations.")

    return v_new
Let’s look at solutions computed from value function iteration
sp = SearchProblem()
v_star = solve_model(sp)

fig, ax = plt.subplots(figsize=(6, 6))
ax.contourf(sp.π_grid, sp.w_grid, v_star, 12, alpha=0.6, cmap=cm.jet)
cs = ax.contour(sp.π_grid, sp.w_grid, v_star, 12, colors="black")
ax.clabel(cs, inline=1, fontsize=10)
ax.set(xlabel='$\pi$', ylabel='$w$')
plt.show()
Error at iteration 5 is 0.6352858910475305. Error at iteration 10 is 0.10435221521733062. Error at iteration 15 is 0.022988668724414296. Error at iteration 20 is 0.00549231323453192. Error at iteration 25 is 0.001321784895397471. Error at iteration 30 is 0.00031813569478877923. Error at iteration 35 is 7.657146478301513e-05. Converged in 35 iterations.
T, get_greedy = operator_factory(sp)
σ_star = get_greedy(v_star)

fig, ax = plt.subplots(figsize=(6, 6))
ax.contourf(sp.π_grid, sp.w_grid, σ_star, 1, alpha=0.6, cmap=cm.jet)
ax.contour(sp.π_grid, sp.w_grid, σ_star, 1, colors="black")
ax.set(xlabel='$\pi$', ylabel='$w$')
ax.text(0.5, 0.6, 'reject')
ax.text(0.7, 0.9, 'accept')
plt.show()
The results fit well with our intuition from the earlier discussion.
- The black line in the figure above corresponds to the function $ \bar w(\pi) $ introduced there.
- It is decreasing as expected.
Take 2: A More Efficient Method¶
Let’s consider another method to solve for the optimal policy.
We will iterate on the reservation wage function itself rather than on the value function. When $ w = \bar w(\pi) $, the worker is indifferent between accepting and rejecting, so the two options on the right-hand side of (3) have equal value:
$$ \frac{\bar w(\pi)}{1 - \beta} = c + \beta \int v(w', \pi') \, q_{\pi}(w') \, dw' \tag{4} $$
Together, (3) and (4) give
$$ v(w, \pi) = \max \left\{ \frac{w}{1 - \beta} ,\, \frac{\bar w(\pi)}{1 - \beta} \right\} \tag{5} $$
Combining (4) and (5), we obtain

$$ \frac{\bar w(\pi)}{1 - \beta} = c + \beta \int \max \left\{ \frac{w'}{1 - \beta} ,\, \frac{\bar w(\pi')}{1 - \beta} \right\} \, q_{\pi}(w') \, dw' $$
Multiplying by $ 1 - \beta $, substituting in $ \pi' = \kappa(w', \pi) $ and using $ \circ $ for composition of functions yields
$$ \bar w(\pi) = (1 - \beta) c + \beta \int \max \left\{ w', \bar w \circ \kappa(w', \pi) \right\} \, q_{\pi}(w') \, dw' \tag{6} $$

Equation (6) is a functional equation in the unknown function $ \bar w $; we will call it the reservation wage functional equation (RWFE). To study it, let

- $ b[0,1] $ be the bounded real-valued functions on $ [0,1] $
- $ \| \omega \| := \sup_{x \in [0,1]} | \omega(x) | $
Consider the operator $ Q $ mapping $ \omega \in b[0,1] $ into $ Q\omega \in b[0,1] $ via
$$ (Q \omega)(\pi) = (1 - \beta) c + \beta \int \max \left\{ w', \omega \circ \kappa(w', \pi) \right\} \, q_{\pi}(w') \, dw' \tag{7} $$
Comparing (6) and (7), we see that the set of fixed points of $ Q $ exactly coincides with the set of solutions to the RWFE.
Moreover, for any $ \omega, \omega' \in b[0,1] $, basic algebra and the triangle inequality for integrals tells us that
$$ |(Q \omega)(\pi) - (Q \omega')(\pi)| \leq \beta \int \left| \max \left\{w', \omega \circ \kappa(w', \pi) \right\} - \max \left\{w', \omega' \circ \kappa(w', \pi) \right\} \right| \, q_{\pi}(w') \, dw' \tag{8} $$
Working case by case, it is easy to check that for real numbers $ a, b, c $ we always have
$$ | \max\{a, b\} - \max\{a, c\}| \leq | b - c| \tag{9} $$
Combining (8) and (9) yields
$$ |(Q \omega)(\pi) - (Q \omega')(\pi)| \leq \beta \int \left| \omega \circ \kappa(w', \pi) - \omega' \circ \kappa(w', \pi) \right| \, q_{\pi}(w') \, dw' \leq \beta \| \omega - \omega' \| \tag{10} $$
Taking the supremum over $ \pi $ now gives us
$$ \|Q \omega - Q \omega'\| \leq \beta \| \omega - \omega' \| \tag{11} $$
In other words, $ Q $ is a contraction of modulus $ \beta $ on the complete metric space $ (b[0,1], \| \cdot \|) $.
Hence
- A unique solution $ \bar w $ to the RWFE exists in $ b[0,1] $.
- $ Q^k \omega \to \bar w $ uniformly as $ k \to \infty $, for any $ \omega \in b[0,1] $.
The following function returns a jitted implementation of $ Q $:

def Q_factory(sp, parallel_flag=True):

    ...

    @njit(parallel=parallel_flag)
    def Q(ω):
        """
        Updates the reservation wage function guess ω via the operator Q.
        """
        ω_func = lambda p: interp(π_grid, ω, p)
        ω_new = np.empty_like(ω)
        for i in prange(len(π_grid)):
            π = π_grid[i]
            integral_f, integral_g = 0.0, 0.0
            for m in prange(mc_size):
                integral_f += max(w_f[m], ω_func(κ(w_f[m], π)))
                integral_g += max(w_g[m], ω_func(κ(w_g[m], π)))
            integral = (π * integral_f + (1 - π) * integral_g) / mc_size
            ω_new[i] = (1 - β) * c + β * integral
        return ω_new

    return Q
In the next exercise, you are asked to compute an approximation to $ \bar w $.
Exercise 1¶
Use the default parameters and
Q_factory to compute an optimal policy.
Your result should coincide closely with the figure for the optimal policy shown above.
Try experimenting with different parameters, and confirm that the change in the optimal policy coincides with your intuition.
def solve_wbar(sp,
               use_parallel=True,
               tol=1e-4,
               max_iter=1000,
               verbose=True,
               print_skip=5):

    Q = Q_factory(sp, use_parallel)

    # Set up loop
    i = 0
    error = tol + 1
    m, n = len(sp.w_grid), len(sp.π_grid)

    # Initialize w
    w = np.ones_like(sp.π_grid)

    while i < max_iter and error > tol:
        w_new = Q(w)
        error = np.max(np.abs(w - w_new))
        i += 1
        if verbose and i % print_skip == 0:
            print(f"Error at iteration {i} is {error}.")
        w = w_new

    if i == max_iter:
        print("Failed to converge!")

    if verbose and i < max_iter:
        print(f"\nConverged in {i} iterations.")

    return w_new
The solution can be plotted as follows
sp = SearchProblem()
w_bar = solve_wbar(sp)

fig, ax = plt.subplots(figsize=(9, 7))
ax.plot(sp.π_grid, w_bar, color='k')
ax.fill_between(sp.π_grid, 0, w_bar, color='blue', alpha=0.15)
ax.fill_between(sp.π_grid, w_bar, sp.w_max, color='green', alpha=0.15)
ax.text(0.5, 0.6, 'reject')
ax.text(0.7, 0.9, 'accept')
ax.set(xlabel='$\pi$', ylabel='$w$')
ax.grid()
plt.show()
Error at iteration 5 is 0.020933617848546304. Error at iteration 10 is 0.006441405950027512. Error at iteration 15 is 0.0014972576066766274. Error at iteration 20 is 0.00032556310267861654. Converged in 24 iterations.
Appendix¶
The next piece of code is just a fun simulation to see the effect of a change in the underlying distribution on the unemployment rate.
At a point in the simulation, the distribution becomes significantly worse.
It takes a while for agents to learn this, and in the meantime, they are too optimistic and turn down too many jobs.
As a result, the unemployment rate spikes
F_a, F_b, G_a, G_b = 1, 1, 3, 1.2

sp = SearchProblem(F_a=F_a, F_b=F_b, G_a=G_a, G_b=G_b)
f, g = sp.f, sp.g

# Solve for reservation wage
w_bar = solve_wbar(sp, verbose=False)

# Interpolate reservation wage function
π_grid = sp.π_grid
w_func = njit(lambda x: interp(π_grid, w_bar, x))

@njit
def update(a, b, e, π):
    "Update e and π by drawing wage offer from beta distribution with parameters a and b"
    if e == False:
        w = np.random.beta(a, b)       # Draw random wage
        if w >= w_func(π):
            e = True                   # Take new job
        else:
            π = 1 / (1 + ((1 - π) * g(w)) / (π * f(w)))
    return e, π

@njit
def simulate_path(F_a=F_a, F_b=F_b,
                  G_a=G_a, G_b=G_b,
                  N=5000,      # Number of agents
                  T=600,       # Simulation length
                  d=200,       # Change date
                  s=0.025):    # Separation rate
    """Simulates path of employment for N workers over T periods"""

    e = np.ones((N, T+1))
    π = np.ones((N, T+1)) * 1e-3

    a, b = G_a, G_b    # Initial distribution parameters

    for t in range(T+1):
        if t == d:
            a, b = F_a, F_b    # Change distribution parameters

        # Update each agent
        for n in range(N):
            if e[n, t] == 1:                   # If agent is currently employed
                p = np.random.uniform(0, 1)
                if p <= s:                     # Randomly separate with probability s
                    e[n, t] = 0

            new_e, new_π = update(a, b, e[n, t], π[n, t])
            e[n, t+1] = new_e
            π[n, t+1] = new_π

    return e[:, 1:]

d = 200    # Change distribution at time d
unemployment_rate = 1 - simulate_path(d=d).mean(axis=0)

fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(unemployment_rate)
ax.axvline(d, color='r', alpha=0.6, label='Change date')
ax.set_xlabel('Time')
ax.set_title('Unemployment rate')
ax.legend()
plt.show()
Source: https://lectures.quantecon.org/py/odu.html
|
Eclipse Community Forums - RDF feed Eclipse Community Forums Tips on How to Integrate Felix GoGo, SSH, dan Gemini Blueprint to Eclipse Virgo <![CDATA[Glyn Normington suggested me to create this thread so other Virgo users would benefit this information. I blogged a tutorial on how to integrate Felix GoGo, SSH, and OSGi Blueprint (Gemini Blueprint, but working instructions for Apache Aries also available) with Eclipse Virgo Web Server here: I also created a sample PDE plug-in project openly accessible in GitHub: In my sample case Gemini Blueprint 1.0.0.M1 ran without problems in Virgo 3.0.0.M5! Yay! So do you want Blueprint support in Virgo ? If so, please comment/vote on and maybe you can help too by testing/developing/etc. Though particularly for my sample case, Apache Karaf won big (not to brag the competition, but hoping it would be a constructive feedback to Virgo team) due to well-integrated Felix GoGo, SSH server, Apache Aries as OSGi Blueprint implementation, and also good logging that can be inspected right inside console/SSH (Karaf shell commands like log:tail). I hope Virgo will also integrate these features, to which Borislav has confirmed that Virgo 3.0 will release with Felix GoGo + SSH integration! (oh yeah!) Actually, installing Gemini is much less troublesome than Aries because I only had to put 3 bundles (io, core, extender, pretty intuitive) without no missing external dependency. For Aries Blueprint, I needed 3 Aries bundles (aries.blueprint, aries.proxy, and aries.util), the latter two are not so intuitive to infer that they're blueprint's dependencies but perhaps it's because Aries modules (as a whole) are more cohesive than Gemini subprojects. Then Aries needed 3 more JARs: asm, asm.tree, asm.commons. I just happen to have those (SpringSource EBR version) lying around so I'm a bit lucky. There is a but... 
On other project I use Blueprint "Compendium", it seems like Aries' "proprietary" extension on OSGi Blueprint, specifically these namespaces: If Gemini Blueprint can support those in the future, or better yet: standardize them, it'd be great. Overall, Gemini Blueprint on Virgo works great, just as advertised, and I have yet to hit any edge cases.]]> Hendy Irawan 2011-06-16T16:12:51-00:00 Re: Tips on How to Integrate Felix GoGo, SSH, dan Gemini Blueprint to Eclipse Virgo <![CDATA[Hi Hendy Thanks for "surfacing" the discussion in bug 317943. As promised, your blog is now linked from the Virgo home page. We need Gemini Blueprint to release before we can consume it in a Virgo release. I understand the first release of Gemini Blueprint is not too far away, so that will free us up to upgrade Spring DM 1.2.1 to Gemini Blueprint (which includes Spring DM 2.0 function). There may turn out to be a fair amount of work to do because of the close integration of the kernel with Spring DM. On the other hand, it could be a relatively easy drop-in replacement. Meanwhile, running Blueprint alongside Spring DM is a good way of running blueprint apps in Virgo. As for Apache Karaf, yes there are several nice features that Virgo could learn from. We already have some bugzillas on the books which mention Karaf, but I would be delighted if you raised enhancement bugs for other Karaf capabilities that Virgo is missing. Feel free to send us patches too. As for the standardisation of custom namespaces in Blueprint, this is covered by OSGi RFC 155. Although this spec does not appear in the recent OSGi draft spec, it is referred to, and pre-req'd, by the draft specs for Blueprint Declarative Transactions and Blueprint Bean Interceptors. Regards, Glyn]]> Glyn Normington 2011-06-17T08:32:00-00:00
Source: http://www.eclipse.org/forums/feed.php?mode=m&th=214002&basic=1
What you need to know about Storybook 3.0
Getting up to speed with the new React Storybook
Today marked an important milestone in the Storybook project: the release of version 3.0. It is the first release by the new community-led organization formed around the project.
In this post I’ll cover what you need to know to get up to speed on Storybook 3.0. That includes a little history, what 3.0 signifies, and where the project is going.
What is Storybook?
Storybook is a Component Explorer, or “Component Development Environment”. This means that it is a tool that allows you to build your apps one component at a time. The idea is that you define “stories” or states of your component and the tool allows you to visualize and interact with that single state in isolation.
storiesOf('Task')
  .add('inbox task', () => (
    <Task task={{
      title: "Test Task",
      subtitle: "on TestBoard",
      state: "TASK_INBOX",
    }} />
  ));
We at Chroma have written extensively about the benefits of this style of development, and you can read more on our blog, but suffice to say many developers, including cutting edge teams at Airbnb, Coursera and Slack are using Storybook to level up their front end development.
Storybook is also known as “React Storybook” because it works with React and React Native components. However this may change soon! (see the future section below).
What has changed about Storybook?
In the introduction I mentioned that the new release is a key milestone for Storybook because it marks the transition to a community maintenance model.
The story (pun intended) here is that Storybook was first developed by the team at Kadira, a Sri Lankan startup well known in the Meteor community for the eponymous application monitoring tool. Kadira was led by the prolific and inspiring Arunoda Sisiripala, and after recognising the benefits of React and component isolation early on, built a thriving and growing community around the tool.
However, due to personal and professional reasons, Kadira disbanded at the start of the year. With the core maintainers pursuing other projects a quiet period followed.
In the meantime, Storybook continued to pick up steam and adoption in teams around the world. It wasn’t long until a GitHub issue was opened: “Is this project dead?” which prompted a flurry of volunteers to help with maintenance. A team formed, led by Norbert de Langen and Michael Shilman, to gracefully transition to community maintenance. (We at Chroma are excited to help out as part of the maintenance team too!)
You can read more about the history in Michael’s post on the subject.
What’s community maintenance all about?
A lot of the effort in the month or two since then has focussed on laying the foundations for future community-led maintenance. This has included:
- Bringing all the various parts of Storybook into a single repository.
- Updating dependencies and tests, alongside CI integration.
- Setting up contribution guidelines for things like issue creation and triage.
- Updating the documentation site, alongside a logo refresh, and asset consolidation.
- Started an open collective to crowdsource expenses for the project.
These changes should clear the way for more and more community involvement in the future. Whether it’s making it easier to contribute and verify changes, report and solve bugs, or document and promote the project, we’ve taken the lessons of other community open-source projects and applied them to Storybook.
So what’s in version 3?
The release of version 3 is, first and foremost, a consolidation of the efforts above, most visibly in the change from the @kadira npm namespace to @storybook. To install Storybook, you now use the @storybook/cli package, which installs the @storybook/react or @storybook/react-native package. (The release includes a codemod to migrate existing projects easily.)
This version does ship with some new features though, primarily migration to Webpack 2, which means Storybook is up-to-date with the latest and greatest in build tool technology.
The release also comes with better support for Create React Native App apps and the ability to customise your Storyshot tests to achieve a lot of new use-cases.
What’s coming next?
Now that all those organizational changes are out of the way, what is coming next?
We’ve just published a Roadmap with a summary of a lot of changes we are thinking of, but primary amongst them is support for other view layers.
I’m very excited about this! The architecture of Storybook was designed to support different rendering frameworks from the beginning (thus the support for both React and React Native). It’ll take a lot of work, but a unified UI for developing Angular, VueJS and other components seems like a great way to combine the efforts of all communities to supercharge the experience of all frontend developers.
How do I try it?
If you are trying storybook for the first time, just:
npm install -g @storybook/cli
cd my-react-or-react-native-project
getstorybook
In fact, the above commands will also work if you are upgrading an existing project to version 3! (Make sure you install the new CLI, and you’ll need to upgrade your project to use Webpack 2 at the same time.)
Give it a try, and come get involved in our burgeoning community!
For more articles on Storybook and Component-Driven Development, sign up to our weekly mailing list.
Source: https://www.chromatic.com/blog/what-you-need-to-know-about-storybook-3-0/
Sense to Shop! How to make sense (and more than just cents!) out of mystery shopping. Posts by MysteryShopMom.

and return shops: take 'em or leave 'em?

Sometimes shops require you to make a purchase and return it within a specified time frame in order to get credit for the shop. Other shops require an expensive (or maybe not so expensive) purchase with the option to return. I see pros and cons to this type of shop. In my opinion:

Pros:
- You have no OOP expense and receive the shop fee.
- You may get paid more and/or a bonus, as these are often less desirable shops.

Cons:
- You are often required to write a 2-part report: 1 part for the purchase and 1 part for the return = more work!
- You may be required to do the return the following day: more time, gas, etc.
- While it is nice to have no OOP expense, you may be missing out on a "freebie."

Pro/Con??: You may be required to do a little "acting" of a scenario when you do the return (i.e., the reason you did not want the item).

I'll be honest. I did one, yes, ONE, purchase and return and have not done another one. For one, I hate doing returns in general. Also, I felt like I was not completely believable when making up the excuse as to why I was returning the merchandise. The young salesman seemed so excited to have made the sale and I felt horrible returning it. Also, I was actually tempted to keep the very expensive merchandise (it was an optional return shop). In hindsight, I'm so glad that I did not keep it. The shop fee would not have come anywhere close to covering the cost. The report was more detailed than my non-return shops, therefore requiring more time to do the report. I do non-return shops for another company where I have no OOP expense and get paid the same, so for me, it's hard to justify spending so much time on a purchase and return shop. These are just my thoughts. What do you think?

Review- Tell Us About Us

I have done many shops for Tell Us About Us and have had wonderful experiences all the way around. To shop for them, you take a quiz specific to the client to be shopped and then you either self-assign or apply for the shop, depending on the client. They give fast feedback and pay quickly. The shops are very straightforward and the forms are simple. They don't have a large variety of clients, but the ones that they do have are very fun shops, in my opinion. The shops I have done are mostly reimbursement, but because of who the clients are, I personally think it is worth it. I have also contacted the schedulers and gotten prompt and helpful responses.

scoring by companies

The issue of shop scoring is a mystery to many shoppers (this one included). The consensus is that many companies grade you in some way. Some do not tell you your score or provide feedback. Others give feedback within a couple of days or even hours of the shop being submitted. Scores are generally based on the timeliness of the report, the clarity of the report, whether the specific shop instructions were followed, and grammar within the report. I always enjoy getting quick feedback letting me know what I did well and any constructive feedback if necessary. Some companies specify that a shopper must keep a feedback score of "x" to continue to request shops or "x" to self-assign. Other companies never mention feedback. Do you enjoy getting feedback, or think that some scores are arbitrary?

questions: What resources do you need to mystery shop?

One of the things I have loved about mystery shopping is that I have been able to make money without much of an initial financial investment. With that said, there are a few things you will need to get you started as a mystery shopper.

The intangible:

Time: You need to schedule a portion of each day dedicated to finding and scheduling shops. This may involve checking forums like Volition for feedback on companies, checking your email, and checking job boards. Of course you also have to carve out time to get to the shop location, complete the shop, and write the report.

Good grammar skills: Keep a grammar handbook handy or bookmark a website with grammar tips. Some editors have been known to return reports or reject them due to poor grammar and spelling. Don't let careless errors cost you money!

The basics:

Email/Internet: I can't imagine how you can be a successful mystery shopper these days without access to email and the internet. Companies communicate primarily, and some solely, through email.

A watch: 3 words for mystery shopping: timing, timing, timing! Almost every shop will require you to record, at least, the time you are in and out of a location. Most shops require many timings in between your entrance and exit. Though cell phones with clocks/timers may work for this in some instances, I have had some shop instructions specify NOT to use a cellphone for timing because it may appear to the client that you are distracted (talking on the phone, texting, etc.).

Camera: Many shops require specific photos to be taken. Again, a phone with a camera may work for some shops, but others do specify that a camera with a higher resolution is needed.

Printer/Fax/Scanner: While I can see how it would be possible to complete assignments without a printer/fax/scanner, I believe having one makes life much easier! You may want to print out your shop notes to take with you on an assignment, and some companies do require certain documents to be printed and sent back signed. Some companies allow you to mail in receipts, but most prefer, and some require, that they be scanned in and emailed back. I have not yet had to use a fax because I use the scan method, but I know that some people do not have a scanner and choose to use fax instead.

Recording equipment: This is only required IF you choose to do a shop that requires recording. I am not very knowledgeable about this type of equipment as I have not done any shops that require it. I do know that some companies allow you to rent the equipment from them for the shop and return it.

MoneySavingMom readers!

If you have linked over from Moneysavingmom.com, you will probably want to start at the very beginning. Thank you for coming over. I hope that you'll find my blog helpful. Please post comments or questions that you may have about mystery shopping!

is it NOT worth your time?

I have gotten many questions and comments about mystery shopping being worth your time vs. just not worth it for the money. This issue is going to differ from one individual to another. This is when you have to decide just how much your time is worth and what you want your personal return on investment (your time, gas, mileage on your vehicle) to be. One simple way to do this is to decide what you believe your time is worth per hour to decide if a shop is worth it to you. If you decide your time is worth $10 an hour and a shop is going to pay a $6 shop fee with a $6 reimbursement, you see that the "pay" is $12. But you have to factor in how far you will be commuting for the shop (mileage, gas, tolls, which most companies do NOT reimburse) as well as your time to complete the shop and the shop forms. For some of us, a free meal is worth a 30-minute report. For others, reimbursement alone with minimal to no fee will simply not be worth their time. It is a personal decision, but you should decide in some capacity exactly what your time is worth so that you are making the most of your time.

Review: Bestmark

I had heard good things about Bestmark but had not done any shops for them until recently, when they called to ask if I would take some shops for them. Since they do primarily automotive shops in my area and my vehicles were due for the services they needed to shop, I agreed to take the shops. Two of the shops were reimbursement for the service only and one of the shops had a nice bonus. Even though two of the shops were reimbursement only, I thought it was definitely worth taking them because they were services I would have to pay for out of pocket otherwise. Their shop instructions were straightforward and feedback was fast. They pay according to their payment schedule. They have plenty of shops in my area that are available to self-assign on their website at any given time. Overall, shopping for Bestmark was a good experience and I will do it again.

question: Waiting for a bonus?

A reader posed the question to me: "Should I sign up for shops as soon as they are posted online or hold out and wait to see if there is a bonus?"

There is not a "simple" answer to this question, as it varies with the company and your personal preference of how badly you want that shop. If it is a shop that is posted every month, and especially if there is a rotation requirement that prevents you from doing it every month anyway, then, sure, I'd wait it out and see what happens. Corporate Research International is one company that is notorious for posting a shop for very low pay at the beginning of the month but increasing it to moderate or even high pay by the end of the month.

I had been taking a particular restaurant shop for Marketforce once a month but didn't pick it up this month simply because I was a little tired of the food at that restaurant :) But when they called me and said they would double the pay, I didn't think twice! So I've now learned I probably won't jump on this shop at the beginning of the month but will wait it out for better pay.

With that said, if it is a shop you do not see posted often or is especially high paying, you might want to go ahead and jump on it before someone else does. There is nothing like the sinking feeling of knowing you could have had a great shop with good pay but you waited too long and it was taken. I hope this helps. If anyone else has any suggestions, feel free to chime in the discussion!

review- Focus on Service

Focus on Service is a company that shops a few casual dining restaurants. From what I can tell, they have a limited number of clients, but the company is a great one with which to work. One thing I appreciate about this company is they actually have a scheduler call the shopper before the shop to quickly go over the shop and see if there are any questions. This is the only company I've ever had do this. They also send out multiple email reminders about the shop to make sure the shopper isn't going to forget. The evaluation form is straightforward and relatively simple. I received my reimbursement check from this company in less than 30 days. To obtain shops with this company, you sign up on their website and they'll email you when they have a shop in your area. With this company, you are going to want to be selective about which shops you accept because (at least with the client I shopped) you are only allowed 2 shops per household per year. This is a company I had a great experience with and highly recommend.

review- Mystery Guest

I am thrilled to be writing a positive review for Mystery Guest, as they have had some of my favorite shops! I was waiting to get my payments from them to see how long it took them to pay before I wrote my review, as I had heard some comments about them not paying very fast. I got my payments in a very timely manner, however, so I have no complaints. The shops are varied, and though I do not see shops posted often, I really, really enjoy the shops that they do post. The shop instructions are clear and simple, as are the evaluation forms. In my experience, they have not required a lot of detailed narrative, and they do not seem to have overly picky editors as long as you follow their instructions and give the information they ask of you. The schedulers have also been great. I had one instance arise where I could not complete a shop on my scheduled date. I called and spoke with a scheduler who easily allowed me to choose a different shop date, no problem. This may be my new favorite company. I'm looking forward to my upcoming shops with them!

review- Market Force Information

One thing I really appreciate about Market Force is that they constantly post new jobs. It is a site that I check at least a couple of times throughout the day if I am able. They have a variety of jobs in my area, and the evaluation forms have been simple and straightforward in my experience. This company does not give any feedback on jobs as far as I have been able to tell. This company will offer bonuses for jobs that stay vacant long enough. I also see many of the same shops pop up for multiple dates in a month, so this company has plenty from which to choose. They have been great about paying when they say they will, and I have had no complaints with this company. I plan to continue to shop for them.

review- National Shopping Service

National Shopping Service does not seem to have as many shops in my area as some other companies, but they have been good to work with. Their pay is fair (in my experience it has been more reimbursement than commission) and they pay when they say they will (the last day of the month). Their schedulers have been good to work with. I have spoken with them over the phone and corresponded by email. They do not send an email for every shop available, but they will send one when it is not getting filled. So this is a site to check often if they have shops you want to grab.

shopping + Family = ??

My goal for mystery shopping is to add supplemental income without taking a substantial amount of time away from my family. For me, this is working well so far. I usually search for assignments, take quizzes to become eligible for shops, and fill out reports while the kids are asleep and while sitting on the couch with my husband. Additionally, I primarily do "family friendly" shops where I can take the husband and kids along. Of course, some clients specify NO children are allowed to shop with you on a particular shop. Sometimes these may be in a mall where your husband could take the children in another store with him, or sometimes you may want to get a sitter for the afternoon. I've actually had several shops come up that required me to have children of a certain age with me, so you never know what you might find. At this point for me, I am enjoying "family friendly" shops and do not find that it is taking time away from my family time at all. Anyone else have a comment on this issue?

review- Corporate Research International

I have done several shops for CRI. The positives about this company are: 1) their fast and punctual payment schedule, 2) the ability the shopper has to self-assign all shops, 3) the variety of shops, 4) straightforward feedback forms.

Probably the biggest negative about this company is the fairly low pay. The interesting thing about CRI is that the shop payments rise throughout the month. So, if you see a shop listed for a very low-paying amount and you wait it out, you will likely see the pay rise at least a few dollars. In my experience, CRI does the same shops regularly (I see several of the same shops pop up monthly), so if you miss out once, you are likely to see it again later.

When you sign up for this company, you must watch a video and take a quiz to become an auditor. You must also watch videos and pass quizzes for each client you choose to do shops for. That may seem annoying at first, but after that you are able to self-assign shops instead of applying and waiting to hear if you receive the shop or not.

CRI gives you feedback on your reports through a rating system. As long as you keep the minimum rating, you can self-assign shops. If you have an even higher rating, you can accept more shops at one time.

In my experience CRI does NOT email you about specific shop availability, though they do email about commission rises in your area (for shops that may or may not still be available), and occasionally will email about a client that has shops available (though they may not be in your area). So, you need to log in frequently to see what shops are available.

Overall, taking into consideration the points I've posted above, I think this is a good company and plan to stick with them.

review- Feedback Plus

I have done several shops for Feedback Plus now and have been very pleased. Their expectations are very clear, and in my experience the feedback forms have been relatively simple compared to other companies. They pay very timely twice a month, and I find their pay to be on the upper end compared to companies that offer similar shops. They offer a variety of shops in my area, and I like that they have the option to self-assign many of them. Overall, I highly recommend them and will continue to do shops for them in the future. Signing up with them is easy and, of course, free!

shopping terminology

MSPA: Mystery Shopping Providers Association. The MSPA is the largest professional trade association dedicated to improving service quality using anonymous resources.

Observation: This word is sometimes used to refer to the "shop" or "job" that you will do.

ICA: The Independent Contractors Agreement, a legal agreement that you must sign in order to perform mystery shops.

What IS mystery shopping anyway? And some common myths about mystery shopping.

Mystery shopping is a means for a corporation to measure employee integrity and competence. Mystery shoppers are given specific tasks to do while posing as a "normal customer." These tasks may include timing specific interactions, asking questions, purchasing a certain product, and much more. The mystery shopper then gives their feedback about their experiences, usually by way of a survey.

Some myths:

Myth #1: Mystery shopping is a scam.
Truth: There are a LOT of scams out there from people trying to get others to pay to mystery shop. The key is you should NEVER pay a company to LET you shop for them. With that said, I do shops where I receive reimbursement. For example, the company says they will pay me $10 to do the shop plus $25 reimbursement for a meal at a certain restaurant. I am NOT paying the mystery shopping company. I'm paying my bill as usual at the restaurant, sending in my receipt, and the company is reimbursing me. The best way to know if a company is real is to see if they are registered with the MSPA. You can also find ratings with the BBB. Unfortunately, some of the scammers have now started using the names of real, legitimate companies. All of the companies I shop for have warnings on their websites describing what these scammers are doing in their name. But as long as you use the rule of thumb of not paying to shop and not doing something like cashing a $1000 check that someone sends you, you should be okay.

Myth #2: If I become a mystery shopper, I can expect to get paid and get free stuff without doing much work.
Truth: Actually, mystery shopping does involve a good amount of work, both on the front end as well as after the shop is completed. This work includes tasks such as: watching instructional presentations, taking quizzes to assess your competence about the shop, reading and printing material before the shop, making detailed observations and performing specific tasks during the shop, and submitting quality feedback on the shop.

Myth #3: I signed up with a mystery shopping company and they haven't sent me any information on shops/shops that interest me. I should just give up.
Truth: The key is to sign up with MANY and VARIED shopping companies. Some companies specialize in a particular type of establishment that may or may not interest you. Also, some but not all companies send out email notifications of shops. You may need to check the job postings to find a shop for you.

Myth #4: I have applied to the same shop with the same company five times and have never been accepted. I should give up on mystery shopping.
Truth: You may not always get your first pick of shop! Sure, we'd all like a free meal at our favorite restaurant, but you may need to complete some other shops and build up your shopper rating first. Apply to different shops with different companies and you are sure to be accepted for some type of job!

...these are just a few of the myths that jump to mind right now. I will add more as I think of them or am asked questions.
Source: http://feeds.feedburner.com/SenseToShop
Let's explore how React helps us add interactivity with state and event handlers.
As an example, let’s create a like button inside your HomePage component. First, add a button element inside the return() statement:
function HomePage() {
  const names = ['Ada Lovelace', 'Grace Hopper', 'Margaret Hamilton'];

  return (
    <div>
      <Header title="Develop. Preview. Ship. 🚀" />
      <ul>
        {names.map((name) => (
          <li key={name}>{name}</li>
        ))}
      </ul>
      <button>Like</button>
    </div>
  );
}
To make the button do something when clicked, you can make use of the onClick event.
function HomePage() {
  // ...
  return (
    <div>
      {/* ... */}
      <button onClick={}>Like</button>
    </div>
  );
}
In React, event names are camelCased. The onClick event is one of many possible events you can use to respond to user interaction. For example, you can use onChange for input fields or onSubmit for forms.
You can define a function to "handle" events whenever they are triggered. Create a function before the return statement called handleClick():
function HomePage() {
  // ...

  function handleClick() {
    console.log("increment like count");
  }

  return (
    <div>
      {/* ... */}
      <button onClick={}>Like</button>
    </div>
  );
}
Then, you can call the handleClick function when the onClick event is triggered:
function HomePage() {
  // ...

  function handleClick() {
    console.log('increment like count');
  }

  return (
    <div>
      {/* ... */}
      <button onClick={handleClick}>Like</button>
    </div>
  );
}
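The wiring above is ordinary function passing. Here's a minimal plain-JavaScript sketch of the same idea, with no React involved; the `button` object and `logged` array are stand-ins invented for this illustration:

```javascript
// Stand-in for the JSX above: a plain object whose onClick property holds a
// reference to the handler, just as <button onClick={handleClick}> does.
const logged = [];

function handleClick() {
  logged.push('increment like count'); // stands in for console.log
}

const button = { onClick: handleClick };

// "Clicking" is just invoking the stored handler reference.
button.onClick();

console.log(logged); // → [ 'increment like count' ]
```

Note that the JSX passes `handleClick` itself (a reference), not `handleClick()` (a call); the sketch does the same by storing the function and invoking it later.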
React has a set of functions called hooks. Hooks allow you to add additional logic such as state to your components. You can think of state as any information in your UI that changes over time, usually triggered by user interaction.
You can use state to store and increment the number of times a user has clicked the like button. In fact, this is what the React hook to manage state is called: useState().
function HomePage() {
  React.useState();
}
useState() returns an array, and you can access and use those array values inside your component using array destructuring:
function HomePage() {
  const [] = React.useState();
  // ...
}
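The destructuring itself is plain JavaScript, nothing React-specific. A quick sketch with a hand-rolled pair (`fakeUseState` is a made-up helper shaped like useState's return value, not React's actual hook):

```javascript
// Returns a [value, updater] pair, mimicking the shape of useState's
// return value. Plain JavaScript only -- not React.
function fakeUseState(initialValue) {
  let value = initialValue;
  const setValue = (next) => { value = next; };
  return [value, setValue];
}

// Array destructuring pulls the two items out by position.
const [likes, setLikes] = fakeUseState(0);

console.log(likes);           // → 0
console.log(typeof setLikes); // → function
```

The names `likes` and `setLikes` are entirely up to you; destructuring assigns by position, not by name.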
The first item in the array is the state value, which you can name anything. It’s recommended to name it something descriptive:
function HomePage() {
  const [likes] = React.useState();
  // ...
}
The second item in the array is a function to update the value. You can name the update function anything, but it's common to prefix it with set followed by the name of the state variable you’re updating:
function HomePage() {
  const [likes, setLikes] = React.useState();
  // ...
}
You can also take the opportunity to add the initial value of your likes state: zero.
function HomePage() {
  const [likes, setLikes] = React.useState(0);
}
Then, you can check that the initial state is working by using the state variable inside your component.
function HomePage() {
  // ...
  const [likes, setLikes] = React.useState(0);

  return (
    // ...
    <button onClick={handleClick}>Like({likes})</button>
  );
}
Finally, you can call your state updater function, setLikes, in your HomePage component. Let's add it inside the handleClick() function you previously defined:
function HomePage() {
  // ...
  const [likes, setLikes] = React.useState(0);

  function handleClick() {
    setLikes(likes + 1);
  }

  return (
    <div>
      {/* ... */}
      <button onClick={handleClick}>Likes ({likes})</button>
    </div>
  );
}
Clicking the button will now call the handleClick function, which calls the setLikes state updater function with a single argument of the current number of likes + 1.
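Stripped of rendering, the counter boils down to a value plus an updater. Here's a plain-JavaScript simulation of that state flow; `createCounter` and `label` are made-up names for this sketch, not React APIs:

```javascript
// Simulates the HomePage state logic without React: `likes` is the state
// value, handleClick applies the same update as setLikes(likes + 1).
function createCounter(initialLikes = 0) {
  let likes = initialLikes;
  return {
    handleClick() {
      likes = likes + 1; // what setLikes(likes + 1) asks React to do
    },
    label() {
      return `Likes (${likes})`; // mirrors the button text
    },
  };
}

const counter = createCounter(0);
counter.handleClick(); // first click
counter.handleClick(); // second click
console.log(counter.label()); // → Likes (2)
```

The real difference in React is that calling the updater also schedules a re-render, so the new value shows up in the UI; the arithmetic is the same.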
Note: Unlike props, which are passed to components as the first function parameter, state is initiated and stored within a component. You can pass the state information to child components as props, but the logic for updating the state should be kept within the component where the state was initially created.
This was only an introduction to state, and there’s more you can learn about managing state and data flow in your React applications. To learn more, we recommend you go through the Adding Interactivity and Managing State sections in the React Documentation.
Source: https://nextjs.org/learn/foundations/from-javascript-to-react/adding-interactivity-with-state
Hi Monks,
First (registered) post. This is a bit of a long one--I was thinking things through while typing it up, which I hope helps you.
I have a Mac running 10.6.8. I have the system perl /usr/bin/perl (5.10.0), and also a MacPorts perl /opt/local/bin/perl (5.14.2), the latter is the first in my $PATH. Until recently I had used CPAN to install non-MacPorts modules into the MacPorts tree /opt/local/lib/perl5/site_perl/. That was OK, but I was told that it was a bad idea to have things in the MacPorts tree that MacPorts doesn't know about, so I recently deleted everything there and reinstalled in my home dir /Users/derek/local/lib/perl5. I put export PERL5LIB="/Users/derek/local/lib/perl5" into my ~/.profile, added INSTALL_BASE=/Users/derek/local to the config options in cpan (makepl_arg and similarly for mbuildpl_arg) and I thought all was well. (yes, I'm aware of local::lib, but don't have it installed.)
Now I am checking some code in the PDL test harness, and the code does image format conversion by calling some netpbm programs (in this case, 'pnmquant'). Some of these are Perl scripts themselves. They start with #!/usr/bin/perl. And I am getting some failures that I think I understand and some I don't, so I wanted to get some outside advice at this point. For the rest of this post, please remember that I have now undefined the $PERL5LIB environment variable, for clarity. The test image is named img.pnm
When I run from the command line
pnmquant 256 img.pnm
It produces the expected output. But when I set that PERL5LIB variable:
PERL5LIB=/Users/derek/local/lib/perl5 pnmquant 256 img.pnm
I get a Segmentation fault. That is bad, because that line (I think) best emulates the environment I will normally be running under. If I try to run under the debugger, I get
PERL5LIB=/Users/derek/local/lib/perl5 /opt/local/bin/perl -d /opt/local/bin/pnmquant 256 img.pnm (success) or
PERL5LIB=/Users/derek/local/lib/perl5 /usr/bin/perl -d /opt/local/bin/pnmquant 256 img.pnm (failure),
depending on the perl used. The latter debugger run fails immediately with
dyld: Symbol not found: _Perl_xs_apiversion_bootcheck
  Referenced from: /Users/derek/local/lib/perl5/darwin-thread-multi-2level/auto/Term/ReadKey/ReadKey.bundle
  Expected in: flat namespace
Trace/BPT trap
If I instead prepend PERL_DL_NONLAZY=1 to the line, then I get no dyld error and the correct output. This may explain why the code in question does not fail when run under make test (which as you know sets that NONLAZY variable), but does when run under perl -Mblib t/testscript.t.
Question 1: Shouldn't PERL_DL_NONLAZY be helpfully causing these sorts of errors, not preventing them like it seems to be doing?
Question 2: Should my directory structure in ~/local/lib/perl5 have perl-version-specific directories, or is that something I would have had to create manually, or does that not matter in this case?
... to the top if the script shouldn't mess anything up. It will only have any effect on child processes launched by the script - e.g. things launched using system() or backticks.
Re: Question 1: I understand that the different perl versions are incompatible. But then why does doing PERL5LIB='/Users/derek/local/lib/perl5' pnmquant 256 img.pnm segfault (well, I know why, because that PERL5LIB is for 5.14 and pnmquant uses /usr/bin/perl v5.10) but PERL_DL_NONLAZY=1 PERL5LIB='/Users/derek/local/lib/perl5' pnmquant 256 img.pnm produces the correct result without segfaulting? Seems like PERL_DL_NONLAZY is doing something, but I don't understand what, apparently.
The Netpbm scripts (clarification, just in case: Netpbm is not a Perl module) have /usr/bin/perl hard-coded in their #!, so I don't think changing their install location will affect things.
With both perls in turn, issue the command perl -V (with an uppercase V) and notice the list of library search-paths. This is the current combined list .. the content of @INC .. as taken from all sources including PERL5LIB.
If you are dealing with two different-generation Perls, as here, these lists ought not to intersect at all. You need to build Perl yourself to do this, in many cases, since the last few entries in the list are hard-coded at compile time. That's my rule-of-thumb opinion only ... the safest and most reliable course, although perhaps the most time-consuming.
I don't have any specific answers to your questions, but I do have some relevant suggestions that might help:
I'm not sure why you're setting INSTALL_BASE. It's not necessary when making arbitrary perl installs anyway. Adding the perl you want to use in that session to PATH should be enough.
It's likely that all of these installs you've fiddled with are using the same directory (~/.cpan) to stage and build perl modules. That might be trouble.
You can see how I manage hand-built perl installs at Re: 2nd Perl installation on Mac OSX.
http://www.perlmonks.org/index.pl/jacques?node_id=1027260
Feb 14, 2018 07:28 AM | z080236
I could add a reference to one of my custom-made DLLs in a Console App and a Web Application (Add Reference).
But when I added the reference to a Web Site project and tried to import the namespace, it complained with the following:
The type or namespace name 'xxxx' could not be found (are you missing a using directive or an assembly reference?)
Feb 15, 2018 04:42 AM | KathyW
Is the Bin folder right under the root of the site? And I have no idea what "xxx" means. I assure you that it works - I have several asp.net websites that are not applications. You have not posted any code so there is not much more anyone can tell you.
Feb 15, 2018 03:59 PM | z080236
Feb 20, 2018 09:07 AM | z080236
Feb 24, 2018 07:23 AM | X.Daisy
Hi z080236,
How about installing it into the global assembly cache?
(Note: Each computer where the common language runtime is installed has this machine-wide code cache. The global assembly cache stores assemblies specifically designated to be shared by several applications on the computer.)
Add the file to the GAC:
gacutil.exe /i MyFile.dll
Then you can list the assembly and check its version with:
gacutil /l MyFile
For more details, please refer to the following articles:
Best Regards,
Daisy
7 replies
Last post Feb 24, 2018 07:23 AM by X.Daisy
https://forums.asp.net/t/2136291.aspx?ASP+net+web+site+unable+to+reference+dll
Trusted Web Activities (TWAs) can be a bit tricky to set up, especially if all you want to do is display your website. This guide will take you through creating a basic TWA, covering all the gotchas.
By the end of this guide, you will:
- Have built a Trusted Web Activity that passes verification.
- Understand when your debug keys and your release keys are used.
- Be able to determine the signature your TWA is being built with.
- Know how to create a basic Digital Asset Links file.
To follow this guide you'll need:
- Android Studio Installed
- An Android phone or emulator connected and set up for development (Enable USB debugging if you’re using a physical phone).
- A browser that supports Trusted Web Activities on your development phone. Chrome 72 or later will work. Support in other browsers is on its way.
- A website you'd like to view in the Trusted Web Activity.
A Trusted Web Activity lets your Android App launch a full screen Browser Tab without any browser UI. This capability is restricted to websites that you own, and you prove this by setting up Digital Asset Links. Digital Asset Links consist essentially of a file on your website that points to your app and some metadata in your app that points to your website. We'll talk more about them later.
When you launch a Trusted Web Activity, the browser will check that the Digital Asset Links check out; this is called verification. If verification fails, the browser will fall back to displaying your website as a Custom Tab.
Clone and customize the example repo
The svgomg-twa repo contains an example TWA that you can customize to launch your website:
- Clone the project (git clone).
- Import the Project into Android Studio, using File > New > Import Project, and select the folder to which the project was cloned.
- Open the app’s build.gradle and modify the values in twaManifest. There are two build.gradle files; you want the module one at app/build.gradle.
- Change hostName to point to your website. Your website must be available on HTTPS, though you omit that from the hostName field.
- Change name to whatever you want.
- Change applicationId to something specific to your project. This translates into the app’s package name and is how the app is identified on the Play Store - no two apps can share the applicationId, and if you change it you’ll need to create a new Play Store listing.
Build and run
In Android Studio hit Run, Run ‘app’ (where ‘app’ is your module name, if you’ve changed it) and the TWA will be built and run on your device! You’ll notice that your website is launched as a Custom Tab, not a Trusted Web Activity. This is because we haven’t set up our Digital Asset Links yet. But first...
A note on signing keys
Digital Asset Links take into account the key that an APK has been signed with and a common cause for verification failing is to use the wrong signature. (Remember, failing verification means you'll launch your website as a Custom Tab with browser UI at the top of the page.) When you hit Run or Build APK in Android Studio, the APK will be created with your developer debug key, which Android Studio automatically generated for you.
If you deploy your app to the Play Store, you’ll hit Build > Generate Signed APK, which will use a different signature, one that you’ll have created yourself (and protected with a password). That means that if your Digital Asset Links file specifies your production key, verification will fail when you build with your debug key. This also can happen the other way around - if the Digital Asset Links file has your debug key, your TWA will work fine locally, but then when you download the signed version from the Play Store, verification will fail.
You can put both your debug key and production key in your asset link file (see Adding More Keys below), but your debug key is less secure. Anyone who gets a copy of the file can use it. Finally, if you have your app installed on your device with one key, you can’t install the version with the other key. You must uninstall the previous version first.
Building your app
- To build with debug keys:
- Click Run 'app' where 'app' is the name of your module if you changed it.
- To build with release keys:
- Click Build, then Generate Signed APK.
- Choose APK.
- If you're doing this for the first time, on the next page press Create New to create a new key and follow the Android documentation. Otherwise, select your previously created key.
- Press Next and pick the release build variant.
- Make sure you check both the V1 and the V2 signatures (the Play Store won’t let you upload the APK otherwise).
- Click Finish.
If you built with debug keys, your app will be automatically deployed to your device.
On the other hand, if you built with release keys, after a few seconds a pop-up will appear in the bottom right corner giving you the option to locate or analyze the APK. (If you miss it, you can press on the Event Log in the bottom right.) You’ll need to use adb manually to install the signed APK with adb install app-release.apk.
Which key is used depends on how you create your APK: Run ‘app’ builds and signs with your debug key, while Build > Generate Signed APK signs with your release key.
Creating your asset link file
Now that your app is installed (with either the debug or release key), you can generate the Digital Asset Link file. I’ve created the Asset Link Tool to help you do this. If you'd prefer not to download the Asset Link Tool, you can determine your app's signature manually.
- Download the Asset Link Tool.
- When the app launches, you’ll be given a list of all applications installed on your device by applicationId. Filter the list by the applicationId you chose earlier and click on that entry.
- You’ll see a page listing your app’s signature along with a generated Digital Asset Link. Click on the Copy or Share buttons at the bottom to export it however you like (e.g., save to Google Keep, email it to yourself).
Put the Digital Asset Link in a file called assetlinks.json and upload it to your website at .well-known/assetlinks.json (relative to the root).
Play Store Signing
If you opt in to App signing by Google Play, Google manages your app's signing key. There are two ways you can get the correct Digital Asset Link file for a Google managed app signing key:
- With the Asset Link Tool:
- Download your app from the Google Play Store.
- Repeat Creating your asset link file.
- Manually:
- Open the Google Play Console.
- Select your app.
- Choose Release management and then App signing from the panel on the left.
- Copy the SHA-256 certificate fingerprint from under the App signing certificate section.
- Use this value in your Digital Asset Link file.
Ensuring your asset link file is accessible
Now that you’ve uploaded it, make sure you can access your asset link file in a browser. Check that https://your-domain/.well-known/assetlinks.json resolves to the file you just uploaded.
Jekyll based websites
If your website is generated by Jekyll (such as GitHub Pages), you’ll need to add a line of configuration so that the .well-known directory is included in the output. GitHub help has more information on this topic.
Create a file called _config.yml at the root of your site (or add to it if it already exists) and enter:
# Folders with dotfiles are ignored by default.
include: [.well-known]
Adding more keys
A Digital Asset Link file can contain more than one app, and for each app, it can contain more than one key. For example, to add a second key, just use the Asset Link Tool to determine the key and add it as a second entry to the sha256_cert_fingerprints field. The code in Chrome that parses this JSON is quite strict, so make sure you don’t accidentally add an extra comma at the end of the list.
[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.appspot.pwa_directory",
    "sha256_cert_fingerprints": [
      "FA:2A:03:CB:38:9C:F3:BE:28:E3:CA:7F:DA:2E:FA:4F:4A:96:F3:BC:45:2C:08:A2:16:A1:5D:FD:AB:46:BC:9D",
      "4F:FF:49:FF:C6:1A:22:E3:BB:6F:E6:E1:E6:5B:40:17:55:C0:A9:F9:02:D9:BF:28:38:0B:AE:A7:46:A0:61:8C"
    ]
  }
}]
Troubleshooting
Viewing relevant logs
Chrome logs the reason that Digital Asset Links verification fails, and you can view the logs on an Android device with adb logcat. If you’re developing on Linux/Mac, you can see the relevant logs from a connected device with:
> adb logcat -v brief | grep -e OriginVerifier -e digital_asset_links
For example, if you see the message Statement failure matching fingerprint., you should use the Asset Link Tool to see your app’s signature and make sure it matches that in your assetlinks.json file. (Be wary of confusing your debug and release keys. Look at the A note on signing keys section.)
Checking your browser
A Trusted Web Activity will try to adhere to the user’s default choice of browser. If the user’s default browser supports TWAs, it will be launched. Failing that, if any installed browser supports TWAs, it will be chosen. Finally, the default behavior is to fall back to a Custom Tabs mode.
This means that if you’re debugging something to do with Trusted Web Activities, you should make sure you’re using the browser you think that you are. You can use the following command to check which browser is being used:
> adb logcat -v brief | grep -e TWAProviderPicker
D/TWAProviderPicker(17168): Found TWA provider, finishing search: com.google.android.apps.chrome
Next Steps
Hopefully, if you’ve followed this guide, you'll have a working Trusted Web Activity and have enough knowledge to debug what's going on when verification fails. If not, please have a look at the Troubleshooting section or file a GitHub issue against these docs.
For your next steps, I’d recommend you start off by creating an icon for your app. With that done, you can consider deploying your app to the Play Store.
https://developers.google.cn/web/updates/2019/08/twas-quickstart?hl=id
So far in this series we’ve seen a lot of motivation and defined basic ideas of what a quantum circuit is. But on rereading my posts, I think we would all benefit from some concreteness.
“Local” operations
So by now we’ve understood that quantum circuits consist of a sequence of gates
, where each
is an 8-by-8 matrix that operates “locally” on some choice of three (or fewer) qubits. And in your head you imagine starting with some state vector
and applying each
locally to its three qubits until the end when you measure the state and get some classical output.
But the point I want to make is that
actually changes the whole state vector
, because the three qubits it acts “locally” on are part of the entire basis. Here’s an example. Suppose we have three qubits and they’re in the state
Recall we abbreviate basis states by subscripting them by binary strings, so
, and a valid state is any unit vector over the 2^3 = 8 possible basis elements. As a vector, this state is
Say we apply the gate
that swaps the first and third qubits. “Locally” this gate has the following matrix:
where we index the rows and columns by the relevant strings in lexicographic order: 00, 01, 10, 11. So this operation leaves
and
the same while swapping the other two. However, as an operation on three qubits the operation looks quite different. And it’s sort of hard to describe a general way to write it down as a matrix because of the choice of indices. There are three different perspectives.
Perspective 1: if the qubits being operated on are sequential (like, the third, fourth, and fifth qubits), then we can write the matrix as
where a tensor product of matrices is the Kronecker product and
(the number of qubits adds up). Then the final operation looks like a “tiled product” of identity matrices by
, but it’s a pain to write out. Let me hurt myself for your sake, dear reader.
And each copy of
looks like
That’s a mess, but if you write it out for our example of swapping the first and third qubits of a three-qubit register you get the following:
And this makes sense: the gate changes any entry of the state vector that has values for the first and third qubit that are different. This is what happens to our state:
Perspective 2: just assume every operation works on the first three qubits, and wrap each operation
in between an operation that swaps the first three qubits with the desired three. So like
for
a swap operation. Then the matrix form looks a bit simpler, and it just means we permute the columns of the matrix form we gave above so that it just has the form
. This allows one to retain a shred of sanity when trying to envision the matrix for an operation that acts on three qubits that are not sequential. The downside is that to actually use this perspective in an analysis you have to carry around the extra baggage of these permutation matrices. So one might use this as a simplifying assumption (a “without loss of generality” statement).
Perspective 3: ignore matrices and write things down in a summation form. So if
is the permutation that swaps 1 and 3 and leaves the other indices unchanged, we can write the general operation on a state
as
.
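Perspective 3 is easy to make concrete in numpy. The helper below is my own illustrative sketch (the name is mine, not from the post): it builds the 2^n-by-2^n permutation matrix for swapping two qubits by permuting the bits of each basis index.

```python
import numpy as np

def swap_qubits_matrix(n, i, j):
    """Permutation matrix on n qubits that swaps qubits i and j.

    Qubits are indexed from 0, most significant bit first, matching the
    convention of subscripting basis states by binary strings."""
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for x in range(dim):
        bits = [(x >> (n - 1 - k)) & 1 for k in range(n)]
        bits[i], bits[j] = bits[j], bits[i]   # swap the two bit positions
        y = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        M[y, x] = 1
    return M

S = swap_qubits_matrix(3, 0, 2)   # swap the first and third qubits
print(S[1, 4], S[4, 1])           # basis state 100 <-> 001, so both entries are 1
```

As in the worked example, any entry of the state vector whose first and third bits differ gets moved, and entries like 000 and 111 stay put.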
The third option is probably the nicest way to do things, but it’s important to keep the matrix view in mind for many reasons. Just one quick reason: “errors” in quantum gates (that are meant to approximately compute something) compound linearly in the number of gates because the operations are linear. This is a key reason that allows one to design quantum analogues of error correcting codes.
So we’ve established that the basic (atomic) quantum gates are “local” in the sense that they operate on a fixed number of qubits, but they are not local in the sense that they can screw up the entire state vector.
A side note on the meaning of “local”
When I was chugging through learning this stuff (and I still have far to go), I wanted to come up with an alternate characterization of the word “local” so that I would feel better about using the word “local.” Mathematicians are as passionate about word choice as programmers are about text editors. In particular, for a long time I was ignorantly convinced that quantum gates that act on a small number of qubits don’t affect the marginal distribution of measurement outcomes for other qubits. That is, I thought that if
acts on qubits 1,2,3, then
and
have the same probability of a measurement producing a 1 in index 4, 5, etc, conditioned on fixing a measurement outcome for qubits 1,2,3. In notation, if
is a random variable whose values are binary strings and
is a state vector, I’ll call
the random process of measuring a state vector
and getting a string
, then my claim was that the following was true for every
and every
:
You could try to prove this, and you would fail because it’s false. In fact, it’s even false if
acts on only a single qubit! Because it’s so tedious to write out all of the notation, I decided to write a program to illustrate the counterexample. (The most brazenly dedicated readers will try to prove this false fact and identify where the proof fails.)
import numpy

H = (1/(2**0.5)) * numpy.array([[1,1], [1,-1]])
I = numpy.identity(4)
A = numpy.kron(H, I)
Here
is the 2 by 2 Hadamard matrix, which operates on a single qubit and maps
, and
. This matrix is famous for many reasons, but one simple use as a quantum gate is to generate uniform random coin flips. In particular, measuring
outputs 1 and 0 with equal probability.
So in the code sample above,
is the mapping which applies the Hadamard operation to the first qubit and leaves the other qubits alone.
Then we compute some arbitrary input state vector
def normalize(z):
    return (1.0 / (sum(abs(z)**2) ** 0.5)) * z

v = numpy.arange(1,9)
w = normalize(v)
And now we write a function to compute the probability of some query conditioned on some fixed bits. We simply sum up the square norms of all of the relevant indices in the state vector.
import math
import itertools

def condProb(state, query={}, fixed={}):
    num = 0
    denom = 0
    dim = int(math.log2(len(state)))
    for x in itertools.product([0,1], repeat=dim):
        if any(x[index] != b for (index, b) in fixed.items()):
            continue
        i = sum(d << i for (i, d) in enumerate(reversed(x)))
        denom += abs(state[i])**2
        if all(x[index] == b for (index, b) in query.items()):
            num += abs(state[i]) ** 2
    if num == 0:
        return 0
    return num / denom
So if the query is query = {1:0} and the fixed thing is fixed = {0:0}, then this will compute the probability that the measurement results in the second qubit being zero conditioned on the first qubit also being zero.
And the result:
Aw = A.dot(w)
query = {1: 0}
fixed = {0: 0}
print((condProb(w, query, fixed), condProb(Aw, query, fixed)))
# (0.16666666666666666, 0.29069767441860467)
So they are not equal in general.
Also, in general we won’t work explicitly with full quantum gate matrices, since for n qubits they have size 2^n by 2^n, which is big. But for finding counterexamples to guesses and false intuition, it’s a great tool.
Some important gates on 1-3 qubits
Let’s close this post with concrete examples of quantum gates. Based on the above discussion, we can write out the 2 x 2 or 4 x 4 matrix form of the operation and understand that it can apply to any two qubits in the state of a quantum program. Gates are most interesting when they’re operating on entangled qubits, and that will come out when we visit our first quantum algorithm next time, but for now we will just discuss at a naive level how they operate on the basis vectors.
Hadamard gate:
We introduced the Hadamard gate already, but I’ll reiterate it here.
Let
be the following 2 by 2 matrix, which operates on a single qubit and maps
, and
.
One can use
to generate uniform random coin flips. In particular, measuring
outputs 1 and 0 with equal probability.
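As a quick sanity check (my own sketch, not code from the post), we can verify both claims in numpy: H sends the first basis vector to an equal superposition, so measuring gives 0 and 1 with probability 1/2 each, and H is its own inverse.

```python
import numpy as np

H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])

zero = np.array([1.0, 0.0])     # the basis state for the bit 0
state = H @ zero                # equal superposition of 0 and 1
probs = np.abs(state) ** 2      # measurement probabilities: [0.5, 0.5]

print(probs)
print(np.allclose(H @ H, np.eye(2)))   # True: applying H twice is the identity
```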
Quantum NOT gate:
Let
be the 2 x 2 matrix formed by swapping the columns of the identity matrix.
This gate is often called the “Pauli-X” gate by physicists. This matrix is far too simple to be named after a person, and I can only imagine it is still named after a person for the layer of obfuscation that so often makes people feel smarter (same goes for the Pauli-Y and Pauli-Z gates, but we’ll get to those when we need them).
If we’re thinking of the basis state for 0 as the boolean value “false” and the basis state for 1 as the boolean value “true”, then the quantum NOT gate simply swaps those two states. In particular, note that composing a Hadamard and a quantum NOT gate can have interesting effects:
, but
. In the second case, the minus sign is the culprit. Which brings us to…
Phase shift gate:
Given an angle
, we can “shift the phase” of one qubit by an angle of
using the 2 x 2 matrix
.
“Phase” is a term physicists like to use for angles. Since the coefficients of a quantum state vector are complex numbers, and since complex numbers can be thought of geometrically as vectors with direction and magnitude, it makes sense to “rotate” the coefficient of a single qubit. So
does nothing to
and it rotates the coefficient of
by an angle of
.
Continuing in our theme of concreteness, if I have the state vector
and I apply a rotation of
to the second qubit, then my operation is the matrix
which maps
and
. That would map the state
to
.
If we instead used the rotation by
we would get the output state
.
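Here is an illustrative numpy version of the single-qubit phase shift (the helper name is mine, not from the post). It fixes the first basis vector, rotates the coefficient of the second by e^(iθ), and, since measurement probabilities are squared magnitudes, leaves the measurement distribution unchanged:

```python
import numpy as np

def phase_shift(theta):
    # Fixes the 0 basis state; multiplies the coefficient of 1 by e^(i*theta)
    return np.array([[1, 0], [0, np.exp(1j * theta)]])

state = (1 / np.sqrt(2)) * np.array([1, 1])   # equal superposition
rotated = phase_shift(np.pi / 2) @ state      # second coefficient becomes i/sqrt(2)

print(np.abs(rotated) ** 2)                   # probabilities are still [0.5, 0.5]
```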
Quantum AND/OR gate:
In the last post in this series we gave the quantum AND gate and left the quantum OR gate as an exercise. Rather than write out the matrix again, let me remind you of this gate using a description of the effect on the basis e_{a,b,c}, where a, b, c ∈ {0,1}. Recall that we need three qubits in order to make the operation reversible (a consequence of all quantum gates being unitary matrices). Some notation: ⊕ is the XOR of two bits, ∧ is AND, and ∨ is OR. The quantum AND gate maps e_{a,b,c} ↦ e_{a,b,c⊕(a∧b)}. In words, the third coordinate is XORed with the AND of the first two coordinates. We think of the third coordinate as a “scratchwork” qubit which is maybe prepared ahead of time to be in state zero.
Similarly, the quantum OR gate maps e_{a,b,c} ↦ e_{a,b,c⊕(a∨b)}. As we saw last time, these combined with the quantum NOT gate (and some modest number of scratchwork qubits) allow quantum circuits to simulate any classical circuit.
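Since the quantum AND gate is just a permutation of the eight basis states, its 8-by-8 matrix can be built directly. This is my own sketch (the function name is mine; this gate is usually called the Toffoli gate):

```python
import numpy as np

def quantum_and():
    # Permutation matrix sending basis index (a,b,c) to (a, b, c XOR (a AND b))
    M = np.zeros((8, 8))
    for x in range(8):
        a, b, c = (x >> 2) & 1, (x >> 1) & 1, x & 1
        y = (a << 2) | (b << 1) | (c ^ (a & b))
        M[y, x] = 1
    return M

T = quantum_and()
print(T[0b111, 0b110])   # 110 -> 111: the AND of the first two bits lands in the scratch qubit
```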
Controlled-* gate:
The last example in this post is a meta-gate that represents a conditional branching. If we’re given a gate A acting on k qubits, then we define the controlled-A to be an operation which acts on k+1 qubits. Let’s call the added qubit “qubit zero.” Then controlled-A does nothing if qubit zero is in state 0, and applies A if qubit zero is in state 1. Qubit zero is generally called the “control qubit.”
The matrix representing this operation decomposes into blocks if the control qubit is actually the first qubit (or you rearrange).
A common example of this is the controlled-NOT gate, often abbreviated CNOT. Its matrix is the block-diagonal matrix with the 2-by-2 identity in the top-left block and the quantum NOT gate in the bottom-right block.
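The block decomposition is easy to write down in numpy. The sketch below (mine, not from the post) builds controlled-A as the block-diagonal matrix diag(I, A) and specializes it to CNOT:

```python
import numpy as np

def controlled(A):
    """Controlled-A with the control as the first qubit (block form diag(I, A))."""
    k = A.shape[0]
    return np.block([
        [np.eye(k), np.zeros((k, k))],   # control = 0: do nothing
        [np.zeros((k, k)), A],           # control = 1: apply A
    ])

NOT = np.array([[0.0, 1.0], [1.0, 0.0]])
CNOT = controlled(NOT)
# CNOT flips the target bit only when the control bit is 1:
# basis order 00, 01, 10, 11 is sent to 00, 01, 11, 10
print(CNOT.astype(int))
```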
Looking forward
Okay let’s take a step back and evaluate our life choices. So far we’ve spent a few hours of our time motivating quantum computing, explaining the details of qubits and quantum circuits, and seeing examples of concrete quantum gates and studying measurement. I’ve hopefully hammered into your head the notion that quantum states which aren’t pure tensors (i.e. entangled) are where the “weirdness” of quantum computing comes from. But we haven’t seen any examples of quantum algorithms yet!
Next time we’ll see our first example of an algorithm that is genuinely quantum. We won’t tackle factoring yet, but we will see quantum “weirdness” in action.
Until then!
https://jeremykun.com/2016/01/11/concrete-examples-of-quantum-gates/?shared=email&msg=fail
You're 3 steps away from adding great in-app support to your Unity game.
Guide to integrating the Unity plugin for the Helpshift SDK which you can call from your C# and Javascript game scripts.
If you have a project with Unity version prior to 5.3 or Xcode 8, you could use the build mentioned here.
Select your SDK from the following options:
Helpshift SDK .zip folder includes:
We recommend you to upgrade to the latest SDK as mentioned above. However, if you are currently using Unity SDK 4.x and need more details, click here.
In case you want to use the SDK with bitcode support, follow the instructions here
The SDK is distributed as a .unitypackage, which you can import through the Unity package import procedure.
To import helpshift-plugin-unity-version.unitypackage into your Unity game, use the helpshift-plugin-unity-version.unitypackage file to import the Helpshift SDK. You can then call the SDK from a game script, for example:

public class MyGameControl : MonoBehaviour
{
    private HelpshiftSdk help;
    ...
    void Awake()
    {
        help = HelpshiftSdk.getInstance();
        var configMap = new Dictionary<string, object>();
        help.install("<API_KEY>", "<DOMAIN_NAME>", "<APP_ID>", configMap);
    }
    ...
}
https://developers.helpshift.com/unity/getting-started-ios/
import "github.com/golang/go/src/cmd/go/internal/modget"
Package modget implements the module-aware “go get” command.
var CmdGet = &base.Command{
    UsageLine: "go get [-d] [-t] [-u] [-v] [-insecure] [build flags] [packages]",
    Short:     "add dependencies to current module and install them",
    Long:      "" /* 5930 byte string literal not displayed */,
}

var HelpModuleGet = &base.Command{
    UsageLine: "module-get",
    Short:     "module-aware go get",
    Long: "" /* 275 byte string literal not displayed */ + CmdGet.UsageLine + `
` + CmdGet.Long,
}
Note that this help text is a stopgap to make the module-aware get help text available even in non-module settings. It should be deleted when the old get is deleted. It should NOT be considered to set a precedent of having hierarchical help names with dashes.
Package modget imports 18 packages. Updated 2019-12-06.
https://godoc.org/github.com/golang/go/src/cmd/go/internal/modget
A tiny Python package for easy access to up-to-date Coronavirus (COVID-19, SARS-CoV-2) cases data.
Project description
COVID19Py
A tiny Python package for easy access to up-to-date Coronavirus (COVID-19, SARS-CoV-2) cases data.
About
COVID19Py is a Python wrapper for the ExpDev07/coronavirus-tracker-api REST API. It retrieves data directly from @ExpDev07's backend but it can also be set up to use a different backend.
To achieve this, just pass the URL of the backend as a parameter to the library's constructor:
import COVID19Py covid19 = COVID19Py.COVID19("")
Installation
In order to install this package, simply run:
pip install COVID19Py
Usage
To use COVID19Py, you first need to import the package and then create a new instance:
import COVID19Py covid19 = COVID19Py.COVID19()
Choosing a data source
COVID19Py supports the retrieval of data from multiple data sources. To choose a specific data source, simply pass it as a parameter to the library's constructor:
covid19 = COVID19Py.COVID19(data_source="csbs")
For more details about the available data sources, please check the API's documentation.
Getting latest amount of total confirmed cases, deaths, and recoveries:
latest = covid19.getLatest()
Getting all locations:
locations = covid19.getLocations()
or:
locations = covid19.getLocations(timelines=True)
to also get timelines.
You can also rank the results by confirmed, deaths, or recovered:
locations = covid19.getLocations(rank_by='recovered')
Getting location by country code:
location = covid19.getLocationByCountryCode("US")
or:
location = covid19.getLocationByCountryCode("US", timelines=True)
to also get timelines.
Getting a specific location (includes timelines by default):
location = covid19.getLocationById(39)
Getting all data at once:
You can also get all the available data with one command.
data = covid19.getAll()
or:
data = covid19.getAll(timelines=True)
to also get timelines.
latest will be available on data["latest"] and locations will be available on data["locations"].
Getting latest deltas:
When using getAll(), COVID19Py will also store the previous version of the retrieved data. This allows us to easily see how data changed since the last time we requested them.
changes = covid19.getLatestChanges()
{ "confirmed": 512, "deaths": 16, "recovered": 1024 }
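For illustration only (this is not COVID19Py source code), the deltas above can be reproduced by diffing two snapshots of the latest totals, which is essentially what storing the previous response enables:

```python
# Hypothetical sketch: compute deltas between two "latest" snapshots of the
# same shape that COVID19Py returns from getLatest().
def latest_changes(previous, current):
    return {key: current[key] - previous[key] for key in current}

previous = {"confirmed": 1000, "deaths": 50, "recovered": 200}
current = {"confirmed": 1512, "deaths": 66, "recovered": 1224}

print(latest_changes(previous, current))
# {'confirmed': 512, 'deaths': 16, 'recovered': 1024}
```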
https://pypi.org/project/COVID19Py/
About The Author
Martin Sikora has been programming professionally since 2006 for companies such as Miton CZ and Symbio Digital in various languages, mostly PHP.
Simple Augmented Reality With OpenCV, Three.js, And WebSockets
Augmented reality is generally considered to be very hard to create. However, it’s possible to make visually impressive projects using just open source libraries. In this tutorial, we’ll make use of OpenCV in Python to detect circle-shaped objects in a webcam stream and replace them with 3D Earth in Three.js in a browser window while using WebSockets to join this all together.
We want to strictly separate front-end and back-end in order to make it reusable. In a real-world application we could write the front-end in Unity, Unreal Engine or Blender, for example, to make it look really nice. The browser front-end is the easiest to implement and should work on nearly every possible configuration.
To keep things simple we’ll split the app into three smaller parts:
- Python back-end with OpenCV
OpenCV will read the webcam stream and open multiple windows with camera image after passing it through multiple filters to ease debugging and give us a little insight into what the circle detection algorithm actually sees. Output of this part will be just 2D coordinates and radius of the detected circle.
- JavaScript front-end with Three.js in a browser
Step-by-step implementation of Three.js library to render textured Earth with moon spinning around it. The most interesting thing here will be mapping 2D screen coordinates into the 3D world. We’ll also approximate the coordinates and radius to increase OpenCV’s accuracy.
- WebSockets in both front-end and back-end
Back-end with WebSockets server will periodically send messages with detected circle coordinates and radii to the browser client.
1. Python Back-End With OpenCV
Our first step will be just importing the OpenCV library in Python and opening a window with a live webcam stream.
We’re going to use the newest OpenCV 3.0 (see installation notes) with Python 2.7. Please note that installation on some systems might be problematic and the official documentation isn’t very helpful. I tried version 3.0 from MacPorts on Mac OS X myself, and the binary had a dependency issue, so I had to switch to Homebrew instead. Also note that some OpenCV packages might not come with Python bindings by default (you need to use some command-line options).
With Homebrew I ran:
brew install opencv
This installs OpenCV with Python bindings by default.
Just to test things out, I recommend you run Python in interactive mode (run python in CLI without any arguments) and write import cv2. If OpenCV is installed properly and paths to Python bindings are correct, it shouldn’t throw any errors.
Later, we’ll also use Python’s numpy for some simple operations with matrices, so we can install it now as well.
pip install numpy
Reading The Camera Image
Now we can test the camera:
import cv2

capture = cv2.VideoCapture(0)

while True:
    ret, image = capture.read()
    cv2.imshow('Camera stream', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
With cv2.VideoCapture(0) we get access to the camera on index 0, which is the default (usually the built-in camera). If you want to use a different one, try numbers greater than zero; however, there’s no easy way to list all available cameras with the current OpenCV version.
When we call cv2.imshow('Camera stream', image) for the first time it checks that no window with this name exists and creates a new one for us with an image from the camera. The same window will be reused for each iteration of the main loop.
Then we use capture.read() to wait for and grab the current camera image. This method also returns a Boolean ret in case the camera is disconnected or the next frame is not available for some reason.
At the end we have cv2.waitKey(1), which checks for 1 millisecond whether any key is pressed and returns its code. So, when we press q we break out of the loop, close the window and the app ends.
If this all works, we passed the most difficult part of the back-end app which is getting the camera to work.
Filtering Camera Images
For the actual circle detection we’re going to use the circle Hough Transform, which is implemented in the cv2.HoughCircles() method and right now is the only such algorithm available in OpenCV. The important thing for us is that it needs a grayscaled image as input and uses the Canny edge detector algorithm inside to find edges in the image. We want to be able to manually check what the algorithm sees, so we’ll compose one large image from four smaller images, each with a different filter applied.
The Canny edge detector is an algorithm that processes the image in typically four directions (vertical, horizontal and two diagonals) and finds edges. The actual steps that this algorithm makes are explained in greater detail on Wikipedia or briefly in the OpenCV docs.
In contrast to pattern matching this algorithm detects circular shapes so we can use any objects we have to hand that are circular. I’m going to use a lid from an instant coffee jar and then an orange coffee mug.
We don’t need to work with full-size images (depending on your camera resolution, of course), so we’ll resize the image right between capture.read() and cv2.imshow to 640px width, with height scaled accordingly to keep the aspect ratio:
height, width = image.shape[:2]  # color images have shape (height, width, channels)
scale = 640.0 / width
image = cv2.resize(image, (0, 0), fx=scale, fy=scale)
h, w = image.shape[:2]  # dimensions of the resized frame, used for the preview grid below
Then we want to convert it to a grayscaled image and apply first median blur which removes noise and retains edges, and then the Canny edge detector to see what the circle detection algorithm is going to work with. For this reason, we’ll compose 2x2 grid with all four previews.
import numpy as np

t = 100  # threshold for the Canny edge detection algorithm
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blured = cv2.medianBlur(grey, 15)

# Create 2x2 grid for all previews
grid = np.zeros([2 * h, 2 * w, 3], np.uint8)
grid[0:h, 0:w] = image
# We need to convert each of them to RGB from greyscaled 8 bit format
grid[h:2*h, 0:w] = np.dstack([cv2.Canny(grey, t / 2, t)] * 3)
grid[0:h, w:2*w] = np.dstack([blured] * 3)
grid[h:2*h, w:2*w] = np.dstack([cv2.Canny(blured, t / 2, t)] * 3)
Even though Canny edge detector uses Gaussian blur to reduce noise, in my experience it’s still worth using median blur as well. You can compare the two bottom images. The one on the left is just Canny edge detection without any other filter. The second image is also Canny edge detection but this time after applying median blur. It reduced objects in the background which will help circle detection.
Detecting Circles With Hough Gradient
Internally, OpenCV uses a more efficient implementation of Hough Circle Transform called Hough Gradient Method that uses edge information from Canny edge detector. The gradient method is described in depth in the book Learning OpenCV and the Circle Hough Transform on Wikipedia.
Now it’s time for the actual circle detection:
sc = 1   # Scale for the algorithm
md = 30  # Minimum required distance between two circles
# Accumulator threshold for circle detection. Smaller numbers are more
# sensitive to false detections but make the detection more tolerant.
at = 40
# t and at must be passed as param1/param2 keywords; passed positionally
# they would land in the optional `circles` output slot instead.
circles = cv2.HoughCircles(blured, cv2.HOUGH_GRADIENT, sc, md,
                           param1=t, param2=at)
This returns an array of all detected circles. For simplicity’s sake, we’ll care only about the first one. Hough Gradient is quite sensitive to really circular shapes, so it’s unlikely that this will result in false detections. If it does, increase the at parameter. This is why we used median blur above; it removed more noise, so we can use a lower threshold, making the detection more tolerant to inaccuracies and with a lower chance of detecting false circles.
We’ll print the circle’s center and radius to the console and also draw the found circle with its center onto the camera image in a separate window. Later, we’ll send it via WebSocket to the browser. Note that x, y and radius are all in pixels.
if circles is not None:
    # We care only about the first circle found.
    circle = circles[0][0]
    x, y, radius = int(circle[0]), int(circle[1]), int(circle[2])
    print(x, y, radius)

    # Highlight the circle (cv2.circle expects the center as a tuple)
    cv2.circle(image, (x, y), radius, (0, 0, 255), 1)
    # Draw a dot in the center
    cv2.circle(image, (x, y), 1, (0, 0, 255), 1)
This will print to console tuples like:
(251, 202, 74)
(252, 203, 73)
(250, 202, 74)
(246, 202, 76)
(246, 204, 74)
(246, 205, 72)
As you can see on this animation, it failed to find any circles at all. My built-in camera has only 15fps and when I move my hand quickly the image is blurred so it doesn’t find circle edges, not even after applying filters.
At the end of this article we’ll come back to this problem and talk a lot about camera-specific settings and choice of detection algorithm, but we can already say that even though my setup is very bad (only 15fps, poor lighting, a lot of noise in the background, the object has low contrast), the result is reasonably good.
That’s all for now. We have the x and y coordinates and the radius, in pixels, of a circle found in the webcam image.
You can see the full source code for this part on gist.github.com.
2. JavaScript Front-End With Three.js In Browsers
The front-end part is based on the Three.js (version r72) library. We’ll start by just creating a rotating textured sphere representing Earth in the center of the screen, then add the moon spinning around it. At the end we’ll map 2D screen mouse coordinates to the 3D space.
Our HTML page will consist of just a single <canvas> element; see index.html on gist.github.com.
Creating Earth
JavaScript is going to be a bit longer but it’s split into multiple initialization functions where each has single purpose. Earth and moon textures come from planetpixelemporium.com. Note that when loading textures, CORS rules are applied.
var scene, camera, renderer, light, earthMesh, earthRotY = 0;

function initScene(width, height) {
    scene = new THREE.Scene();
    // Setup camera with 45 deg field of view and same aspect ratio
    camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);
    // Set the camera to 400 units along `z` axis
    camera.position.set(0, 0, 400);

    renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
    renderer.setSize(width, height);
    renderer.shadowMap.enabled = true;
    document.body.appendChild(renderer.domElement);
}

function initLight() {
    light = new THREE.SpotLight(0xffffff);
    // Position the light slightly to a side to make shadows look better.
    light.position.set(400, 100, 1000);
    light.castShadow = true;
    scene.add(light);
}

function initEarth() {
    // Load Earth texture and create material from it
    var earthMaterial = new THREE.MeshLambertMaterial({
        map: THREE.ImageUtils.loadTexture("/images/earthmap1k.jpg"),
    });
    // Create a sphere 25 units in radius and 16 segments
    // both horizontally and vertically.
    var earthGeometry = new THREE.SphereGeometry(25, 16, 16);
    earthMesh = new THREE.Mesh(earthGeometry, earthMaterial);
    earthMesh.receiveShadow = true;
    earthMesh.castShadow = true;
    // Add Earth to the scene
    scene.add(earthMesh);
}

// Update position of objects in the scene
function update() {
    earthRotY += 0.007;
    earthMesh.rotation.y = earthRotY;
}

// Redraw entire scene
function render() {
    update();
    renderer.setClearColor(0x000000, 0);
    renderer.render(scene, camera);
    // Schedule another frame
    requestAnimationFrame(render);
}

document.addEventListener('DOMContentLoaded', function(e) {
    // Initialize everything and start rendering
    initScene(window.innerWidth, window.innerHeight);
    initEarth();
    initLight();
    // Start rendering the scene
    requestAnimationFrame(render);
});
This was mostly just basic Three.js stuff. Object and method names are self-explanatory (like receiveShadow or castShadow), but if you’ve never used it before I strongly recommend you look at Lee Stemkoski’s tutorials.
Optionally, we could also draw an axis in the center of the screen to help us with the coordinate system.
var axes = new THREE.AxisHelper(60);
axes.position.set(0, 0, 0);
scene.add(axes);
Adding The Moon
Creating the moon is going to be very similar. The main difference is that we need to set the moon’s position relative to Earth.
function initMoon() {
    // The same as initEarth() with just a different texture
}

// Update position of objects in the scene
function update() {
    // Update Earth position
    // ...

    // Update Moon position
    moonRotY += 0.005;
    radY += 0.03;
    radZ += 0.0005;

    // Calculate position on a sphere
    x = moonDist * Math.cos(radZ) * Math.sin(radY);
    y = moonDist * Math.sin(radZ) * Math.sin(radY);
    z = moonDist * Math.cos(radY);

    var pos = earthMesh.position;
    // We can keep `z` as is because we're not moving the Earth
    // along the z axis.
    moonMesh.position.set(x + pos.x, y + pos.y, z);
    moonMesh.rotation.y = moonRotY;
}
See a live demo here.
Mapping 2D Coordinates To A 3D World
So far, everything is pretty obvious. The most interesting part is going to be how to convert 2D screen coordinates coming from OpenCV (see the output of circle detection above) into the 3D world. When we were defining radii and positions in Three.js we used some units, but these have nothing to do with actual screen pixels. In fact, the dimensions of everything we see in the scene are highly dependent on our camera settings (like aspect ratio or field of view).
For this reason, we’ll make a flat plane object that will be large enough to cover the entire scene, with its center at [0,0,0]. For demonstration purposes we’ll map 2D mouse coordinates to Earth’s position in 3D with a fixed z axis. In other words, we’ll convert only x and y and not worry about z, which is the distance from the object to our camera.
We’ll convert mouse screen positions into a range from -1.0 to +1.0 with its center at [0,0], because we need to work with normalized vectors.
Later we’ll use this exact technique to map the position of the detected circle to 3D and also to match circle size from 2D to 3D.
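The normalization itself is a tiny piece of arithmetic. As a sketch, here it is as a pure Python function (the front-end does the identical math in JavaScript; the function name is illustrative):

```python
def to_ndc(px, py, width, height):
    """Map pixel coordinates to normalized device coordinates:
    [0, width] x [0, height] -> [-1, 1] x [-1, 1], with the y axis inverted
    (screen y grows downwards, scene y grows upwards)."""
    x = (px / width) * 2 - 1
    y = -(py / height) * 2 + 1
    return x, y

print(to_ndc(0, 0, 640, 480))      # (-1.0, 1.0)  top-left corner
print(to_ndc(320, 240, 640, 480))  # (0.0, 0.0)   center of the screen
```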
var mouse = {};
var tmpMesh;

function initPlane() {
    // The plane needs to be large to always cover the entire scene.
    var tmpGeometry = new THREE.PlaneGeometry(1000, 1000, 1, 1);
    tmpMesh = new THREE.Mesh(tmpGeometry);
    tmpMesh.position.set(0, 0, 0);
}

function onDocumentMouseMove(event) {
    // Current mouse position with [0,0] in the center of the window
    // and ranging from -1.0 to +1.0 with `y` axis inverted.
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = - (event.clientY / window.innerHeight) * 2 + 1;
}

function update() {
    // ... the rest of the function

    // We need mouse x and y coordinates to set the vector's direction
    var vector = new THREE.Vector3(mouse.x, mouse.y, 0.0);
    // Unproject camera distortion (fov, aspect ratio)
    vector.unproject(camera);
    var norm = vector.sub(camera.position).normalize();
    // Cast a line from our camera to the tmpMesh and see where these
    // two intersect. That's our 2D position in 3D coordinates.
    var ray = new THREE.Raycaster(camera.position, norm);
    var intersects = ray.intersectObject(tmpMesh);

    earthMesh.position.x = intersects[0].point.x;
    earthMesh.position.y = intersects[0].point.y;
}
Since we’re checking the intersection with a plane, we know there’s always going to be only one.
That’s all for this part. At the end of the next part we’ll also add WebSockets and a <video> element with our camera stream that will be overlaid by the 3D scene in Three.js.
3. WebSockets In Both Front-End And Back-End
We can start by implementing WebSockets in the Python back-end by installing the simple-websocket-server library. There are many different libraries, like Tornado or Autobahn. We’ll use simple-websocket-server because it’s very easy to use and has no dependencies.
pip install git+
We’ll run the WebSocket server in a separate thread and keep track of all connected clients.
import threading

from SimpleWebSocketServer import SimpleWebSocketServer, WebSocket

clients = []
server = None

class SimpleWSServer(WebSocket):
    def handleConnected(self):
        clients.append(self)

    def handleClose(self):
        clients.remove(self)

def run_server():
    global server
    server = SimpleWebSocketServer('', 9000, SimpleWSServer,
                                   selectInterval=(1000.0 / 15) / 1000)
    server.serveforever()

t = threading.Thread(target=run_server)
t.start()

# The rest of the OpenCV code ...
We used the selectInterval parameter in the server’s constructor to make it periodically check for any pending messages. Otherwise the server would send messages only when receiving data from clients, or would have to sit on the main thread in a loop. We can’t let it block the main thread because OpenCV needs it as well. Since we know that the camera runs only at 15fps, we can use the same interval on the WebSocket server.
Then, after we detect the circles, we can iterate all connected clients and send the current position and radius relative to the image size.
for client in clients:
    # Use float division (Python 2) so the normalized values
    # aren't truncated to 0 by integer division.
    msg = json.dumps({'x': float(x) / w, 'y': float(y) / h,
                      'radius': float(radius) / w})
    client.sendMessage(unicode(msg))
You can see the full source code for the server on gist.github.com.
The JavaScript part will mimic the same behavior as we did with the mouse position. We’ll also keep track of the last few messages and calculate a mean value for each axis and radius to improve accuracy.
var history = [];
var ws = new WebSocket('ws://localhost:9000');
ws.onopen = function() {
    console.log('onopen');
};
ws.onmessage = function (event) {
    var m = JSON.parse(event.data);
    history.push({ x: m.x * 2 - 1, y: -m.y * 2 + 1, radius: m.radius });
    // ... rest of the function.
};
Instead of setting Earth’s position to the current mouse position, we’ll use the averaged values from the history variable.
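The averaging over recent messages is just a windowed mean. Here is a sketch of the same logic in Python (names and the window size are illustrative, not taken from the demo code):

```python
from collections import deque

WINDOW = 5  # number of recent messages to average over

history = deque(maxlen=WINDOW)  # old entries fall off automatically

def smoothed(msg):
    """Append the newest reading and return the mean x, y and radius
    over the last WINDOW messages."""
    history.append(msg)
    n = len(history)
    return {
        key: sum(m[key] for m in history) / n
        for key in ("x", "y", "radius")
    }

smoothed({"x": 0.0, "y": 0.0, "radius": 0.1})
print(smoothed({"x": 1.0, "y": 1.0, "radius": 0.3}))
```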
It’s probably not necessary to paste the entire code here, so feel free to look at implementation details on gist.github.com.
Then add one <video> element with the webcam stream filling the entire window, which will be overlaid by our 3D scene with a transparent background.
var videoElm = document.querySelector('video');
// Make sure the video fits the window.
var constrains = { video: { mandatory: { minWidth: window.innerWidth }}};

if (navigator.getUserMedia) {
    navigator.getUserMedia(constrains, function(stream) {
        videoElm.src = window.URL.createObjectURL(stream);
        // When the webcam stream is ready, get its dimensions.
        videoElm.oncanplay = function() {
            init(videoElm.clientWidth, videoElm.clientHeight);
            // Init everything ...

            requestAnimationFrame(render);
        }
    }, function() {});
}
The final result:
To quickly recap what we did and what the above video shows:
- Python back-end runs a WebSocket server.
- Server detects a circle using OpenCV from a webcam stream.
- JavaScript client displays the same webcam stream using a <video> element.
- Client renders 3D scene using Three.js.
- Client connects to the server via WebSocket protocol and receives circle position and radius.
The actual code used for this demo is available on GitHub. It’s slightly more sophisticated and also interpolates coordinates between two messages from the back-end because the webcam stream runs only at 15fps while the 3D scene is rendered at 60fps. You can see the original video on YouTube.
Caveats
There are some findings worth noting:
Circle Detection Isn’t Ideal
It’s great that it works with any circular object but it’s very sensitive to noise and image deformation, although as you can see above our result is pretty good. Also, there are probably no practical examples of circle detection available apart from the most basic usage. It might be better to use ellipse detection but it’s not implemented in OpenCV right now.
Everything Depends On Your Setup
Built-in webcams are generally pretty bad. 15fps isn’t enough and just increasing it to 30fps reduces motion blur significantly and makes detection more reliable. We can break down this point into four more points:
- Camera distortions
Many cameras introduce some image distortion, most commonly a fish-eye effect which has a significant influence on shape detection. OpenCV’s documentation has a very straightforward tutorial on how to reduce distortion by calibrating your camera.
- There’s no official list of devices supported by OpenCV
Even if you already have a good camera it might not work with OpenCV without further explanation. I’ve also read about people using some other library to capture a camera image (like libdc1394 for IEEE 1394-based cameras) and then using OpenCV just to process the images. Brew package manager lets you compile OpenCV directly with libdc1394 support.
- Some cameras work better with OpenCV than others
If you’re lucky you can set some camera options like frames per second directly on your camera but it might also have no effect at all if OpenCV isn’t friendly with your device. Again, without any explanation.
- All parameters depend on a real-world usage
When used in a real-world installation, it’s highly recommended to test the algorithms and filters in the actual environment because things like lights, background color or object choice have significant effects on the result. This also includes shadows from daylight, people standing around, and so on.
Pattern Matching Is Usually A Better Choice
If you see any augmented reality used in practice it’ll be probably based on pattern matching. It’s generally more reliable and not so affected by the problems described above.
Filters Are Crucial
I think correct usage of filters requires some experience and always a little magic. The processing time of most filters depends on their parameters, although in OpenCV 3.0 some of them are already rewritten into CUDA C (a C-like language for highly parallel programming with NVIDIA graphics cards) which brings significant performance improvements.
Filter Data From OpenCV
We’ve seen that circle detection has some inaccuracies: sometimes it fails to find any circle, or it detects the wrong radius. To minimize this type of error it would be worthwhile to implement some more sophisticated method to improve accuracy. In our example we used a median for x, y and radius, which is very simple. A commonly used filter with good results is the Kalman filter, used by autopilots for drones to reduce inaccuracy coming from sensors. However, its implementation isn’t as simple as calling math.mean().
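As a middle ground between a plain mean and a full Kalman filter, an exponential moving average already damps jitter considerably. The sketch below is a generic illustration, not code from the demo; the sample values are made up.

```python
def ema(samples, alpha=0.3):
    """Exponentially weighted moving average; alpha in (0, 1].
    Higher alpha follows new samples faster, lower alpha smooths more."""
    smoothed = []
    value = None
    for s in samples:
        # First sample initializes the filter; later samples blend in.
        value = s if value is None else alpha * s + (1 - alpha) * value
        smoothed.append(value)
    return smoothed

noisy_radii = [74, 73, 74, 76, 74, 90, 72]  # one outlier at 90
print(ema(noisy_radii, alpha=0.5))
```

The outlier at 90 is pulled back toward the running average instead of jumping the rendered circle around for a single frame.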
Conclusion
I first saw a similar application in the National Museum of Natural History in Madrid two years ago and I wondered how difficult it would be to make something similar.
My core idea behind this demo was to use tools that are common on the web (like WebSockets and Three.js) and don’t require any prerequisites, so anyone can start using them right away. That’s why I wanted to use just circle detection and not pattern matching, which would require printing or having some particular real-world object.
I need to say that I severely underestimated the actual camera requirements. High frames per second and good lighting are more important than resolution. I also didn’t expect that camera incompatibility with OpenCV would be an issue.
https://www.smashingmagazine.com/2016/02/simple-augmented-reality-with-opencv-a-three-js/
Closed Bug 828116 Opened 10 years ago Closed 9 years ago
move some toolkit modules to toolkit/modules
Categories
(Toolkit :: General, defect)
Tracking
()
mozilla24
People
(Reporter: Gavin, Assigned: ananuti)
Details
(Whiteboard: [mentor=gavin])
Attachments
(1 file, 6 obsolete files)
We have toolkit/modules now (bug 813833), so we should move PopupNotifications there.
There are a few of these, actually.
Summary: move PopupNotifications to toolkit/modules → move some toolkit modules to toolkit/modules
I think all of the jsms in toolkit/content could move to toolkit/modules, actually. Probably best to wait until after bug 784841 :)
Whiteboard: [mentor=gavin]
hi, can you assign this bug to me ?
Sure thing! Let me know if you need any advice.
Assignee: nobody → harshit080
i had a look on toolkit/content there is PopupNotifications.jsm and some other jsm , so which files we suppose to move to toolkit/module, only PopupNotifications.jsm or all other jsm files ?, and as this is my first bug please guide me as you can. Thanks
i have moved the all jsm files toolkit/modules from toolkit/content,what to do next in order to submit the bug ?
Status: NEW → ASSIGNED
You'll need to attach the patch here. See .
I'd like to add the modules currently living in toolkit/mozapps/shared to this along with the tests that go along with them please.
okay i have no problem,Gavin what to do ?Although i have created a patch of this bug.
I'm not sure which part you're unsure about. You'll need to add those changes to your patch, and then attach your patch to this bug and ask for review, as mentioned in comment 8.
Harsh, could you confirm that you're still working on this?
Flags: needinfo?(harshit080)
Assignee: harshit080 → nobody
Flags: needinfo?(harshit080)
Assignee: nobody → ananuti
Attachment #742670 - Flags: review?(gavin.sharp)
Thanks for the patch! Any of the tests under toolkit/content/tests/ that are associated with these modules will need to move as well. Can you include those in the patch too?
Attachment #742670 - Attachment is obsolete: true
Attachment #742670 - Flags: review?(gavin.sharp)
Attachment #742880 - Flags: feedback?(gavin.sharp)
Attachment #742671 - Attachment is obsolete: true
Attachment #742882 - Flags: feedback?(dtownsend+bugmail)
Comment on attachment 742882 [details] [diff] [review] part 2 take 2 - move modules in toolkit/mozapps/shared to toolkit/modules Review of attachment 742882 [details] [diff] [review]: ----------------------------------------------------------------- Looks good to me!
Attachment #742882 - Flags: feedback?(dtownsend+bugmail) → feedback+
Comment on attachment 742880 [details] [diff] [review] part 1 take 2 - move all jsms in toolkit/content to toolkit/modules Review of attachment 742880 [details] [diff] [review]: ----------------------------------------------------------------- Looking good. Make the changes here and make sure that all the tests pass before asking for final review. You might as well combine both the patches into one review request. Me or gavin is fine as reviewer. ::: toolkit/content/Makefile.in @@ -50,5 @@ > DEFINES += -DMOZ_TOOLKIT_SEARCH > endif > > EXTRA_JS_MODULES = \ > debug.js \ Take debug.js along as well I think ::: toolkit/modules/Makefile.in @@ +25,4 @@ > TelemetryTimestamps.jsm \ > Timer.jsm \ > + PrivateBrowsingUtils.jsm \ > + Sqlite.jsm \ Can you put these in alphabetical order :::.
Attachment #742880 - Flags: feedback?(gavin.sharp) → feedback+
(In reply to Dave Townsend (:Mossop) from comment #19) > ::: toolkit/content/Makefile.in > @@ -50,5 @@ > > DEFINES += -DMOZ_TOOLKIT_SEARCH > > endif > > > > EXTRA_JS_MODULES = \ > > debug.js \ > > Take debug.js along as well I think Done. > > ::: toolkit/modules/Makefile.in > @@ +25,4 @@ > > TelemetryTimestamps.jsm \ > > Timer.jsm \ > > + PrivateBrowsingUtils.jsm \ > > + Sqlite.jsm \ > > Can you put these in alphabetical order Done. > > :::. Done.
Attachment #742880 - Attachment is obsolete: true
Attachment #742882 - Attachment is obsolete: true
Try build are burning I don't quite understand the log. What should I do next?
Flags: needinfo?(gavin.sharp)
pushed only path 1 (move all jsms in toolkit/content to toolkit/modules) to try the patch breaks several things. please let me know what to do next.
Attachment #747244 - Attachment is obsolete: true
(In reply to Ekanan Ketunuti from comment #21) > Try build are burning > > I don't quite understand the log. What should I do next? The error here is: *** Must define relativesrcdir when defining XPCSHELL_TESTS.. Stop. You need to add relativesrcdir to toolkit/modules/Makefile.in, look elsewhere in the source for examples. (In reply to Ekanan Ketunuti from comment #23) > Created attachment 747352 [details] [diff] [review] > fold patch > > > > the patch breaks several things. please let me know what to do next. The errors show lots of references to Services.search being undefined. If you look in Services.jsm you'll see that the search property is dependant on MOZ_TOOLKIT_SEARCH. Searching the source for that shows that it is added as a definition for the preprocessor here:. So you need to move that block to toolkit/modules/Makefile.in.
Flags: needinfo?(gavin.sharp)
this is now green on try :)
Attachment #747352 - Attachment is obsolete: true
Attachment #747885 - Flags: review?(dtownsend+bugmail)
Comment on attachment 747885 [details] [diff] [review] Move modules in toolkit/content and toolkit/mozapps/shared to toolkit/modules Excellent work, thanks!
Attachment #747885 - Flags: review?(dtownsend+bugmail) → review+
Landed:
Target Milestone: --- → mozilla24
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
https://bugzilla.mozilla.org/show_bug.cgi?id=828116
Array is a container which can hold a fixed number of items, and these items should be of the same type. Most data structures make use of arrays to implement their algorithms. Following are the important terms to understand the concept of an array.
Element − Each item stored in an array is called an element.
Index − Each location of an element in an array has a numerical index, which is used to identify the element.
public class ArrayExample {
    public static void main(String args[]) {
        int myArray[] = {44, 69, 89, 635};
        for (int i = 0; i < myArray.length; i++) {
            System.out.println(myArray[i]);
        }
    }
}
44
69
89
635
https://www.tutorialspoint.com/what-is-an-array-data-structure-in-java
Have you or your organization developed customizations for ArcGIS Desktop with the ArcObjects SDK, or developed add-ins for ArcGIS Pro with the Pro SDK? We welcome your feedback in the new ArcGIS Desktop Development Survey, which is now available online.
There is good documentation in the Esri Resources on how to customise your ArcGIS Pro ribbon so a user can add the tools they use most into the Pro ribbons for easy access, and remove ones they don't use.
Customize the ribbon options—ArcGIS Pro | Documentation
Customize ArcGIS Pro with geoprocessing tools—ArcGIS Pro | Documentation
Where I have found it harder to interpret the existing help documentation is in identifying what ribbon configuration is included as part of a shared Project Package. Share a project package—ArcGIS Pro | Documentation
I did some testing to identify what customisation of the ribbon i could do, that would be shared as part of a project package. The answer is that most of the ribbon customisation is cached locally, and not included in a project package. The purpose is for a user to be able to customise their setup to more greatly enable themselves to work.
This does present as a limitation if an organisation wants to influence editing workflows by making tested GP tools available in the ribbon, and removing tools that it doesn't wish users to use (this could be due to customisation or other factors).
One configuration of the ribbon that did successfully get included in a published project package, was the configuration i did in the Customize Analysis Gallery. This is accessible in the Analysis tab, in the Tools group and displays as a window of tools you can scroll through.
I haven't tested all options, and I'd be interested in seeing what other ribbon configurations people have played around with that publish as part of a project package.
But in summary, for those skipping through the content - use the Analysis Gallery to set the tools you want saved as part of a project package.
Happy Mapping
Get netCDF sample data here - Example netCDF files
For more questions, please contact nnayak@esri.com
ArcGIS Pro 2.6 brings some new capabilities for working with geopackages. Here I take a look at some basic functions and try out editing gpkg layers as well as adding rasters and shapefiles to a gpkg. For the most part things seem to work and there's a workaround for using the Raster to Geopackage tool.
If you're working with gpkg in ArcGIS Pro and have any tips or suggestions please let me know.
Geopackages in ArcGIS Pro 2.6 | burdGIS - YouTube
No:)
Not being a regular user of pandas/numpy I find using such libraries difficult as I cannot visualise what I'm working with... call me old skool...
I recently came across dtale, a rather cool Python module that displays the data and allows you to manipulate it as if it were a spreadsheet. It also has a set of rather impressive methods for charting your data.
I immediately thought it would be great to use this inside ArcPro, using the notebook capability built right into ArcPro 2.5!
cd C:\Users\XXX\AppData\Local\ESRI\conda\envs\YYY
where XXX is your user name and YYY is the cloned environment folder name
cd Scripts
pip install --upgrade dtale
This will install d-tale into the cloned environment and will be accessible within ArcPro next time you open it.
So here is some sample code I then type into notebook in ArcPro, it takes a layer loaded in the map and creates a dataframe from 2 numeric fields, when you execute `d` a URL pops up and you click on it to see your data as a spreadsheet
import dtale
import pandas as pd
import arcpy

np = arcpy.da.FeatureClassToNumPyArray("EA_Sample_2020", ["LOC_NO", "Z020_LOC_T"])
df = pd.DataFrame(np)
d = dtale.show(df)
d
So in the window I create a new field called acs, and it's the SUM of the previous two fields.
To get this new and improved data back into a pandas data frame, you can type the following code into the notebook and then do something with it.
df2 = d.data.copy()
df2.head()
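Outside of the d-tale window, the same acs field (the SUM of the two numeric fields) can be computed directly in pandas. A minimal sketch, using made-up values for the example's field names:

```python
import pandas as pd

# Toy stand-in for the FeatureClassToNumPyArray result; the field names
# mirror the example above, but the values are invented.
df = pd.DataFrame({"LOC_NO": [1, 2, 3], "Z020_LOC_T": [10, 20, 30]})

# The same SUM column that the d-tale UI builds interactively.
df["acs"] = df["LOC_NO"] + df["Z020_LOC_T"]

print(df["acs"].tolist())  # → [11, 22, 33]
```
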
Welcome to another edition of This Week’s Picks – ArcGIS Pro! In my day-to-day I often browse GeoNet and other areas where product discussions occur to get a sense for what’s coming up with the product, and after spending a fair amount of time with 2.5 there were some areas I wanted to explore further based on past questions and support cases. Fortunately, there were some resources recently published that cover these topics that I wanted to share in case you hadn’t had a chance to explore them yet.
This week we will look at the following:
Without further ado…
Customize the layout gallery
First up, customizing the layout gallery. In working with this release early on I got a chance to check out some of the layout changes coming in 2.5 and I was very excited about the ability to customize my layout gallery and have a preview before adding these (layout files in the .pagx format). Obviously not every layout is suitable for every application and building custom layouts and putting them in a gallery allows for making minor tweaks without building new layouts for every project. Aubri’s blog takes you step-by-step on how to access and customize the gallery for your own purposes, new at 2.5. Check it out right here!
Pro Tip: Aubri points out that the layout names in the gallery don’t necessarily align with the file names (or filenames if you prefer). This is because the title is read directly from the metadata, so you need to edit it there to update it (along with thumbnail, etc.):
Select a layout file from the import gallery
As you may have read in the ArcGIS Ideas implemented at 2.5 video, several improvements with tables originated as community contributed ideas. Find and replace for example is one that has been in high demand. The Map Exploration team also worked on several other usability and productivity improvements that you can take advantage of. For example, default positions for attribute tables is now configurable and freezing columns is also possible. One more that I have been using heavily is configuring pop-ups using a raster field in a table. Check out the blog here which includes documentation links for each of these new areas.
Create and add Python notebooks in ArcGIS Pro
The last pick this week covers the new built-in integration with ArcGIS Notebooks that allows you to both create and edit Jupyter notebooks right inside ArcGIS Pro. This increasingly popular open-source tool has been common in the Python data science community and can now be used alongside your other project elements to stay systematized and do things like visualizing Pandas Dataframes or prototyping workflows. This blog will take you step-by-step on how to create a new notebook, import an existing notebook and launch a notebook all within Pro! Give it a look through over here.
That wraps up issue #9 of This Week’s Picks – ArcGIS Pro. I hope you found these resources useful and thanks for reading! As usual, stay tuned for future picks and if you are interested please also check out This Week's Picks - ArcGIS Online and ArcGIS Enterprise by my colleagues. Thanks again for reading and happy mapping!
I’ve been using ArcGIS Pro for several years now and strangely, I only recently learned about a quite cool capability of creating symbology based on colors that are stored as attributes (i.e. either as RGB values, HEX color codes or CMYK).
After spending an hour trawling through the Geonet Forums I saw a few threads where people were discussing this, however I was unable to find a defined process for setting this symbology up. The documentation doesn't seem to contain it either, so I decided to summarize the workflow in a quick blog post.
So, the situation that we’re going to be looking at is this:
A local council have their zoning polygons imported from MapInfo and they have three attribute fields (as below) – each storing a color (either Red, Green or Blue). Each zoning code must have a specific color used in the map to depict it, so we’re going to use these attributes to create symbology in ArcGIS Pro 2.5.
Step 1: Firstly, we need to create a new text (STRING) field:
Add a field in the Fields Designer or use the Add Field geoprocessing tool.
If you’re using the Field Designer in Pro, don’t forget to save your changes:
Step 2: Calculate the field values. In my example, I will be using Python. You can also create an expression using Arcade. Some examples are given in this Geonet thread:...
In my case, I used the following expression: "rgb("+str(!RED!)+", "+str(!GREEN!)+", "+str(!BLUE!)+")"
The output in the new color field should look like the following:
Other examples of how the color can be defined in that text field are given here:....
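The field-calculator expression above is plain string concatenation; here is a quick sketch of what it produces, runnable outside ArcGIS (the red/green/blue parameters stand in for the example's RED, GREEN and BLUE fields):

```python
def rgb_string(red, green, blue):
    # Mirrors the calculator expression:
    # "rgb(" + str(!RED!) + ", " + str(!GREEN!) + ", " + str(!BLUE!) + ")"
    return "rgb(" + str(red) + ", " + str(green) + ", " + str(blue) + ")"

print(rgb_string(255, 128, 0))  # → rgb(255, 128, 0)
```
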
Step 3: Now, the interesting stuff. Set up the attribute-driven symbology.
Enable the attribute-driven symbology (based on the combined RGB values). Unless you have a subtype field defined in the feature class, your starting point will be something similar to this:
The layer in the Map will show the colors read directly from the attribute table. However, there's one problem. Even though the map will show the colors from the attribute table (stored as RGB, HEX or CMYK), the layer in the Contents pane will still show <all other values>. You won't be able to automatically create a legend or publish this straight to AGOL.
Our tech. support people investigated this, and this seems to be an issue that had been logged as the following bug: BUG-000104316: Attribute-driven symbology assigned to features is not reflected in the legend information for the feature layer
Hope it’s going to get addressed! In the meantime, hope the above is helpful. Please feel free to reach out if you have any questions or comments. Happy mapping!
Source: https://community.esri.com/t5/arcgis-pro-blog/bg-p/arcgis-pro-blog
How can I write information into a file on my server in java servlets?
go to the Java API and look up the java.io package. you write to a file from a servlet the same way you write to a file from any other java code (except applets). i advise using a FileWriter wrapped in a BufferedWriter. good luck. here's another link to the servlet API. the APIs are your best friends. when i have a question about how to implement something, i go there and 90% of the time I find the answer.
I've already tried a FileWriter wrapped in a BufferedWriter. I have the following code:
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.util.*;
import java.net.*;
public class MindmapWWWBoard extends HttpServlet {
String username;
String message;
private Writer out2;
public void doPost(HttpServletRequest request,
HttpServletResponse response)
throws ServletException, IOException {
response.setContentType("text/html");
PrintWriter out = response.getWriter();
postReceipt(request, out);
}
private void postReceipt(HttpServletRequest request,
PrintWriter out) {
username = request.getParameter("username");
message = request.getParameter("message");
out.println("<html>\n" +
"<head>\n" +
"<title>\n" +
"Your Post\n" +
"</title>\n" +
"</head>\n" +
"<body>\n" +
"<b>Username:</b> " + username + "<br>" +
"<b>Message:</b> " + message +
"</body>\n" +
"</html>");
try {
FileWriter fw = new FileWriter("mindmap.html", true);
BufferedWriter out2 = new BufferedWriter(fw);
out2.write(message);
out2.flush();
out2.close();
}
catch (IOException e) { }
}
}
Does anyone know why this isn't working? Thanks in advance.
It compiles ok, but when I run it on the server, it doesn't write to the HTML file. It shows the username and message after the post.
how about this:
that's bound to have a bug or two but i'm sure you can fix it up.

Code:
File file = new File("/home/your/path", "mindmap.html");
byte[] fileBuffer;
FileOutputStream outStream;
FileInputStream inStream;
int fileLength;

if (file.exists())
    fileLength = (int) file.length();
else
    fileLength = 0;

// buffer for your message + file's contents
fileBuffer = new byte[message.length() + fileLength];

// stream in the file to the buffer
if (file.exists()) {
    try {
        inStream = new FileInputStream(file);
        int numRead = 0;
        int counter = 0;
        while (numRead != -1 && counter < fileLength) {
            numRead = inStream.read(fileBuffer, counter, fileLength - counter);
            counter += numRead;
        }
    } catch...
}

byte[] messageBytes = message.getBytes();

// add message to the file buffer
for (int j = fileLength; j < fileBuffer.length; j++)
    fileBuffer[j] = messageBytes[j - fileLength];

// shove fileBuffer back into file
try {
    outStream = new FileOutputStream(file);
    outStream.write(fileBuffer);
    outStream.close();
} catch...
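One likely reason the original servlet appears to do nothing: new FileWriter("mindmap.html", true) resolves against the servlet container's working directory, and the empty catch block hides the resulting IOException. In a servlet you would normally build an absolute path (for example via getServletContext().getRealPath(...)). Here is a hedged, standalone sketch that appends to a known absolute location so it can run outside a container — the temp-directory path is only for illustration:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class Main {
    // Append one line to the file, creating it if it does not exist.
    static void append(Path file, String message) throws IOException {
        Files.write(file, List.of(message),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        // Absolute path -- in a real servlet, derive this from getRealPath().
        Path file = Paths.get(System.getProperty("java.io.tmpdir"), "mindmap.html");
        Files.deleteIfExists(file);
        append(file, "first post");
        append(file, "second post");
        System.out.println(Files.readAllLines(file));  // → [first post, second post]
    }
}
```

Unlike the original, any IOException propagates instead of being silently swallowed, so a bad path fails loudly.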
Source: http://forums.devshed.com/java-help-9/writing-server-10132.html
As we've seen, every standalone Java program must declare a method with exactly the following signature:

public static void main(String[] args)

This signature says that an array of strings is passed to the main() method. What are these strings, and where do they come from? The args array contains any arguments passed to the Java interpreter on the command line, following the name of the class to be run. Example 1-4 shows a program, Echo, that reads these arguments and prints them back out. For example, you can invoke the program this way:

% java je3.basics.Echo this is a test

The program responds:

this is a test

In this case, the args array has a length of four. The first element in the array, args[0], is the string "this", and the last element of the array, args[3], is "test". As you can see, Java arrays begin with element 0. If you are coming from a language that uses one-based arrays, this can take quite a bit of getting used to. In particular, you must remember that if the length of an array a is n, the last element in the array is a[n-1]. You can determine the length of an array by appending .length to its name, as shown in Example 1-4.
This example also demonstrates the use of a while loop. A while loop is a simpler form of the for loop; it requires you to do your own initialization and update of the loop counter variable. Most for loops can be rewritten as a while loop, but the compact syntax of the for loop makes it the more commonly used statement. A for loop would have been perfectly acceptable, and even preferable, in this example.
package je3.basics;
/**
* This program prints out all its command-line arguments.
**/
public class Echo {
public static void main(String[] args) {
int i = 0; // Initialize the loop variable
while(i < args.length) { // Loop until the end of array
System.out.print(args[i] + " "); // Print each argument out
i++; // Increment the loop variable
}
System.out.println(); // Terminate the line
}
}
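Since the text notes that a for loop would have been preferable here, the loop can be rewritten with for; in this sketch the loop body is factored into a helper method (an addition of mine, not from the book) so the result is easy to inspect:

```java
public class EchoFor {
    // Build the echoed line with a for loop instead of a while loop.
    static String echo(String[] args) {
        StringBuilder sb = new StringBuilder();
        // Initialization, test, and update all live in the for header.
        for (int i = 0; i < args.length; i++)
            sb.append(args[i]).append(" ");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(echo(args));
    }
}
```
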
Source: http://books.gigatux.nl/mirror/javaexamples/0596006209_jenut3-chp-1-sect-4.html
15. Appendix¶
15.1. Interactive Mode¶
15.1.1. Error Handling¶

When an error occurs, the interpreter prints an error message and a stack trace. In interactive mode, it then returns to the primary prompt; when input came from a file, it exits with a nonzero exit status after printing the stack trace. Typing the interrupt character (usually Control-C or Delete) to the primary or secondary prompt cancels the input and returns to the primary prompt. [1] Typing an interrupt while a command is executing raises the KeyboardInterrupt exception, which may be handled by a try statement.
15.1.2. Executable Python Scripts¶
15.1.3. The Interactive Startup File¶

If you want to use the startup file in a script, you must do this explicitly in the script:

import os
filename = os.environ.get('PYTHONSTARTUP')
if filename and os.path.isfile(filename):
    with open(filename) as fobj:
        startup_file = fobj.read()
    exec(startup_file)
15.1.4. The Customization Modules¶
Source: https://docs.python.org/2/tutorial/appendix.html
Topic closed. 4 replies. Last post 2 years ago by rcbbuckeye.
Well, a Quick Pick won the $925,000 Texas 2-Step on 10-05-15. Congrats. But, there is something curious? interesting? about the winner - The address of the store and the zip code PLUS the Jackpot amount. Here is the info from Texas Lotto site:
$925,000 1 Winner
Where Sold:
Ok. 1. Look at the address. 12725, now look a bit harder. See anything?
Now, look at the Zip code. See it? If not, add up the numbers of both. 1-2-7-2-5 is 17, but if you add the Zip first 7-7-4-8-9 = 35, then use the Jackpot as a guide, where the numbers are the first (3), then 000. 1+2+7 + 25! = 35. Now, this is a regular occurrence with Texas Lotto. Now, if you use the no-repeat +1 for LIKE numbers, then 12725 would be 127 35. In my Lotto usage, 0=7 and 7=0, I've found this to be uncannily accurate. I've always ascertained that there is something to do with the selling store's address, the Zip Code and the packet number. Almost always with a QPick.
Now, I also use the alt-value (in every one of my posts) where 9=4, 8=3 and 6=5. 7=0 and 0=7. For grins, look at this.
9+2+5 = 16 if nothing is changed. And, or using my theory, 4+2+5 = 11. If you apply this to the address, you get 1-2-0-3-5 = 11. That's one piece.
Then, the Zip - 77489 OR 00489 = 00435 (the 9 is 4 and 00434 = yep, 11. My point is this: ALL 3 numbers in the Where Sold Quick Pick info, with the 'alt-value' adjustments, the Jackpot, the address and the Zip all equal 11. Oh, address: 12035 = 11. And, if you simply use the $925K and the address 12725, not changing anything, 925,000 (925,000 is 925,708, where the 000 is 777 but no repeating or side-by-side numbers, then if you change the Jackpot to the 'alt-value' system I use, you will get 925,712 L>R = 26 and 12725 = 26. This is only giving value to the 000. Then, the Zip - 77489 is (if using same 7-0 logic, 01489 (first 7=0 and 2nd 7 =1, so 01489 is 01489 (9=4) so 01484 = 26. 1 can be 10, and my point is, there is some sort of similarity in where the Jackpot = the address and the zip BOTH.
Another item to view: Say you take away LIKE numbers in all three: 925,000 and address 12725, then LIKE's are 2 and 5. Then, 925,000 would be 9XX,XXX (removed 2,5 and 0) and in address, 12035 = 1XX3X (removed ONE 2 and 7,5) and 2nd 2 is a LIKE 2+1=3. So, in Jackpot, total is 9 or 4 and in address, 1+3 = 4. Will this work for the Zip? 01489 is 01489 or using alt-values, 01435 (9=4 and 4 has been used, so LIKE 4+1=5. End up with this for Zip: 00439, now back to jackpot 925,000 - the LIKE number is 9 and 0, so zip is XX43X and Jackpot is X25,XXX - and yep - BOTH = 7. With respect to the 77, it would be 77=00, the side-by-side 7s the 2nd 7 would be a '0' (as in 2nd 7) 77=00, so you would have 00439. Remember, you are removing LIKE numbers of each, so the result would be 925,000 = X25,XXX and zip 00489 would first be 00439 because 8=3, but because a 4 is used before the 9, the 9 would remain a 9 - if it was a 4, it would change to a 5. So, 00439, would be XX43X (removing the 9 and 0). Result? BOTH = 7.
The reason I'm showing this, is my opinion (belief) that there is a mathematical algorithm or logarithm that says the address, zip and Jackpot must all 'match-up' in some way with regard to the Lotto's equation, since it was a Quick Pick. Now, this can be carried further with the Jackpot, using the three 000s and I'm certain, the results would be the same. I am by no means verifying my theory, but I've noticed there DOES seem to be a correlation between winners and the selling store's address, zip, and when shown, the packet number. They all add up when you apply two simple concepts - alternative values and the side-by-side (no repeat in the line) numbers. Again, IMO, the very reason for alt-values is to make more added totals single digit numbers. Thus, making 7=0 and 9=4 etc.
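For anyone who wants to check digit sums like these themselves, here is a small sketch of the substitution-and-sum procedure described above (the "alt-value" map 9=4, 8=3, 6=5, 7=0 and 0=7 is taken directly from the post; this makes no claim the method predicts anything):

```python
# The post's "alt-value" substitutions: 9=4, 8=3, 6=5, 7=0 and 0=7.
ALT = str.maketrans("98670", "43507")

def digit_sum(number_string, use_alt=False):
    # Optionally substitute alt-values, then add up the digits.
    digits = number_string.translate(ALT) if use_alt else number_string
    return sum(int(ch) for ch in digits if ch.isdigit())

print(digit_sum("77489"))      # zip code as-is: 7+7+4+8+9 → 35
print(digit_sum("925", True))  # jackpot digits with alt-values: 4+2+5 → 11
```
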
I have some more PICK 3 (an actual system) that I'm going to post later today-tonight when I return (Dr. appt). THIS has worked for me in the last week and it involves the DRAW# with the two 6-digit numbers (one on left and one on right) and I will show you that buying ONE QP will give you enough info to calculate the winning numbers. In a simple summation, it's adding two numbers to get the first number, then moving L>R or R<L the # of moves of the last number added (ex: 12375, you find the largest number, then the two numbers that add up to, in this case, the 7). So, remove the 5 and 2. depending on which direction (this is EASILY figured out by the date, with each draw alternating R and L) say 12375, you know the largest # is the 7, so remove 2 and 5. Say in this case, the adding starts L>R on 1, you would go 1 to 3, 1+3=4. First number. Then you will move 3 moves (counting the first move ON the 3, so 3,7,skipping 5 back to 1 at beginning and THEN, to get the next number, you add the FIRST number, 4 with the total of 2nd number, another 4 (3+1). You get 8. Then, you reduce the 8 to the alt-value of 3, so re-add 4+3=7. Well, you are not done, So re-add 4+7=11.
I will show this in full - it is then checked with the two 6-digit numbers on the LR bottom of ticket. You get the next number by adding the first to second number, then to get last number, you add first, 2nd and last together, then if two digits, add together until you get ONE digit, then RE-ADD all three numbers - this is exactly where the 'alt-value' works. So, as a quick ex: if you have 4 then 2, then you get 3 for your last number? To get the actual third number, you add 4+2+3=9 and THAT is the third number for your Pick 3. I'll show this later. Gotta go. It works, friends. And, the best thing, is you (when removing the two digits in the DRAW number, you get the TOTAL for that Pick 3, so you will know if you have the right numbers!!!
Later, hope this helps, try it - but I will post an example where I won and show all of this. (also, 2-3 nights ago, I missed by one number, but did get the total of 4, which paid very good - once you KNOW the total, and it is a good paying low or high total? You can simply play 5-10-20 bucks on the total and come out great - but you should be able to add carefully and get all three numbers. Then play 3 EX/ANY in various order and you got it.
Cash Crown.
"People who have never created Progress only see one word - Impossible."
"People who create Progress see TWO words - "Im Possible". TWO one."
<Moved to Lottery Systems forum>
Please post in the appropriate forum ... thank you.
This is not that uncommon and does not really mean anything. When doing this sort of after-the-fact analysis we can often find the numbers. This is true for almost any type of analysis and is not limited to addresses, zips, etc... It's easy to make connections after the drawing; the hard part is doing it before the game. In the past I used to play the day of the month for one of my numbers. This gives 28 to 31 numbers throughout the month, and it's a sure bet that several of these will show, even more so for the smaller games.
RL
....
All well and good AFTER THE FACT. Doesn't help much now unless....... all the winning tickets showed this same pattern.....or
you know the zip code and address that will win tonight? Or next week? That would be great.
Crap....looking at bills.
Gas bill is $18.25. So half of 18 = 9 +2+5 = 16!!!
The zip is 77849. Yep! You guessed it 35!!!!!!
And the transposed address 12305 = 11!!!!!
My new truck plates are 672......6+7-2 = 11!!!!!
This system is looking great!
Well it was. Electric bill ruined it $157.22. Equals 17.
Maybe if I add my phone number, subtract my SSN, multiply by my DL and add my address. And then we take that number and use it with the draw number on the ticket, divided by the number of tickets sold plus the store identifier and routing number.
Now wait. Only one more step. We divide all these numbers by Pi. Rearrange the order, get all the mirrors, and make wheels.
AFTER the fact, it don't help. Out on the highway you can find license plates that hit in Pick3 this week, maybe even Pick4. Population signs, highway numbers, mile markers, school bus numbers, billboards, etc. Go down any aisle in Kroger on Legacy and look through the expiration on the can goods, prices in the produce section, try the dairy and meat departments........
Yeah, it's nice, but how does that help predict future drawings?
My greatest accomplishment is teaching cats about Vienna Sausage. When I need a friend, all I need do is walk outside, pop open a can, and every little critter in the neighborhood drops by to say "Hi!"
Interesting that 2 QP's won a starting $200,000 jackpot. One in Houston, the other in Waco. There were only 408,219 tickets sold for that draw..
Source: https://www.lotterypost.com/thread/294750
#include <DerivStencil.H>
DerivStencil is meant to be used to encapsulate the information necessary to take finite difference derivatives at a point in space. Every point in the stencil has a weight. You add them (the boxarrayindex and the weight) at the same time, and you can manipulate the weights en masse by real number operations. Stencils may not interact with each other with the same sort of arithmetic, because that would bring up issues as to what to do when there is incomplete intersection between the stencils.
default constructor; creates empty vectors
copy constructor; sets *this = a_dsin
make derivstencil empty
return length of vectors
get iv at ivec
get weight at ivec
assignment operator
Multiply each weight by a_facin; does nothing if vectors are of zero length.
Divide each weight by a_denom; does nothing if vectors are of zero length. Be advised: this function does no checking to see if a_denom is close to zero.
Add a_facin to each weight; does nothing if vectors are of zero length.
Subtract a_facin from each weight; does nothing if vectors are of zero length.
Referenced by isDefined().
Source: http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.2/classDerivStencil.html
NAME¶stpcpy - copy a string returning a pointer to its end
SYNOPSIS¶
#include <string.h>
char *stpcpy(char *dest, const char *src);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
stpcpy():
- Since glibc 2.10:
- _POSIX_C_SOURCE >= 200809L
- Before glibc 2.10:
- _GNU_SOURCE
DESCRIPTION¶The stpcpy() function copies the string pointed to by src (including the terminating null byte ('\0')) to the array pointed to by dest. The strings may not overlap, and the destination string dest must be large enough to receive the copy.
RETURN VALUE¶stpcpy() returns a pointer to the end of the string dest (that is, the address of the terminating null byte) rather than the beginning.
ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶This function is part of POSIX.1-2008.

BUGS¶This function may overrun the buffer dest.
EXAMPLE¶For example, this program uses stpcpy() to concatenate foo and bar to produce foobar, which it then prints.

#include <string.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    char buffer[20];
    char *to = buffer;

    to = stpcpy(to, "foo");
    to = stpcpy(to, "bar");
    printf("%s\n", buffer);
    exit(EXIT_SUCCESS);
}
Source: https://manpages.debian.org/testing/manpages-dev/stpcpy.3.en.html
18 October 2007 11:19 [Source: ICIS news]
The source did not specify the original schedule for the shutdown at the unit, which has a propylene capacity of 200,000 tonnes/year and is the first of its kind in southeast Asia. Impact on the downstream polypropylene (PP) line had been minimal, the source added.
This is not the first maintenance shutdown experienced at the complex, also known as an olefins conversion unit, which first came on stream in May last year.
PCS is a joint venture company of oil major Shell (50%) and Japan-Singapore Petrochemicals (50%), in which Sumitomo Chemicals is a major
Source: http://www.icis.com/Articles/2007/10/18/9071041/pcs-plans-metathesis-unit-restart-in-one-week.html
This is the mail archive of the pthreads-win32@sources.redhat.com mailing list for the pthreads-win32 project.
>> Since the primary purpose of this library is to
>> enhance portability of programs using pthreads,
>> counting on pthread_exit() or pthread_cancel()
>> to work in a non-portable way seems self-defeating.

Well, you are certainly allowed NOT to use pthread_exit() and pthread_cancel() in your C++ programs. As long as the implementation invokes C thread cleanup handlers *IN C* programs IT IS CONFORMING. Why do you care how a pthread library for C++ implements it wrt C++ stack unwinding? As long as you are not using it (_cancel/_exit) that should not be an issue for you.

> This is exactly the reason why i have created the
> setjmp/longjmp version.

I am really puzzled. Since there are no standard PTHREAD C++ bindings that would guarantee things such as C++ stack unwinding on thread exit/cancellation, any C++ threaded program written to exploit such things is NOT truly portable. However, there is practically no way to make thread cancellation/exit work in C++ programs using setjmp/longjmp, because the C++ standard restricts the LEGAL usage of longjmp in *C++* programs (ISO/IEC 14882:1998(E), Pg 347).

regards,
alexander.

Thomas Pfaff <tpfaff@gmx.net>@sources.redhat.com on 12/20/2001 10:25:14 AM
Sent by: pthreads-win32-owner@sources.redhat.com
To: "Pthreads-Win32@Sources.Redhat.Com" <pthreads-win32@sources.redhat.com>
Subject: Re: pthreads VCE: problem with destructor

On Wed, 19 Dec 2001, reentrant wrote:

> Due to the nature of the beast, just as the responsibility lies with the
> programmer to design the program to "cleanly" (including running
> destructors, ad nauseum) exit when exit() is called, it should also be
> the responsibility of the programmer to design a thread to cleanly exit
> when pthread_exit() or pthread_cancel() are called. Just as exit() should
> not be called in a C++ program if dtors are desired to run, neither
> should pthread_exit() from a thread. IMHO.
>
> I would imagine that exit() was chosen not to be modified to account for
> running C++ destructors for about the same reasons that pthread_exit()
> should not account for running C++ destructors. Dtors not being called
> in these cases is and should be expected behavior.
>
> Regardless, as Mr. Bossom so well has already stated: I certainly
> wouldn't depend on pthread_exit() or pthread_cancel() allowing
> destructors to run to be portable though. Since the primary purpose of
> this library is to enhance portability of programs using pthreads,
> counting on pthread_exit() or pthread_cancel() to work in a non-portable
> way seems self-defeating.

This is exactly the reason why i have created the setjmp/longjmp version. There may be bugs in it but i think they could be discovered if more would use it.

> While attempting to allow dtors to be run upon a pthread_exit() or
> pthread_cancel() is certainly a noble goal, it's beyond the scope of the
> pthread library. It's the programmer's responsibility IMHO.
>
> So, as I hope is obvious by this point :), I am completely in support of
> the "nasty hacks" being removed and a clean C interface version of the
> library being provided only.

Try the GC stuff and report bugs.

> My two cents,
> Dave
>
> --- Ross Johnson <rpj@ise.canberra.edu.au> wrote:
> >)?

No. The only thing you need is to define __CLEANUP_C to avoid the default handling in pthread.h.

/*
 * define defaults for cleanup code
 */
#if !defined( __CLEANUP_SEH ) && !defined( __CLEANUP_CXX ) && !defined( __CLEANUP_C )
#if defined(_MSC_VER)
#define __CLEANUP_SEH
#elif defined(__cplusplus)
#define __CLEANUP_CXX
#else
#define __CLEANUP_C
#endif
#endif

These defaults have been added to be backward compatible and it is time to remove them.

Regards,
Thomas
Source: http://sourceware.org/ml/pthreads-win32/2001/msg00156.html
using simple timers in ROS - doesn't show up in rqt [closed]
I am on ROS Hydro (with Catkin). I made a node with a C++ timer, with the following code:
#include <iostream>
#include <ctime>
#include <cstdlib>

using namespace std;

int main(int args, char* argv[])
{
    clock_t startTime = clock(); // Start timer
    double secondsPassed;
    bool flag = true;
    while (flag)
    {
        secondsPassed = (clock() - startTime) / CLOCKS_PER_SEC;
        if (secondsPassed >= 120)
        {
            std::cout << "Hello ROS, this is a C++ node and " << secondsPassed
                      << " seconds have passed" << std::endl;
            flag = false;
        }
    }
}
This works just fine ! but the funny part is that it doesn't show up on rqt. If the node is running for 120 seconds, it should register in rqt for those 120 seconds - which it doesn't ! Any hints ? explanation ? or am I doing something wrong ?
Source: https://answers.ros.org/question/175170/using-simple-timers-in-ros-doesnt-show-up-in-rqt/
Closed Bug 1414390 Opened 4 years ago Closed 3 years ago
Introduce a pref to store BCP47 locale list
Categories
(Core :: Internationalization, enhancement, P3)
Tracking
()
mozilla59
People
(Reporter: zbraniecki, Assigned: zbraniecki)
References
(Blocks 2 open bugs)
Details
Attachments
(1 file)
As we're closing down on the remaining uses of general.useragent.locale, it's time to start planning its replacement.

I'd like to introduce a new pref which will:
- store a list of locales
- preferably as BCP47 language tags
- probably using the `intl.locale` pref namespace

The first idea is to use `intl.locale.requested`, but I'm open to discussing other ideas. One reason to look for something more specific is that if in the future we'll want to introduce multiple requested lists (say, requested locales for Firefox, separate requested locales for DevTools), we may have to figure out how to make it more specific.

Maybe:
- intl.locale.requested
- intl.locale.requested.devtools

is enough?
Pike, Jonathan - do you have any preference?
Flags: needinfo?(l10n)
Flags: needinfo?(jfkthame)
(In reply to Zibi Braniecki [:gandalf][:zibi] from comment #0)
> Maybe:
> - intl.locale.requested
> - intl.locale.requested.devtools
>
> is enough?

That seems reasonable to me. It provides a global/default value, but can easily be extended with more-specific overrides for .devtools and for any other components that might someday need to be configured separately.

Maybe it should be "intl.locales.requested" (note the "s") to reinforce the fact that it is a list, not just a single locale.
Flags: needinfo?(jfkthame)
Related question first, how do you anticipate this to work with matchOS?
Flags: needinfo?(l10n) → needinfo?(gandalf)
Flags: needinfo?(gandalf)
> Related question first, how do you anticipate this to work with matchOS?

Not sure yet. One option would be to do what Jonathan suggested when we designed LocaleService - empty requested means "use OS".

Alternatively, we could keep the matchOS as is, and consider adding `intl.locale.matchOS.devtools` etc. Or reverse it and do:

intl.locale.devtools.requested
intl.locale.devtools.matchOS

I'm not sure which way to go here, just brainstorming.

> Maybe should be "intl.locales.requested" (note the "s") to reinforce the fact that it is a list, not just a single locale.

I thought we're aiming to make the namespace of preferences be "tree-like", as in "intl.*" is all Intl related, "intl.locale.*" is all intl/locale related? If that's the case then having `intl.locales.requested` and `intl.locale.matchOS` would be confusing, no?
Flags: needinfo?(gandalf)
Do you have a suggestion?
Flags: needinfo?(l10n)
Empty being matchOS sounds good to me. We'll also need to handle users, i.e., junk being manually entered in about:config.
Flags: needinfo?(l10n)
Assignee: nobody → gandalf
Status: NEW → ASSIGNED
Comment on attachment 8930189 [details]
Bug 1414390 - Add intl.locale.requested locale list to replace general.useragent.locale.

Here's the WIP for this patch. Jonathan, Axel: can you skim through to verify that the code looks reasonable?

A few things are noteworthy:

- I am removing general.useragent.locale|intl.locale.matchOS. Originally I thought about trying to handle both, but with the landing of bug 1337078 in 58, we no longer have any place where the codebase would operate via manipulating the old pref, and this is much cleaner now.
- I removed the intl.preferences override. I looked through all values localizers set there, and it should all be correctly captured by the language negotiation and LikelySubtags. The only exception is `ja-JP-mac`, which I provided a workaround for by turning it into `ja-JP-macos`. It would be nice to change that in our naming structure ("ja-JP-mac" -> "ja-JP-macos"), which would allow us to remove all the special-casing, but for now I believe it'll work well.
Comment on attachment 8930189 [details] Bug 1414390 - Add intl.locale.requested locale list to replace general.useragent.locale. C/C++ static analysis found 1 defect in this patch. You can run this analysis locally with: `./mach static-analysis check path/to/file.cpp` ::: intl/locale/LocaleService.cpp:988 (Diff revision 2) > - Preferences::ClearUser(SELECTED_LOCALE_PREF); > - } else { > - Preferences::SetCString(SELECTED_LOCALE_PREF, aRequested[0]); > + nsAutoCString locale(aRequested[i]); > + SanitizeForBCP47(locale); > + if (locale.EqualsLiteral("und")) { > + NS_ERROR("Invalid language tag provided to SetRequestedLocales!"); > + return NS_ERROR_INVALID_ARG; > + } else if (locale.EqualsLiteral("ja-JP-x-lvariant-mac")) { Warning: Do not use 'else' after 'return' [clang-tidy: readability-else-after-return]
Comment on attachment 8930189 [details] Bug 1414390 - Add intl.locale.requested locale list to replace general.useragent.locale. Seems reasonable to me. It would be nice to have some actual tests for how this works with multi-locale builds, etc., which I don't see here; but maybe we're not really set up to be able to do that at this point.
Attachment #8930189 - Flags: feedback?(jfkthame) → feedback+
> Seems reasonable to me. It would be nice to have some actual tests for how this works with multi-locale builds, etc., which I don't see here; but maybe we're not really set up to be able to do that at this point.

So, this patch doesn't directly influence our builds. It only liberates us to store multiple requested locales, rather than one. In bug 1410923 I started brainstorming ways to further extend this to the build system, and even with that, we're still not talking about multilocale as in `MOZ_CHROME_MULTILOCALE="de it" ./mach package` yet, just that if you do `./mach build installers-sr-RU` we can package more than just sr-RU at build time.

So, with this patch landing, all that changes is:

a) we will now be able to work on the build side to store a list of locales in `intl.locale.requested`
b) at runtime, LocaleService::SetRequestedLocales can set a fallback list

The longer lists are covered in the NegotiateLanguages tests, and in bug 1414899 I'll do more manual testing for scenarios such as langpack "fr", build with "it" and "en-US", which is the closest we can now get to multilocale builds. I also added a test where I call SetRequestedLocales with a longer chain and check that GetRequestedLocales returns it. Is that enough? I'll add more tests to verify that we remove bogus language tags (since `intl.locale.requested` is sanitized).
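The fallback-chain behavior being tested here can be illustrated with a simplified sketch. This is a hypothetical JavaScript illustration only, not the actual NegotiateLanguages algorithm, which additionally handles likely subtags, script and region matching, and negotiation strategies:

```javascript
// Simplified "filtering" negotiation: walk the requested chain and keep
// every available locale that matches exactly or by primary language subtag.
// Illustration only; the real LocaleService negotiation is more elaborate.
function negotiateLanguages(requested, available, defaultLocale) {
  const result = [];
  for (const req of requested) {
    for (const avail of available) {
      const exact = avail.toLowerCase() === req.toLowerCase();
      const sameLanguage =
        avail.toLowerCase().split("-")[0] === req.toLowerCase().split("-")[0];
      if ((exact || sameLanguage) && !result.includes(avail)) {
        result.push(avail);
      }
    }
  }
  if (!result.includes(defaultLocale)) {
    // Always terminate the chain with the default locale.
    result.push(defaultLocale);
  }
  return result;
}
```

Under this sketch, the "langpack fr, build with it and en-US" scenario mentioned above would resolve a request for `["fr"]` against `["fr", "it", "en-US"]` to `["fr", "en-US"]`.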
Comment on attachment 8930189 [details] Bug 1414390 - Add intl.locale.requested locale list to replace general.useragent.locale.

The whole jazz about -mac vs -x-lvariant-mac vs -macos needs more comments. ja-JP-macos is well-formed, but not valid; no idea if that's good or bad. The comments in the code should also use strict language here: "well-formed" or "valid", not "compliant" (or typos thereof ;-) ). Not sure if the code actually ends up using -macos at all? I don't see it in the code.

Generally speaking, the broad idea of using one pref instead of two, and removing the localized pref, is good. I wonder, could we even remove the pref for single-locale builds from browser/firefox-l10n.js and the mobile equivalent?
Attachment #8930189 - Flags: feedback?(l10n) → feedback+
Yes, we can remove them and end up with a three-state system:

- missing pref means "take default locale and add last fallback"
- empty pref means "take OS locales and add last fallback"
- filled pref means "parse the list, sanitize it, add last fallback"

Does that sound good? I'll document the hell out of the special case :)

Why do you say "ja-JP-macos" is not valid BCP47? It matches the [5-8] alphanum variant subtag just like "ja-JP-windows".
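The three-state behavior described here can be sketched roughly as follows. This is a hypothetical JavaScript illustration, not the actual LocaleService code; the helper names, the comma-separated pref format, and the `en-US` last fallback are assumptions:

```javascript
// Hypothetical sketch of the three-state resolution: missing pref,
// empty pref, and filled pref. Names and the "en-US" fallback are
// assumptions for illustration, not the real implementation.
const LAST_FALLBACK = "en-US";

function dedupe(list) {
  return [...new Set(list)];
}

// Minimal well-formedness check: language[-script][-region][-variants].
function isWellFormedLanguageTag(tag) {
  return /^[a-z]{2,3}(-[A-Z][a-z]{3})?(-(?:[A-Z]{2}|\d{3}))?(-(?:[a-z\d]{5,8}|\d[a-z\d]{3}))*$/.test(tag);
}

function getRequestedLocales(prefValue, defaultLocale, osLocales) {
  if (prefValue === undefined || prefValue === null) {
    // Missing pref: take the default locale and add the last fallback.
    return dedupe([defaultLocale, LAST_FALLBACK]);
  }
  if (prefValue.trim() === "") {
    // Empty pref: take the OS locales and add the last fallback.
    return dedupe([...osLocales, LAST_FALLBACK]);
  }
  // Filled pref: parse the list, drop bogus tags, add the last fallback.
  const requested = prefValue
    .split(",")
    .map(s => s.trim())
    .filter(isWellFormedLanguageTag);
  return dedupe([...requested, LAST_FALLBACK]);
}
```

Note how the filled-pref branch also covers the "junk entered in about:config" concern: malformed tags are simply dropped during sanitization, leaving at least the last fallback in the chain.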
Per:

> A tag is considered "valid" if it satisfies these conditions:
> o The tag is well-formed...

Given that "macos" isn't in the IANA Language Subtag Registry, ja-JP-macos is not valid. (Nor is ja-JP-windows.)
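The well-formed vs valid distinction being made here can be shown in a small sketch. The regex encodes the variant-subtag grammar (5-8 alphanumerics, or a digit plus 3 alphanumerics), while validity additionally requires registry membership; the registry subset below is a tiny illustrative sample, not the real IANA Language Subtag Registry:

```javascript
// Sketch of "well-formed" vs "valid" for variant subtags.
// REGISTERED_VARIANTS is an illustrative sample of the IANA registry.
const REGISTERED_VARIANTS = new Set(["1901", "valencia", "fonipa"]);

// Well-formed: 5-8 alphanumerics, or a digit followed by 3 alphanumerics.
function isWellFormedVariant(subtag) {
  return /^(?:[a-z0-9]{5,8}|[0-9][a-z0-9]{3})$/i.test(subtag);
}

// Valid: well-formed AND present in the registry.
function isValidVariant(subtag) {
  return isWellFormedVariant(subtag) &&
         REGISTERED_VARIANTS.has(subtag.toLowerCase());
}
```

Under this sketch, `macos` and `windows` are well-formed but not valid, while `mac` (only 3 letters) is not even well-formed, which is why the older tag needed the `-x-lvariant-mac` private-use escape.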
Thanks, Axel! OK, the patch is ready and I switched to using "well-formed" :) I also added a bit of code that I'd like to keep through the 59 cycle that migrates the old-style prefs to the new ones.
Notice: I am not removing the mobile/android/locales/mobile-l10n.js in this patch to keep it smaller. The file just becomes empty and we can remove it in a separate bug since it touches build system more than anything else.
Comment on attachment 8930189 [details] Bug 1414390 - Add intl.locale.requested locale list to replace general.useragent.locale. ::: intl/locale/LocaleService.cpp:78 (Diff revision 6) > - nsAutoCString locale; > - > - // First, we'll try to check if the user has `matchOS` pref selected > + // Temporary section to transition old style requested locales preferences > + // to the new model. > + // XXX: To be removed in Firefox 60 It seems sad to have C++ code for this; can you do it in the `_migrateUI` method of BrowserGlue? ::: intl/locale/LocaleService.cpp:84 (Diff revision 6) > - // If he has, we'll pick the locale from the system > - if (OSPreferences::GetInstance()->GetSystemLocales(aRetVal)) { > - // If we succeeded, return. > - return true; > + nsAutoCString requestedValue; > + if (Preferences::GetBool(MATCH_OS_LOCALE_PREF)) { > + requestedValue.AssignLiteral(""); > + } else { > + nsAutoCString str; > + if (NS_SUCCEEDED(Preferences::GetCString(SELECTED_LOCALE_PREF, str))) { > + if (SanitizeForBCP47(str, true)) { > + requestedValue.Assign(str); > + } > - } > + } > - } > + } > > - // Otherwise, we'll try to get the requested locale from the prefs. > - if (!NS_SUCCEEDED(Preferences::GetCString(SELECTED_LOCALE_PREF, locale))) { > - return false; > + Preferences::ClearUser(MATCH_OS_LOCALE_PREF); > + Preferences::ClearUser(SELECTED_LOCALE_PREF); > + Preferences::SetCString(REQUESTED_LOCALES_PREF, requestedValue); > } I'm seeing some crazy-looking indentation here -- please de-tab! (And tweak your editor configuration so it never inserts hard tab characters?) ::: intl/locale/LocaleService.cpp:86 (Diff revision 6) > - bool matchOSLocale = Preferences::GetBool(MATCH_OS_LOCALE_PREF); > - > - if (matchOSLocale) { > - // If he has, we'll pick the locale from the system > - if (OSPreferences::GetInstance()->GetSystemLocales(aRetVal)) { > - // If we succeeded, return. 
> + if (Preferences::HasUserValue(MATCH_OS_LOCALE_PREF) || > + Preferences::HasUserValue(SELECTED_LOCALE_PREF)) { > + > + nsAutoCString requestedValue; > + if (Preferences::GetBool(MATCH_OS_LOCALE_PREF)) { > + requestedValue.AssignLiteral(""); No need for this assignment, `requestedValue` will start out as an empty string anyway. ::: intl/locale/LocaleService.cpp:88 (Diff revision 6) > + nsAutoCString str; > + if (NS_SUCCEEDED(Preferences::GetCString(SELECTED_LOCALE_PREF, str))) { > + if (SanitizeForBCP47(str, true)) { > + requestedValue.Assign(str); Using a separate string here seems like excessive copying. How about just reading the pref directly into your `requestedValue` var, and then if Sanitize says it fails, truncate it back to empty?
Moved the conversion to the nsBrowserGlue and that fixed all the following issues :)
Comment on attachment 8930189 [details] Bug 1414390 - Add intl.locale.requested locale list to replace general.useragent.locale. LGTM, though maybe you should also have a more front-end-ish person sign off on the nsBrowserGlue version of the migration code, as I don't really speak that language. :\
Attachment #8930189 - Flags: review?(jfkthame) → review+
Comment on attachment 8930189 [details] Bug 1414390 - Add intl.locale.requested locale list to replace general.useragent.locale. Dave, can you review the migration code pls? (I tested it manually as well)
Attachment #8930189 - Flags: review?(dtownsend)
Comment on attachment 8930189 [details] Bug 1414390 - Add intl.locale.requested locale list to replace general.useragent.locale. ::: browser/components/nsBrowserGlue.js:2230 (Diff revision 7) > + } catch (e) { /* Don't panic if the value is not a valid locale code. */ } > + } > + } > + Services.prefs.clearUserPref(SELECTED_LOCALE_PREF); > + Services.prefs.clearUserPref(MATCHOS_LOCALE_PREF); > + } Should probably notify the Thunderbird/Seamonkey folks that they need to implement something like this too.
Attachment #8930189 - Flags: review?(dtownsend) → review+
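The one-time migration reviewed above can be sketched as follows. This is a hypothetical standalone illustration using a plain object in place of Services.prefs; the pref names come from the thread, but the function name, the sanitization regex, and the exact control flow are assumptions, not the shipped nsBrowserGlue code:

```javascript
// Sketch of the old-pref to new-pref migration: clear the two old prefs
// and write intl.locale.requested. A plain object stands in for
// Services.prefs; names and regex are illustrative assumptions.
const MATCHOS_LOCALE_PREF = "intl.locale.matchOS";
const SELECTED_LOCALE_PREF = "general.useragent.locale";
const REQUESTED_LOCALES_PREF = "intl.locale.requested";

function migrateLocalePrefs(prefs) {
  const hasOld = MATCHOS_LOCALE_PREF in prefs || SELECTED_LOCALE_PREF in prefs;
  if (!hasOld) {
    return prefs; // Nothing to migrate.
  }
  // Empty string means "use OS locales" in the new model.
  let requested = "";
  if (!prefs[MATCHOS_LOCALE_PREF]) {
    // matchOS unset or false: carry over the old selected locale if it
    // looks like a plausible language tag; otherwise leave it empty.
    const old = prefs[SELECTED_LOCALE_PREF];
    if (typeof old === "string" &&
        /^[a-zA-Z]{2,3}(-[a-zA-Z0-9]{2,8})*$/.test(old)) {
      requested = old;
    }
  }
  delete prefs[MATCHOS_LOCALE_PREF];
  delete prefs[SELECTED_LOCALE_PREF];
  prefs[REQUESTED_LOCALES_PREF] = requested;
  return prefs;
}
```

Thunderbird and SeaMonkey would need an equivalent of this in their own startup glue, which is the point Dave raises in the review.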
Thanks Dave! Jorg - I'm testing the platform with the patch, but I would like to land it this week to give it proper time to bake before merge day. Let me know if you want me to delay the landing until the migration code lands in Thunderbird/SM.
Flags: needinfo?(jorgk)
Merge day is in January. So what do we need to add, the migration code? That doesn't appear too hard.
Flags: needinfo?(jorgk)
Yeah, just the migration code. OK, then I'll land it as soon as I'm done with the testing scenarios, and you can land the migration code right after that. Sounds good?
Yep. But nothing severe happens without the migration code, right?
No, just that users who are using a langpack get reset to their packaged locale and need to select the langpack locale again (using the new `intl.locale.requested` pref). Since langpacks are not really usable on Nightly, it shouldn't affect the Nightly population.
Pushed by zbraniecki@mozilla.com: Add intl.locale.requested locale list to replace general.useragent.locale. r=jfkthame,mossop
Status: ASSIGNED → RESOLVED
Closed: 3 years ago
status-firefox59: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla59
Hi folks, thanks for making this happen. One thought about merging this into TB + SM: could you please fix the "intl.locale.matchOS" stuff on them too? Thank you. Nick
Since it seems there's no migration from profiles with general.useragent.locale set (I've had users losing their UI language on 59.0, and I experienced it in 60.0b3), shouldn't this have been mentioned in the relnotes? Or does something special have to be done for distributors with a distribution.ini file where langpack XPIs are installed systemwide separately, like setting intl.locale.requested=""?
(In reply to Landry Breuil (:gaston) from comment #36)
> Since it seems there's no migration from profiles with general.useragent.locale set (i've had users losing their ui language on 59.0 and i experienced it in 60.0b3), shouldnt this have been mentioned in the relnotes ?

I'm starting to think this would be a good idea, even if it's quite late. The SUMO article has been updated in the meantime.
I've fixed general.useragent.locale in distribution.ini for Firefox 60 -. Should I handle intl.locale.matchOS as well?
Is this part of distribution.ini? I don't know what the scope is there. We do the same migration for that as we do for `general.useragent.locale` -
Yes, there are Linux (and other) distros that set intl.locale.matchOS in distribution.ini. So the browser glue code wouldn't migrate it.
Ugh. I would love it if any distribution-style code used the LocaleService API rather than setting prefs around, since that approach seems inherently flawed. Sure, I didn't come across any bug reports about it from Linux distros, so maybe they did beta-test? Idk.
https://bugzilla.mozilla.org/show_bug.cgi?id=1414390
RationalWiki:Saloon bar/Archive46
Contents
- 1 I saw a sign
- 2 Twilight
- 3 RationalWiki Awards
- 4 WARNING: Dangerous Reptiles!
- 5 Tool using octopus
- 6 Anyone got good OCR software?
- 7 Brief rant
- 8 Public Image Ltd
- 9 A disturbing Urban Legend is apparently confirmed.
- 10 Japan
- 11 Words
- 12 Sad to see people dumped on Avatar here
- 13 merry holiday!!
- 14 NORAD tracks Santa
- 15 I will be go to hell
- 16 3AM 25 Dec 2009
- 17 National Geographic
- 18 We have another Republican to hit golf balls at
- 19 Festive logo
- 20 Must we vote on everything?
- 21 Six days left and I'm still uncomfortable calling them the "ohs", "aughts" or "noughties". How about you?
- 22 Corporate/political Christmas cards
- 23 Yule goat
- 24 How did we fare out?
- 25 Pussy power
- 26 Let's fucking condemn this place!
- 27 The "Cloward-Piven strategy"--anyone familiar with this?
- 28 'Tis the season....
- 29 What the fuck happened?
- 30 NephilimFree
- 31 Found on FoxNews.com comment section.
- 32 Back
- 33 Anyone read/heard of this dreadful sounding thing?
- 34 Brown like that
- 35 Tea partiers
- 36 Positive Woo?
- 37 Request - Acrobat
- 38 Wikileaks down
- 39 Roman Tax Collection
- 40 15 Most Heinous Climate Villains
- 41 New Years hurrah
- 42 Happy New Year
- 43 Avatar Rehash
- 44 What Darwin Never Knew: A Creationist's Nightmare
- 45 I now hate Disney
- 46 Hovind's Thesis Redux
- 47 God damn (rant below)!
- 48 Wikipedia has an SPOV, too!
- 49 Farewell internet explorer!
- 50 Ethical dilemma - please vote/opine
- 51 Who is More Admired?
- 52 Battle Royale
- 53 Ben Stein Commits the Greatest Sin Imaginable for a Republican...
- 54 Dan Brown rehash
- 55 Opportunity knocking
- 56 Greatest Liberal Films of all time
- 57 A fun story
- 58 PSU
I saw a sign[edit]
I had to take a leak at the grocery today and they had a hilarious (though unintentionally so) sign in the bathroom. It said, "Please flush only toilet paper down the toilet". I ended up wondering what to do with my shit.--Thanatos (talk) 03:53, 19 December 2009 (UTC)
- Roll it up into little balls... ħuman 04:27, 19 December 2009 (UTC)
- The Bible offers a tip. Ezekiel 4:12 "And thou shalt eat it as barley cakes, and thou shalt bake it with dung that cometh out of man, in their sight." Yum! --Ask me about your mother 16:32, 19 December 2009 (UTC)
- Had it been me, I've just left it sitting in the toilet, but that' not a funny solution, is it? Gooniepunk2010 Oi! Oi! Oi! 13:45, 20 December 2009 (UTC)
- There's always the upper deck... Corry (talk) 03:46, 22 December 2009 (UTC)
- In a grocery store? Shit, man, I already worry about what's on their employees' hands. Tetronian you're clueless 03:49, 22 December 2009 (UTC)
Twilight[edit]
I just read this series. Absolutely terrible in every way: technically, dramatically, and ethically. I cannot believe parents let their kids read these, much less encourage it.--Tom Moorefiat justitia 01:35, 20 December 2009 (UTC)
- "Ethically"? ListenerXTalkerX 02:31, 20 December 2009 (UTC)
- I quote myself when I say that "Bella needs a man in her life or else she can't function. She may have inherited this from her mother, who on the first page of the novel is depicted as being completely incapable, but who will be okay now that she is remarried to her second husband. In the same way, Bella's life completely revolves around having a man in her life, and she can't exist otherwise." You can read the linked post for a bit more discussion.--Tom Moorefiat justitia 02:41, 20 December 2009 (UTC)
- Ah, I see. The books flout the party line about women being "independent," celibate, etc. As to your criticism of the writing, channel Harold Bloom on Harry Potter much? ListenerXTalkerX 02:50, 20 December 2009 (UTC)
- Wait, you forgot to blame the reds! Or is "party line" code for "commies"? ħuman 03:09, 20 December 2009 (UTC)
- I don't think I ever read a Bloom critique of Rowling. I actually thought that Rowling was fairly technically adept. I do think she overuses modifiers; no one can ever gaze, they're always calmly gazing or feverishly gazing. But she in any case Rowling is head and shoulders above Meyers.--Tom Moorefiat justitia
- Here is the article in question. And her name is rendered Stephenie Meyer, not Stephanie Meyers; I might write articles in tiresome narrative form, but at least I know how to spell. ListenerXTalkerX 03:16, 20 December 2009 (UTC)
- Hehe, Bloom, he's such a pedantic troll. I bought one of his books once because I needed a doorstop. Yes, he's a well-read, intelligent man. He's also an asshole and a sexist. ħuman 03:32, 20 December 2009 (UTC)
- You're right Listener, I misspelled her name above, although I don't know why you have to be a jackass to me about it. I didn't say anything about things you've written. I think you're confused, by the way: that article is not a critique of Rowling.--Tom Moorefiat justitia 03:34, 20 December 2009 (UTC)
- I do not think much of Prof. Bloom either. The Boston Globe article was not focused on Ms. Rowling, but insofar as it did mention her it contained complaints about her writing in the same vein as above.
- TomMoore, if you are going to beat Ms. Meyer over the head for poor writing skills, I put that you have lost all license to ask that you not be criticized for any shortcomings in that area. And note also that in my remarks about my own writing skills, or lack thereof, I am quoting you directly. ListenerXTalkerX 03:58, 20 December 2009 (UTC)
- I didn't ask you not criticize me. I just think you're being a jackass about a single error. I did not consistently call her by the wrong name. And even if I had, such jackassery is uncalled for.
- It seems, though, that I hurt your feelings in the past. I'm sorry for that. I was probably too harsh; I get carried away easily. Please accept my apology.
- So you think Meyer is a good writer and the series is well-done?--Tom Moorefiat justitia 04:05, 20 December 2009 (UTC)
- I have not read it and cannot comment, but the phrase "one-dimensional characters" is used so often with regard to fantasy (including classics in the field) that it has almost become a cliche.
- As a grammar-Nazi I welcome all sorts of criticism of my writing, and do not get hurt feelings about it, so there is no need to apologize. ListenerXTalkerX 04:18, 20 December 2009 (UTC)
- I think part of the problem is he was talking about Meyers' ability to write effective fiction, and you reply by attacking a spelling mistake. Hardly one-for-one. --Kels (talk) 04:21, 20 December 2009 (UTC)
- Now, where have I seen that tactic before? Lily Inspirate me. 10:06, 20 December 2009 (UTC)
- You dawg, I heard you like Bloom so I put Bloom in your Twilight so you could read while you read. Bitches. --User:Theautocrat/Sig 04:39, 20 December 2009 (UTC)
- I've never read it, but from what I can gather, it's pretty much on par (or below) Ann Rice's trick of "oh, it's so lonely being a vampire. I'm so lonely. Woe. Woe... WWWWWWOE, I tell you!!!!!!!", which I'm sure appeals to some people, particularly those fond of teenage wangst. sshole 09:59, 20 December 2009 (UTC)
Twilight sucks, and so do Twilight fans. Need I say more? AnarchoGoon Swatting Assflys is how I earn my living 13:55, 20 December 2009 (UTC)
- My Life Is Twilight.--Tom Moorefiat justitia 14:10, 20 December 2009 (UTC)
- *shudder* Tetronian you're clueless 14:14, 20 December 2009 (UTC)
- Lots of things suck to those who do not like them. Although we may disagree, people have a right to their own preferences no matter how banal or repulsive we may find them. I can't say that I ever cared for punk. It's when people become so obsessive about a book, film or band so that it takes over their life that I find it odd. Lily Inspirate me. 14:46, 20 December 2009 (UTC)
- Kevin Smith knows where it's at. -- YossarianThe Man from the USSR 08:19, 22 December 2009 (UTC)
RationalWiki Awards[edit]
RWW is considering closing the voting soon, so get your votes in while you still care can. Totnesmartin (talk) 12:04, 20 December 2009 (UTC)
- These awards you speak of - where are they again? Lost the link. DogPMarmite Patrol 14:41, 20 December 2009 (UTC)
- yer 'tiz, buyh. Totnesmartin (talk) 16:18, 20 December 2009 (UTC)
- I'll have to start being more of an asshat so I stand a chance of winning one next year. sshole 19:12, 21 December 2009 (UTC)
- I'm not winning the Most authoritarian category? What's wrong with you people? -- Nx / talk 19:19, 21 December 2009 (UTC)
- But you are winning "technobrat", which I'd take as an "honourable mention" in the authroitarian category. sshole 19:22, 21 December 2009 (UTC)
- It would seem, Nx, that while a few outspoken people accuse you of this, the vast (nearly silent) majority do not see you in this light. Rejoice! (And can you put the hat back.)--BobNot Jim 20:15, 21 December 2009 (UTC)
- I've been toying with the idea of adapting the assfly thing in to an angry feedback generator. It might you help you next year Nx. In the meantime, I suggest you slam your nuts in a drawer before replying to anything. Works wonders for authority, but If I were you I'd have a some kids before you try that. --Ask me about your mother 20:22, 21 December 2009 (UTC)
WARNING: Dangerous Reptiles![edit]
So, I have owned a pet red-eared slider turtle for about 15 years now, and have never been quite positive of its gender. Therefore, I named it Shelly, so that the name could be somewhat gender-neutral and turtle-related. 6 years ago, I met a lady at the local herpetological society who has been taking care of reptiles for 30 years, and she told me that, because of my turtle's long claws, she was "99 percent certain" that my turtle was a male. Therefore, I changed my turtle's name to Sheldon instead. Well, I had put my turtle in its feeding tank last night and, 6 hours later, came back to find that my turtle, who I thought was a male, had laid eggs in its feeding tank. Ergo, I re-named it Shelly again, and now have a funny reptile story to tell people. AnarchoGoon Swatting Assflys is how I earn my living 02:52, 21 December 2009 (UTC)
- Cool, and some breakfast for you too. CrundyTalk nerdy to me 10:05, 21 December 2009 (UTC)
- A few years ago I had a turtle calamity where the turtles that live in the lake in my backyard apparently went wild and decided to plant their eggs in various parts of my freshly tilled garden. Turtles, when provoked, can cause a surprising amount of damage, albeit in a very limited vertical swath. Ever since that day I've sworn a vendetta against turtles. It's a low-key vendetta and I haven't really done anything about it yet, but it is a vendetta nonetheless. One day, somehow, I'll exact my revenge on turtle-kind, Sheldon or Shelly or whatever be damned! Me!Sheesh!Mine! 21:02, 21 December 2009 (UTC)
- My grandmother's "boyfriend" (I dunno, live-in partner?) had three tortoises, 2 male one female. The female was more than 100 years old, and suddenly one year she laid eggs (after getting raped by the other two for years, after all, they can't run that fast) which hatched, and then laid more eggs the next year which also hatched. Made the local papers (something about a 100-year old giving birth, with a pic of my, erm, Step-grandad?). CrundyTalk nerdy to me 21:44, 21 December 2009 (UTC)
Tool using octopus[edit]
For those who are into such things, this is a video of an octopus apparently using a tool.--BobNot Jim 13:20, 21 December 2009 (UTC)
- Beats the hell out of cow tools, don't it? --Kels (talk) 18:44, 21 December 2009 (UTC)
- I think dolphins also use shells and coral as digging tools IIRC. sshole 19:08, 21 December 2009 (UTC)
- It seems like just yesterday that I saw some dolphins on TV, using a piece of seaweed as a toy. Sprocket J Cogswell (talk) 19:35, 21 December 2009 (UTC)
- My cats use my wife and I as tools for opening cans. 20:53, 21 December 2009 (UTC)
- Shouldn't that be "my wife and me"? Professor Moriarty 21:01, 21 December 2009 (UTC)
- Yes. ListenerXTalkerX 21:02, 21 December 2009 (UTC)
- Seagulls use gravity and big rocks to smash open crabs. Messy but effective. ħuman 21:38, 21 December 2009 (UTC)
- Kels, nothing beats cow tools. -- YossarianThe Man from the USSR 07:53, 22 December 2009 (UTC)
Anyone got good OCR software?[edit]
I was thinking maybe we should OCR the scanned version of Hovind's Dissertation to have an annotated side-by-side (by chapters/paragraphs, of course; it's 100+ pages of stuff and I can't expect anyone would have the time to type it all up) somewhere. ThiehZOYG I edit like Kenservative! 12:07, 22 December 2009 (UTC)
Brief rant[edit]
So I had a patient today who was told by her school nurse that diet soft drinks have more sugar than regular soft drinks, and that the manufacturers lie on the nutrition label. I told her that her school nurse is wrong and that she should tell her as much. It amazes me how people that are supposed to be medical professionals can be so ignorant. With so many people being obese, anybody in preventative medicine should be knowledgeable about diet and nutrition and not spread around stupid bullshit like this. Idiot. (the school nurse, not my patient) Corry (talk) 04:37, 22 December 2009 (UTC)
- While I'm skeptical of the claim, I have heard from more than one person that diet sodas are counterproductive in achieving weight-loss; as much or more so than regular sodas. This is of course not the same as having more sugar (which my diabetic friend, if nothing else, could quite easily contradict in his insulin measurements), but that might perhaps be what your patient was referring to, mixing up what the nurse actually said. (On a side note, the fattest guy I know drinks diet soda by the truckload, but that's easily one of those chicken/egg things.) I'll see what I can find about that claim, at least. DickTurpis (talk) 04:45, 22 December 2009 (UTC)
- Fair point, and we talked about the fact that some studies show that. But she had the notion that the manufacturers publish false info on the nutrition labels. This was a pretty smart girl. Corry (talk) 04:49, 22 December 2009 (UTC)
- If the soda companies were printing false information on their nutrition labels, that would be a crime, and it could be proven, I'd suspect, with just a few minutes in a chemistry lab. It is not something that could be easily concealed, and with the number of food watchdog groups out there, they couldn't keep it up for long. Not to mention the numerous lawsuits from diabetics if they were caught lying about sugar content.
- But then, as for medical professionals giving bad information, my partner was told by the surgeon who removed his gall bladder that broccoli was loaded with cholesterol, when, in fact, broccoli can help reduce cholesterol. (The surgeon was an asshole, too, but that has nothing to do with the bad information.)
- As for overweight people drinking diet soda, I think part of it is a syndrome that could best be summed up by example: "I'll have a Big Mac, a double Quarter Pounder with Cheese, a super-size fries, a twenty piece Chicken McNuggets... and a Diet Coke." People manage to convince themselves that being "good" for one part of the meal makes up for all the "bad" parts. And while I was never a soda drinker, as someone who lost a helluva lot of weight, I know that thought process in general. MDB (talk) 12:05, 22 December 2009 (UTC)
- Yes, it'd be a major crime to outright lie on labels. Being slightly "flexible" with the truth and how you present it, however, isn't so much a crime and is usually judged on a case-by-case basis by Advertising Standards or whatever (the US, if I remember rightly, has far more relaxed laws on this; in Europe they will take your balls for lying to customers, in America it's considered just playing the game). Whether "diet" drinks are actually better for you is a bit of an issue, of course. High sugar content isn't good for you, but the synthetic substitutes designed to have a lower calorific value may also do other harm. "Diet soft drinks have more sugar than regular soft drinks" and "manufacturers lie on the nutrition label" is outright wrong, however. Did the same nurse then try to sell them some vitamin pills or "natural" remedies to go with that advice? sshole 15:44, 22 December 2009 (UTC)
- I'm appalled by the slanderous statements about broccoli. But seriously, how could a doctor think that? It just doesn't make sense: cholesterol isn't found at high levels in anything but animals. And even if this guy has avoided any courses on nutrition or plant biology, why would anyone think that green vegetables, of all things, were high in cholesterol? Broccoli (talk) 17:49, 22 December 2009 (UTC)
Public Image Ltd[edit]
I saw 'em last night at Brixton Academy. A mighty good gig I might add. I took a few phone pics of John Lydon - they came out shit but I will upload them when I get bluetooth working again. SJ Debaser 16:38, 22 December 2009 (UTC)
- You really don't need to go to so much trouble. Honestly, we'll take your word for it. ГенгисRationalWiki GOLD member 19:18, 22 December 2009 (UTC)
I envy you, you crazy bastard! While these guys may not want an upload, I sure as hell do! The Goonie 1 What's this button do? Uh oh.... 02:54, 23 December 2009 (UTC)
- Sounds good to me. I'd be interested. Most of the photos I take at shows are for shit, especially if being jostled around in the pit. Aboriginal Noise with 4 M's and a silent Q 03:29, 23 December 2009 (UTC)
A disturbing Urban Legend is apparently confirmed.[edit]
This is more than a little disconcerting. Ryantherebel (talk) 16:22, 23 December 2009 (UTC)
Japan[edit]
Just watching NHK telly: midwinter (christmas) program; presenters: Guy in thick coat, girl in micro mini skirt & stockings (admittedly probably quite thick). It's the same the whole world over! I am eating & honeychat 20:56, 23 December 2009 (UTC)
Words[edit]
Repetition of a word can make it totally nonsense: I now can't believe that "collapse" is a real word after following Nx round his edits. (Just sayin') I am eating & honeychat 02:25, 22 December 2009 (UTC)
- Ha ha, I was thinking the same thing except for the word "Forum". Tetronian you're clueless 03:18, 22 December 2009 (UTC)
- So it's not just me that gets into an existential crisis by repeating a word enough times so that it loses all meaning. Read through some of the stuff on Arsebook and it happens with the word "friend" very quickly. Does anyone know why this happens?!? sshole 15:46, 22 December 2009 (UTC)
- Sometimes I will look at a commonplace word and think how weird it is; so I do get where you're coming from, Toast. ГенгисRationalWiki GOLD member 19:21, 22 December 2009 (UTC)
- I have no idea why. Maybe Trent knows....? Tetronian you're clueless 19:23, 22 December 2009 (UTC)
- I'm relieved that it's not just me doing this. I go through phases where the spelling of a word seems totally odd, and I find it difficult to believe that the spelling is correct, even though I know it is. --Ask me about your mother 19:27, 22 December 2009 (UTC)
- May the gods bless the almighty Google and Wikipedia. The phenomenon is called Semantic satiation. Not a brilliant WP article, but at least we now know what it's called. sshole 23:08, 22 December 2009 (UTC)
- What did you google to find that for drake's sake? I am eating & honeychat 23:14, 22 December 2009 (UTC)
- "repeating words loses meaning" or something like that. I find that, with the Internet, the question you ask has probably been asked before. So even if you don't know the specific words or wording, just stick in what you know and have a quick look. 95% of the time it works fine like that. sshole 00:06, 23 December 2009 (UTC)
- Tell me more about this 'Internet' of which you speak - it sounds like a most useful item. Can I get an Internet shipped to me from a catalog store? DogPMarmite Patrol 01:21, 25 December 2009 (UTC)
Sad to see people dumped on Avatar here[edit]
I thought it rocked. Saw it in 3D with the glasses. And TBH looking at the trailer I thought "this mocap cgi will never work, this is blue Jar Jar Binks" and I walked out of the theater thinking the CGI alien performances were more expressive and believable than live action. Despite a below average plot and a well below average script, this movie is... something amazing. Not a movie, an experience? A visceral visual experience. Something like that :) See it. WodewickWelease Wodewick! 09:22, 23 December 2009 (UTC)
- I don't think anyone's criticizing its visual effects. They're criticizing the fact that it sacrifices things like story and theme as a result of focusing on special effects too much. Tetronian you're clueless 13:44, 23 December 2009 (UTC)
- I've not seen it yet, but the impression I got from the trailer was that the entire plot was already in the trailer, like a below-average chick flick, and that the characterisation is all in 2D, despite the rest of the film being 3D. For my winter blockbuster "experience" I think I'm more interested in Sherlock Holmes, as it just looks more fun than Avatar. User:Bondurant 14:47, 23 December 2009 (UTC)
- So the question here is, was it a pretty spectacle or a good movie? "Despite a below average plot and a well below average script..." isn't exactly a ringing endorsement of the latter. --Kels (talk) 17:00, 23 December 2009 (UTC)
- Most Hollywood studio films do not have great stories or great scripts. Most movies are carried by acting. My only concern about Avatar was that animating everything in CG would destroy the acting element as it has for so many CG films thanks to poor animation and/or the uncanny valley - especially since it was trying to be the first photorealistic-CGI-protagonists film. That concern was obviated by the movie itself. "The trailer is the movie" movies don't have to be bad, after all recent examples of these movies include Titanic, The Dark Knight, Up, Pirates OTC, etc.
- The Sherlock Holmes movie looks like a crapsack action movie that someone, at some point in the script rewrite process, decided to emboss with the Sherlock Holmes brand. I didn't get the slightest sense that RDJ was Holmes or that the movie was taking place in the 1890s. WodewickWelease Wodewick! 22:21, 23 December 2009 (UTC)
merry holiday!![edit]
Xmas comes early in my part of the world..Aceof Spades 11:52, 24 December 2009 (UTC)
- Only two bottles (and a possible third hiding back there on the right), one of which is a Corona? You're slipping, man. --Kels (talk) 16:59, 24 December 2009 (UTC)
Speaking of the season... --Kels (talk) 03:37, 25 December 2009 (UTC)
- Happy hols eveyone. I'm about to tuck into an awesome looking turkey cooked by the sister & brother in law. Oh, and lager, of course. CrundyTalk nerdy to me 12:28, 25 December 2009 (UTC)
NORAD tracks Santa[edit]
I find this hilarious. Not sure why. Tetronian you're clueless 13:31, 24 December 2009 (UTC)
- They've been doing that for a few years now, I think. Do you think Google will do their own? It's right up their street on holiday pranks and "easter eggs". sshole 16:42, 24 December 2009 (UTC)
- Ah, of course. The NORAD one also has a Google Earth plug in, they seem to be way ahead of me! sshole 16:43, 24 December 2009 (UTC)
- I'm going to groom my kids into being Santa apologists, equipped with evidence like this and a bottomless sack of rationalizations. Then I'm going to wait and see if and how they reconcile their beliefs with reality. Then I'm going to hold it over their heads for the rest of their lives. "Remember when I had you believing in Santa Claus until you were almost 18? Yeah, that's right, YOU'RE MY BITCH. Don't ever forget that. Love you."— Sincerely, Neveruse / Talk / Block 16:53, 24 December 2009 (UTC)
I will be go to hell[edit]
Apparently Jesus and Santa have something in common after all! --Kels (talk) 20:28, 24 December 2009 (UTC)
3AM 25 Dec 2009[edit]
Have a nice day, everyone. I am eating & honeychat 02:58, 25 December 2009 (UTC)
- 4:02pm 25 Dec 2009 here. Drinking a beer sent from Crundy and feeling pretty pleased by the whole shebang. Aceof Spades 03:03, 25 December 2009 (UTC)
- 10:04pm 24 Dec 2009. Hoping the freezing rain and icy roads scheduled for tomorrow morning doesn't keep me from making my rounds. Aboriginal Noise with 4 M's and a silent Q 03:07, 25 December 2009 (UTC)
National Geographic[edit]
PalMD might get into print now that ScienceBlogs & NG have teamed up. just the one link: it's all over, over there I am eating & honeychat
We have another Republican to hit golf balls at[edit]
I just created Carly Fiorina. Feel free to add crazy things she's said, some classic fuckups she did at HP, or anything else of interest.
Cheers, The Wine of TyrantsDrunk with power again!09:21, 26 December 2009 (UTC)
Festive logo[edit]
Extensive discussion moved to Forum:Festive_logo- π 09:21, 26 December 2009 (UTC)
Must we vote on everything?[edit]
Moved to Forum:Must we vote on everything?- π 09:21, 26 December 2009 (UTC)
Six days left and I'm still uncomfortable calling them the "ohs", "aughts" or "noughties". How about you?[edit]
Civic Cat (talk) 17:16, 24 December 2009 (UTC)
- Someday we will look fondly back on the times we had in the Year Nine. If it was good enough for Jack Aubrey, that should suffice. Sprocket J Cogswell (talk) 17:19, 24 December 2009 (UTC)
- I look fondly back at Au Revoir Simone here. Sounds so much like OMD.
:-D
Civic Cat (talk) 18:03, 24 December 2009 (UTC)
- "Naughties" works for me. ħuman 00:00, 26 December 2009 (UTC)
- I'm undecided, it works, but it's crap. I did hear "tennies" said on TV a couple of days ago, perhaps the start of something semi-official, but the guy did cringe with self-deprecating irony as he said it. What the fuck did people do this time 100 years ago? And imagine the issues with doing it in base 16... sshole 00:36, 26 December 2009 (UTC)
- The first decade of any century tends to get conveniently forgotten about, at least partly because it's so awkward to refer to. Nobody talks about the "nineteen-noughties". If they need to talk about that decade at all, they usually say "early 1900s" or "turn-of-the-century era" or some such. In time, people will find some similar phrases for this decade. WèàšèìòìďMethinks it is a Weasel 00:42, 26 December 2009 (UTC)
- IMHO "aughts", "noughties", and "ohs" are all awful names. We'll probably just end up calling it the "first decade" or some similar cop-out. Tetronian you're clueless 04:02, 26 December 2009 (UTC)
Corporate/political Christmas cards[edit]
This truly appalling Christmas e-card from the UK Border Agency was circulated at work recently, to great hilarity. (I don't work for the UKBA, but my employer is an immigration sponsor; apparently the UKBA sent this out to a lot of organisations). The idea of putting serious government PR in the form of a greeting card is bizarre, creepy and just plain ridiculous. My favourite part is "we are getting stricter on those who don't play by the rules", just a couple of lines up from "Seasons Greetings and a Happy 2010". So, did anyone else get any horrible workplace &/or propaganda Christmas greetings they'd like to share? WèàšèìòìďMethinks it is a Weasel 18:30, 25 December 2009 (UTC)
- Just their way of telling Mary and Joseph there's no room at the inn. They'll just have to stay in some makeshift hovel in France. Auld Nick (talk) 18:43, 25 December 2009 (UTC)
- Wow, that card is just plain bizarre. No, I've never seen anything remotely that stupid. I don't get as many as I used to, but I used to get xmas cards from numerous suppliers, and all they ever were was nice greeting/xmas/holiday wished cards. ħuman 20:55, 25 December 2009 (UTC)
- Oh, and I used to get xmas cards "from" Al Gore & family, and they were just nice cards, too. ħuman 20:56, 25 December 2009 (UTC)
- That's pretty f****d up... I got a "corporate" card from the uni I applied to for next year, but it was just the usual message and pretty nice really. UKBA is just very surreal and vaguely fascist in everything it does, IMHO. -- 23:30, 25 December 2009 (UTC)
Yule goat[edit]
Now my Christmas is complete. Tetronian you're clueless 05:12, 26 December 2009 (UTC)
- I love it - goats and vandalism all rolled up into one famous best of the public effort! — Unsigned, by: human / talk / contribs 05:23, 26 December 2009 (UTC)
How did we fare out?[edit]
Just curious about what some of us got stuffed in our stockings. Right now I am alternating between Assassin's Creed 2 and Koushun Takami's Battle Royale--Thanatos (talk) 03:04, 26 December 2009 (UTC)
- Well, most of the people around me either don't do Xmas or are too poor to do Xmas, so the only physical presents I got were from my mom who insists. The usual mostly, a blouse that doesn't fit, art supplies I don't use, a watch that doesn't go with anything I have, and jewellery that's actually not bad. The watch is sort of a running thing, she always gets a watch, every time, even though I don't actually wear watches. However, she surprised me this year by getting me an iPod Nano. NOT something I expected in the least, although it's appreciated. Certainly more appreciated than the non-intuitive pain in the ass that Apple gave me for a setup of the iPod and iTunes, which I totally don't want but apparently have no choice. Thanks Apple, for making me waste so much time figuring out how to use your crap. --Kels (talk) 03:12, 26 December 2009 (UTC)
- A nice dinner with my family--haven't seen them for a year...TheoryOfPractice (talk) 03:23, 26 December 2009 (UTC)
- I had a 4-hour discussion with my stepmom's stepdad about Ayn Rand (he's an Objectivist), evolution (he doesn't believe in macroevolution), and UFOs (he believes in that Zecharia Sitchin crap). So I got to show off everything I learned here on RW in front of my family. That alone was an awesome gift. Tetronian you're clueless 03:27, 26 December 2009 (UTC)
- It was over freezing, with "weather" promised soon. So I tried to bust up the glacier in my parking lot with my plow a bit. I also watched Total Recall. Is that what Kristmuss is for? ħuman 04:48, 26 December 2009 (UTC)
- At Tet, wow, that sounds like... fun? ħuman 04:49, 26 December 2009 (UTC)
- At Kels, well, at least she cares and tries? Amusing though how the endless useless shit get bought and given. At TOP, good times (I hope). ħuman 04:50, 26 December 2009 (UTC)
- Twas much fun. Few things are more fun than talking about teh crazy with family. Tetronian you're clueless 05:01, 26 December 2009 (UTC)
- She tries and cares and I love her for it. The watches are like a running joke that we never actually say out loud, it's been going for years, and if I can't use a gift I usually know someone who can so it's all good. Less love for Apple, though. --Kels (talk) 05:20, 26 December 2009 (UTC)
- Thought of you northerners as I munched on turkey & gammon, sitting next to the pool. Fared pretty well - Father Ted box-set, District 9 DVD, pirate copies of Franklyn and Nine, Donnie Darko, socks (*sigh*) and a Haruhi wall-hanging, of which I'm kinda proud and ashamed at the same time. (PS Battle Royale = very good, avoid the sequel). --PsygremlinSprich! 10:39, 26 December 2009 (UTC)
- Actually, I got the novel. I kinda like to imagine Andy tearing into it, going on about how it is a liberal attempt to use lifeboat ethics. (PS Haruhi Suzumiya? Could it be there is another fan onsite?)--Thanatos (talk) 02:15, 27 December 2009 (UTC)
- I got six new white handkerchiefs. ГенгисRationalWiki GOLD member 14:00, 26 December 2009 (UTC)
- That means you can get married six times! --Kels (talk) 15:47, 26 December 2009 (UTC)
- Once is twice too many. ГенгисRationalWiki GOLD member 17:46, 27 December 2009 (UTC)
I got a much needed fridge. I also got The Post-American World, and I have to say, receiving a Christmas present is an odd way to find out you're actually a Muslim. DickTurpis (talk) 02:38, 27 December 2009 (UTC)
Pussy power[edit]
#2 cat has just come in rather the worse for war: clawed face & shoulder. t'other ½ is tending to him: most amusing: he's wrapped, nay tied, in a towel (which will never again be usable) while she trims and mops his fur. I am eating & honeychat 02:17, 27 December 2009 (UTC)
- Gawd, I had a cat who went through everything - skunks, porcupines, and his own endless murder spree. And a $350 or so dental appointment. He disappeared, as most of them do, probably trying to kill a badger or something. I hope yours is well soon enough. ħuman 05:56, 27 December 2009 (UTC)
- Skunk I have seen & cetera, possums and coons been seen marching around, even thought I saw porcupine tooth marks (seen AFOAL of beaver tooth spoor) but badgers? Не знаю (I don't know). If the cat hasn't withdrawn from human ken, it most likely means to keep on being part of the scene, so worry is useless. Sprocket J Cogswell (talk) 07:00, 27 December 2009 (UTC)
- Skunk I have washed off the cat; never seen a possum, have seen 'coons climb walls. I fight an annual battle with the porkies, since they love to devour spring tree bark, especially my willow. Have seen a badger walk past my kitchen door (a young one) and there's a den out in the woods. Badgers are fucking mean, and a cat, no matter how tough, is no match for an adult badger. Not sure how they'd do against the local foxes, but squirrels are pretty much cat toys. And then there's the deer... a different game all together. ħuman 07:07, 27 December 2009 (UTC)
- If you've not seen a possum then you've been looking elsewhere at exactly the right time. Country breezeways or town streets are equally welcoming. Just like rats only bigger and with sweet gnarly faces. If they can't run away they really do go catatonic, and none of your persuasions will avail. My mom used to hang soap and human hair clippings in the veggie garden to try and keep the deer away. Just you try and keep them away. Hah. Sprocket J Cogswell (talk) 07:18, 27 December 2009 (UTC)
- For deer I have considered purchasing some largely defensive weapons of food-shootin'. I've come face to face with a big doe outside my kitchen door, bucky and three chillun-deer were busy chewing the scenery. Pow, pow, powpowpow and I could have had 400 pounds of freezer fodder. ħuman 07:24, 27 December 2009 (UTC)
- Me and my brothers had hamsters growing up at various points in our childhoods. The last one I had died in 2003 (I think it was then) from wet tail disease which is kinda like the hamster equivalent of bowel cancer. That same year I got two goldfish and kept them in the same bowl. After five days one of them died. Nearly SEVEN YEARS LATER the other one is STILL ALIVE. Now as a uni student and grown man I'm pretty fucking sick of being welcomed home by a stinky goldfish bowl. I might get a dog when I get my own place. SJ Debaser 13:07, 27 December 2009 (UTC)
- Get a cheap ten-gallon aquarium and a cheap air-driven filter that allows you to use carbon and some foam. Don't worry about gravel and shit like that yet. Pour entire contents of smelly goldfish bowl into aquarium, add a little more water that has been "aged" each day. Buy nice cheap aquarium light, add bubbling diver and a pretty piece of rock. Watch goldfish double in size in three weeks. PS, don't overfeed. A tiny pinch a day is enough. ħuman 04:57, 28 December 2009 (UTC)
Let's fucking condemn this place![edit]
Or at least use it only for trivial matters. Thanks to Nx the forum looks awesome, and it is probably a much better place for the enormous threads that develop here. Though I love the SB and I look forward to reading more of the miscellaneous discussions that will happen here, I think we should relish the opportunity to use the new sandbox Nx has made for us. Tetronian you're clueless 03:49, 23 December 2009 (UTC)
- Yeah, but you "kill the SB" people are ignoring that we never know when a thread will be "big" or trivial. The SB is where this stuff should start, big threads get moved to the forum. What happened to the thread above where I described and argued this case? Tet, many threads here only run for two comments and one day. ħuman 04:05, 23 December 2009 (UTC)
- Agreed, but I still think that when starting a serious thread (or at least one that the poster knows will create enormous interest) we should first look to the forum. I don't think we should kill ("accelerate the afterlife for") the SB, just use it a bit less now that we have something better. Tetronian you're clueless 04:08, 23 December 2009 (UTC)
- "one that the poster knows will create enormous interest" - there is no way of knowing that in advance. I have started threads that I thought would be awesome, only to get no comments. People make offhand comments, and the thread turns into Godzilla. Do you see my point? Start here, cut/paste top forum after some level of interest and editing? ħuman 04:13, 23 December 2009 (UTC)
- (EC) Tetronian (obviously not your real name), intolerance for the saloon bar is typical of deceitful liberal trolling. Wikipedia doesn't allow articles about the saloon bar because it is run by liberal atheists. I have looked at your recent edits, and it appears that you believe (contrary to evidence) that Gawd did not create the saloon bar when, in fact, Gawd did. Continuing to fill our pages with such lies will earn you a block. Godspeed! Lord of the Goons The official spikey-haired skeptical punk 04:17, 23 December 2009 (UTC)
- @Huw: I see your point and I suppose that is the best method. Although I think in that case the size of this page will only be slightly lessened. But whatever, you're probably right. Tetronian you're clueless 04:21, 23 December 2009 (UTC)
- Thanks... hopefully people will learn to recognize the Godzillas early on and move them to the forums? Perhaps down the road we will also learn to start "light and trivial" sections here, and "serious for the ages" threads at the fora. During transition is a difficult time to predict the future. ħuman 05:18, 23 December 2009 (UTC)
- At the moment, it's a judgement call on behalf of the person starting the new topic. I'm happy to leave it up to them to decide. I'll certainly check both, it's not particularly difficult considering I already have 2-3 email accounts, RW, a forum or two, DeviantART and Arsebook to check (and this is a lull in my Internet forum whoring activity). So this isn't a problem to add another half a place to check. sshole 17:00, 24 December 2009 (UTC)
- See Forum:Do_you_prefer_this_or_the_Saloon_bar?#Suggestion.2Frequest -- Nx / talk 17:05, 24 December 2009 (UTC)
- This place suits me better than that forum place. I think the forums would be a good place for SeriousWiki:SeriousBusiness and this place can be for kicking back and bullshitting. — Sincerely, Neveruse / Talk / Block 17:02, 24 December 2009 (UTC)
Forum contents box[edit]
I stuck in the DPL Nx wrote to test drive it at the top of the SB, hopefully between the TOC and chalkboard for most screens. It probably needs some tweaking, but what do people think, in general? ħuman 22:32, 24 December 2009 (UTC)
- They don't line up with each other but they look good to me. I think we need to do something with the top of the bar. It's nice and "quirky" and all, but it's very intrusive and looks a bit on the messed up side. sshole 00:42, 26 December 2009 (UTC)
- I like. WèàšèìòìďMethinks it is a Weasel 00:52, 26 December 2009 (UTC)
- I'm glad it's there, and glad you guys approve, and yes, it would be really nice if it were the same distance from the bartop as the chalkboard. A single br/ was too much space. I think the bartop is a bit taller than it needs to be. I also considered pasting the DPL list into the chalkboard, but I didn't want to muck it up too much. One last comment: the way I did it was to simply paste in the whole DPL segment; really it should be in a subpage or template and then transcluded. ħuman 23:12, 26 December 2009 (UTC)
- I removed that whitespace, but Pibot added it back. Hopefully the way I changed it now won't break after Pibot edits it again. Anyway, what are we going to do with the Saloon bar? I have suggested making it a stickied thread in the General discussion forum. This would involve moving the Saloon bar and its archives to the Forum namespace. Forum:Saloon bar test is how it would look with the forum thread header template. In this case we could remove the forum toc. -- Nx / talk 00:04, 27 December 2009 (UTC)
- Leave it as it is. I am eating & honeychat 00:12, 27 December 2009 (UTC)
- It'll be good to integrate the SB into the forum structure, but I reckon it would be nice at the top of the list practically in its own category, rather than looking like a "sticky". And providing the sidebar link still works and the redirects that are set up to direct to it change accordingly, I don't think people will even notice the difference. sshole 00:56, 27 December 2009 (UTC)
I think part of the layout difficulty is that the chalkboard thing is called up by "bartop", making it harder to prettify things. Also, of course, the pibot template eats up some space. Can we perhaps integrate all these things into bartop so it is one simple block at the top? Maybe bartop has too much clutter? ħuman 05:54, 27 December 2009 (UTC)
- Let's give it some time; a week or so into the new year we can review it. Clearing up the bartop would be a good thing to do if we merge it into the forumspace. I'm in favour of doing that; as I noted above, it won't make much practical difference, as all the redirects to it will change accordingly and it will look and act the same; it'll just change namespace and have a link to it on the forum thread list. It would also make it easier (from a "do we want to" angle, rather than a technical angle) to split bits off. sshole 18:18, 28 December 2009 (UTC)
Can we add the last edit date/time in small font after the thread title? ħuman 22:30, 28 December 2009 (UTC)
- Done, and I've added a different background for watched threads with new edits. -- Nx / talk 23:08, 28 December 2009 (UTC)
The "Cloward-Piven strategy"--anyone familiar with this?[edit]
OK, I know this op-ed piece is more than a year old, but apparently it has been making the rounds on various right-wing blogs. This, my friends, is the new right-wing narrative of the financial crisis. Hell, even Wikipedia has a page on it (though sanitized of most of the conspiracy stuff).
Enjoy. --Wet Walnuts (talk) 07:39, 25 December 2009 (UTC)
- I have yet to see any version of Cloward and Piven's article online. You could check JSTOR or EBSCOhost if you have access to it. (I do, through the Rutgers University Library, so maybe I will look for it later). I don't think there are any free versions of it though. --Wet Walnuts (talk) 09:05, 25 December 2009 (UTC)
- I think I meant can we make an RW article on it? ħuman 23:21, 26 December 2009 (UTC)
OK, I started an article on this: Cloward-Piven strategy. It is based on the current text of the Wikipedia article, so feel free to edit it mercilessly. I will add some more material later. --Wet Walnuts (talk) 17:18, 28 December 2009 (UTC)
'Tis the season....[edit]
Thanks to this lovely strain of winter weather in the midwest, my flight back to Minneapolis arrived at 11:00 CDT this morning; 14 hours after scheduled arrival!!! Lord Goonie Hooray! I'm helping! 20:35, 27 December 2009 (UTC)
- Did you hear about the Eurostar shizzle over here in the week before Christmas? People were stranded on both sides of the channel cos the trains got cancelled due to the weather conditions, which were largely snow-related. SJ Debaser 22:53, 27 December 2009 (UTC)
- Amazingly, it's been raining for the past 3 days here in New Jersey, but today was a beautiful day and it was warm out. I just don't get it. I pity you people who get more than 10 inches of snow each year. Tetronian you're clueless 22:58, 27 December 2009 (UTC)
- I am currently back home in Soviet Canuckistan and am reading with dismay news reports about 3-4 hour and longer delays b/c of post-last-bit-of-badness-enhanced-security-measure. I have a flight home in a few days with a connection in Chicago that I have a half-hour to make. No way am I making it. TheoryOfPractice (talk) 04:21, 28 December 2009 (UTC)
- All the global warming shit IMHO. We've fucked up the planet enough and now we have to clean up the mistakes our ancestors made. Of course, some people can't be asked with this. SJ Debaser 01:39, 29 December 2009 (UTC)
- "Asked"? Did you mean 'arsed'? Or is this a young persons' neologism that I am unfamiliar with? ГенгисRationalWiki GOLD member 10:29, 29 December 2009 (UTC)
What the fuck happened?[edit]
(Sorry for the long post. This is all hitting me just now.)
My parents are getting divorced.
There has been no screaming, no threats, no fighting, no lawyers—no indication of anything. Mom and Dad are handling it the same way they've always handled everything—quietly and austerely (maybe that's why everything went wrong in the first place?).
What am I supposed to say? What am I supposed to feel? Am I supposed to feel angry? Guilty? Because I don't. I don't know what normal people are supposed to say. I want to say to Mom and Dad that they will always be my Mom and Dad, even if Dad lives in the next town over, and that I will always be their son. Can I tell Dad that I'm impressed with how mature he's been about it—admitting it's entirely his fault, and promising that he'll still support all of us even when he's no longer living with us? Can I tell Mom to please stop apologizing—that I trust her to know what's best for them, so she can stop second guessing herself?
Check that—I guess I do feel something; I feel unhappy (for lack of a better word). See, over the past thirteen years, Mom and Dad have worked their way up from being part of the "working poor" into solidly middle class territory. They progressed from having one crappy, dying car, living in the cheapest house available for rent (no one would loan them money to buy one) in the dumpiest part of a small town in the poorest region of Oregon, with no money and working several jobs at a time, to having two new cars, owning (!) a nice, new house, and each having relatively flexible, well-paying jobs. And their children are growing up. Their oldest child may not be financially comfortable, but he is independent, and clearly has big things ahead of him. Their middle child is finally wising up and has made his first major steps toward getting out of the unending shitstorm rat race that was his life. I'm preparing to go to college, and to live on my own. Everything was getting better. In a decade or so, they could even retire, and maybe even live comfortably for the rest of their lives, and be loved by their three wonderful, grown kids, and maybe even grandkids. We were living the American Dream—that with hard work and discipline, you can make a better life for yourself. Now what—does the American Dream include your marriage going down the shitter, too?
It was supposed to be perfect, it was going to be perfect, and now it's not.
What the fuck happened? Radioactive afikomen Please ignore all my awful pre-2014 comments. 03:46, 28 December 2009 (UTC)
- They outgrew each other? They spent 20-odd years kicking ass and taking names, and giving you kids a better life than they had, and doing that job leaves precious little time for "staying in love" or whatever that is. Oh shit, that sounds like I'm blaming you kids. I'm not. It was what they wanted to do, that their intimacy or closeness got lost along the way... was sadly typical. I hope that in the long, and even short, term, they will be best friends, after all, they shared so much. It's still "perfect" - you have two parents who love you and care for you and want the best for you, and can even probably help you get to where you want to be in life. They just aren't husband and wife any more? Maybe I'm not the best "counsellor" here because 1) my parents are still together (although I'm sure they've been through plenty of shit) and 2) I never parented, just, well, dated a lot of divorced women... Anyway, Jacob, while RW might not be the best place to ask your question, you're doing the right thing by asking it. Do you have friends with divorced parents, etc.? Reach out to them as well. Everything will be ok. Nothing is ever "perfect". ħuman 04:02, 28 December 2009 (UTC)
- Aw, man, RA, that well and truly sucks.
It's probably your fault for being such a rotten kid.Sorry to hear the bad news--but hell, you may be in the minority among people your age (...and BTW, I thought you were well into your twenties for some reason) in having your folks together at this point in your life. It strikes me that as much as this does suck, we as a society have gotten better at making the process easier for everyone involved--much more so than when Huw or I were your age, anyways. Anyway, as the old man above said, it'll be okay in the long run for everybody. My divorce damn near killed me, but it wasn't too too long before I got over stuff and really turned my life around. You're young and resilient--you'll bounce back. TheoryOfPractice (talk) 04:16, 28 December 2009 (UTC)
- Right now you don't need to be told that life goes on, so I won't. You will find that others who have been through it understand a bit better than the ones who have not. Divorce is well known to be a crazy time, and I hope you can make it through with a level head. One breath out, another one in; sometimes that's all you need. Sprocket J Cogswell (talk) 04:46, 28 December 2009 (UTC)
- First of all nobody knows what really happens in someone else's relationship, particularly if the couple have any sense of privacy. Most of the disagreement is either unspoken or goes on behind closed doors. I don't know what age all your siblings are but couples often stay together for the kids and if the kids are grown up enough then maybe they can finally get the divorce that they have probably wanted for a long time. Romantic love can overcome differences for a certain amount of time but relationships change as the individuals change (or conversely, they do not change). I have known many people who lived together for years then decided to get married and find their relationship break down in a very short period; the dynamics of the relationship changed. It is hard to place blame on any individual because even if they say it's all their fault they may only be doing that to smooth the process. Getting divorced isn't any fun, especially after many years together. So if they can do it in an amicable way and remain friends afterwards then treat that as a consolation. What we may think of as an ideal situation is often only transitory. At least you are still going to have relationships with both of them. It may happen that those relationships lead to something else as yet unknown for your parents and even members of your family. Hopefully your parents can find new happiness and fulfillment which had been missing. It is better that they separate amicably rather than grow old together living with resentment and bitterness. ГенгисRationalWiki GOLD member 10:46, 28 December 2009 (UTC)
- Hmm... were both of your parents conservative? Sorry to hear that RA, I know it's not easy. While I myself can't relate, a lot - and I truly mean A LOT - of my friends have divorced parents, so you're not alone. My best mate's parents got separated when we were about 13 totally out of the blue - my friend dealt with it fairly OK, his whole family still gets along, and he still sees his dad a fair old bit. Another person I know had her parents divorce just this summer - they, like your parents, waited 'til she was going to university, as they told her a couple of weeks before she left. As someone in your age group, and someone that started university this year, let me tell you from first hand experience that it's a great way to escape problems in the outside world. SJ Debaser 12:13, 28 December 2009 (UTC)
- I was reading an article about how adults are affected when their parents divorce. You have plenty of help for young kids and plenty of sympathy, but when you're a bit older or possibly left home, everyone just expects you to man-up and deal with it and see it for what it is: a separation that doesn't negate what your parents feel about you or destroy the past completely (which is probably what Human is getting at above; please don't join the Samaritans, it'll be Lemming Tuesday all over again...). So when you're over 18 (hell, over 15 half the time) and it happens, few people seem to sympathise, as few people experience it at that age and just expect you to roll with it the same way as if you heard they were buying a new car - and that's just unfair really. So be angry, shocked, annoyed or whatever, you have a right to do that whether you're 13 or 30 when your parents split, but you have to reach the same conclusion regardless of your age: accept it as their decision, for the best, and that it's no reflection on you. And I doubt you're letting the American Dream down the shitter, you're just experiencing part of what modern culture has; get them a divorce cake or something if they're in the spirits to handle it. You're lucky it seems mutual and fairly peaceful, though; I got woken up at 3AM by the shouting match that split my parents up permanently (I was just on the verge of being old enough to "deal with it"), which wasn't particularly fun. sshole 15:29, 28 December 2009 (UTC)
- Nothing is ever perfect. When things are going smoothly and all is looking good, something nasty is about to happen; that's just the way things work. It's possible that with a separation your parents will establish a new relationship and decide to stay together. It's also possible that the split will let them stay on friendly terms and get on with the rest of their lives. The important thing is for you to realize that in any relationship BOTH people bear some responsibility for what happens. Try to stay out of arguments and focus on getting your life going in a good direction. You may need to try for more financial aid, scholarships or part-time work to make it all work. Good luck Hamster (talk) 18:44, 28 December 2009 (UTC)
NephilimFree[edit]
Hey, currently in a debate with NephilimFree in the comments section of one of my youtube videos, and he's trying to Gish Gallop on me. He made one claim, and I'm not sure where to look to find refuting evidence. He says that 80% of human proteins don't exist in apes (debate here: ).--Mustex (talk) 07:09, 28 December 2009 (UTC)
- That is not even wrong. Humans would have that many proteins in common with plants I would suspect. - π 10:09, 28 December 2009 (UTC)
- Depends on the metric you use. If you're talking about exact structural match-ups then I'd say he might be right. There are slight differences between dehydrogenase in slugs and dehydrogenase in humans, for example; this is just what evolution does, and citing it as evidence against evolution is outright stupid. However, if you use types of protein, or a less rigorous match-up process looking only at tertiary structure and ignoring the odd amino acid out of place outside the core centre of the protein, then they're practically identical in most animal species - we'll all have particular enzymes to do particular functions. In wildly different species, they could be wildly different (haemocyanin, haemerythrin, haemoglobin); in very similar species, they may have very similar structures. But the best thing to do is to ask for the reference for "80% of human proteins don't exist in apes" - because the moron is either talking out of his arse or using evidence for evolution against it. sshole 15:16, 28 December 2009 (UTC)
- I once spent 2 weeks trying to get NF to admit that different breeds of dogs had different shapes, and therefore different morphologies. In my most recent encounter, I challenged his claim that American Indian tribes across the continent had a universal greeting gesture. This time, he readily admitted that he had pulled the claim out of his ass...in his own creepy, condescending way. — Sincerely, Neveruse / Talk / Block 15:45, 28 December 2009 (UTC)
- He's also trying to claim that EQs prove that humans and apes can't be related (he cited an article on that, which I need to get off my ass and look up later today, but I find it hard to believe that anyone could suggest that the ratio of brain size to body size could disprove the relatedness of two species).--Mustex (talk) 18:00, 28 December 2009 (UTC)
- No, you can't particularly do that with brain sizes. One of the things that sets humans apart from others is the extended brain power. That'd be like saying that a golden labrador and a chocolate labrador can't both be dogs because they have different colour coats; that's the major difference that differentiates them. Indeed, brain size and power can evolve at such a pace that of course they're going to be vastly different. sshole 18:06, 28 December 2009 (UTC)
Found on FoxNews.com comment section.[edit]
Regarding sanctions on Iran someone posted this nugget....
Charming. Aceof Spades 08:00, 29 December 2009 (UTC)
Back[edit]
What did I miss? Totnesmartin (talk) 10:02, 29 December 2009 (UTC)
Anyone read/heard of this dreadful sounding thing?[edit]
Sounds pretty much like the linguistic version of creationism. -- YossarianThe Man from the USSR 11:23, 27 December 2009 (UTC)
- More like a hardcover version of Conservapedia. The Goonie 1 What's this button do? Uh oh.... 20:37, 27 December 2009 (UTC)
- Dear fuck that first page that you can preview is not even wrong... sshole 23:19, 27 December 2009 (UTC)
- That's scary shit. I can't believe no one called him out on his bullshit in the reviews section. Tetronian you're clueless 15:40, 29 December 2009 (UTC)
Brown like that[edit]
Thought you'd like this little rap. "I do that thingy with my mouth and then I frown like that". CrundyTalk nerdy to me 14:46, 29 December 2009 (UTC)
Tea partiers[edit]
Who here has gone to a "Tea Party" or done any "Tea-Bagging" lately? I know I went to a local Tea Party here in the Twin Cities to protest big government taking all my hard earned money (from playing the stock market), and there were literally tens of people there, all ready to go teabagging all over the Minneapolis area! I must say, it was a gay old time, indeed! Conservative Punk (talk) 04:45, 30 December 2009 (UTC)
- Did you go to the liberal media tea-bag protests to protest that the media ignores the tea-baggers? I heard one got over 100 protesters. How much longer is the mainstream media going to ignore this growing movement? - π 06:43, 30 December 2009 (UTC)
Positive Woo?[edit]
Ok, normally I'd put this on the to do list, but I wanted to get some actual feedback on this. I'm considering a page on "positive woo" (btw, I got distracted by finals and Christmas, I promise to get the David Farrant page done soon). This was kind of inspired by the current page on the Weigh Down Diet, which I think is really overly harsh. First off, based on what the page says, the diet never claims that prayer literally burns calories, but simply that Jesus can motivate you to lose weight, and there's no question that religion can be motivating. Secondly, it annoys me because the page first acknowledges that "Shamblin states that someone should "listen to their bodies" and only eat when hungry, equating excessive eating with the sin of greed." but then goes on to say "While the diet doesn't advocate anything bad, it may lead people to possibly starve themselves unnecessarily. The human body produces hunger and thirst pains for a very good reason and ignoring these can be outright dangerous." ignoring the fact that the article already stated that the diet encourages people to eat when (and only when) they're hungry.
What I'm thinking of with a "positive woo" article is that, when dealing with psychological issues (ie lack of motivation, depression, anger, etc), things that are basically woo can still produce the desired effect, and even if they're not true, they thus cease to be scams (hell, this is essentially what shamans have done throughout human history all over the world: they perform a cultural ritual and fix some variation of depression). Granted, the ethics of using the placebo effect in this way is still questionable, but if the woo provider believed it, and it produced the desired effect, I think it would be hard to say it was truly wrong. Does anyone agree that: a) this would be a suitable argument, and b) the Weigh Down Diet article should be revised?--Mustex (talk) 01:43, 26 December 2009 (UTC)
- Weigh down looks fine to me. I think you are looking for the placebo effect? ħuman 04:46, 26 December 2009 (UTC)
- To an extent, yes, but I'm talking more about a specific use of the placebo effect, and the ethical implications thereof, than about the placebo effect itself. And, if you think the Weigh Down Diet page looks ok, would you please explain to me why this diet is any more likely than any other diet to cause people to starve themselves?--Mustex (talk) 05:01, 26 December 2009 (UTC)
- I thought I was being quite nice with that article, actually. The reason that it says "starve themselves" is based on the criticism by a qualified dietician in the article I was reading that brought it to my attention. Diets such as this do advocate eating less, but people can take this too far; in the case of Weigh Down, people are led to ignore hunger pains, which can be dangerous (one of the best parodies of diets I've seen is in The Devil Wears Prada, where one of the girls mentions she's on a diet where she eats nothing until she's just about to pass out, and then she eats a cube of cheese!). But what you're seeing and noticing is actually very much the textbook case of woo. It's not that it doesn't work, but that the explanation around it is superfluous and (by a logical, methodological naturalist and rational view) unnecessary. The Weigh Down Diet simply advocates not eating so much (just as Brain Gym very rightly advocates physical exercise if you cut out the bullshit), which is of course going to work. But the thing is, good advice, particularly good dietary advice, is free, can't be patented and can't be sold. So you take some good advice and mix it with something that sounds awesome, new, special. As Human points out, this can be "positive" because of the placebo effect, but this doesn't stop the mechanism being total and utter bullshit. Homeopathy "works", but only as a placebo; the whole thing about it being better than a placebo, and explanations such as the Principle of Similars and Water Memory, are wrong and can only be wrong. That's what woo is all about: it's about misleading people into thinking that they're doing something that they're not, and more often than not, making some cash out of the gullible fools while you're at it. 
The ethics of the placebo effect are related to this, namely that using a placebo as an intentional treatment (outside a trial) is unethical because of the deceit and breach of trust between patient and doctor (the thought that the placebo won't cure them and you're denying them treatment is actually less of an issue, as you would only want to administer placebos for non-threatening conditions, hypochondria, anxiety and so on, not cancer or anything like that). So too, with woo, you're deceiving people by providing - well, selling - a bullshit explanation to go with something simple that would otherwise be free; "be positive", "don't eat so much", "be active" and so on. Mostly, this sort of deceit is considered morally wrong, so by extension placebos and woo explanations are also morally wrong - even if they do "work". sshole 22:50, 26 December 2009 (UTC)
- If people are encouraged to ignore hunger pains, why does the article say they're told to eat when they're hungry? If the diet tells them to eat when hungry, and they don't, that's on them. Also, yes, the mechanism is woo, but any mechanism for motivating people is, at some level, going to be woo. Willpower pills don't exist. But, if it works, is it really bullshit? Particularly if the people selling it actually believe it, and aren't just conning people out of their money? I mean, what separates this diet from any other book about positive thinking and religious devotionals? At any point in this diet do the people claim it to be scientific, or anything other than a way to motivate yourself? As for the issue of selling it, I'm pretty sure that outside encouragement can be a big motivator, which I'm sure some people would be willing to pay for in book form. It feels good to have someone tell you you can do it, or that God can help you do it. And if the person is not a doctor, and sincerely believes in the religious system he/she is teaching, then I fail to see what possible problem there could be.--75.107.28.120 (talk) 05:44, 27 December 2009 (UTC)
- Mustex (I suppose). You say: but any mechanism for motivating people is, at some level, going to be woo. I disagree. If somebody tells a smoker "If you don't stop smoking then there is a good chance you will die," that may motivate them and there would be no woo involved.
- On the other hand, it is certainly true that getting people to pay money for something makes them more likely to value it and increases the placebo effect. But so what? Imagine that I sell you expensive colored water and tell you that it will make you feel better because the great god RA has blessed it - and this works because it gives you a good shot of the placebo effect. I make a nice profit and you are happy. I may even be sincere. (I suspect that most homeopaths are sincere.)
- By your argument there is no problem because I'm sincere and the product "worked". But at another level we are both completely deceived - and this would seem to be a problem, especially if we go on from there to make other decisions about the power of RA based on this belief.--BobBring back the hat! 15:29, 28 December 2009 (UTC)
- We can condense the point further: do the ends justify the means? What if the end of a religion-based diet is that they become a fundamentalist or part of a cult, when all they really needed was some encouragement from friends and some advice from a dietician (which, like Bob's smoking analogy, is not woo) - is that particularly right? Maybe, maybe not, as it's mostly irrelevant - the real point is whether the ends justify deceit, obfuscation, bullshitting, tricking, conning, stealing and the promoting of ideas that could be applied in contexts where they are not only inappropriate but downright dangerous (praying for your nerves or taking a homeopathic remedy for "feeling under the weather" might be relatively harmless, but this leads to people starting to believe that the same mechanisms can work for cancer or HIV, and that is the biggest problem).
- But not all mechanisms for motivation, or everything that works on a psychological level, are woo. If you call it the placebo effect, it's not woo; if Derren Brown openly admits at the start he's going to be a dirty trickster and confuse people for entertainment, it's not (strictly) woo - whereas, on the other hand, a homeopathic remedy based on water memory is woo, and someone claiming that they can talk to the dead when they're really just cold reading is woo (and outright sick). The point of woo is that it's superfluous window dressing on what really works. A woo explanation is patently false, and when you apply it, you're encouraging people to think that it's true - and this leads to, as I said above, the biggest danger: that people will apply it where it is inappropriate. sshole 15:51, 28 December 2009 (UTC)
- Yes, but there are still other issues to be considered. First off, I think homeopathy is wrong, even when the practitioner believes in it, because it clearly doesn't work, and can thus prevent other treatments, and provides no benefit for the money. If I cure your depression, though, for the cost of a book, clearly you've gotten your money's worth. Furthermore, you seem to think that "you'll die" is as motivating as God, but I don't think that's necessarily the case (ie suicide bombers), particularly when death by cardiovascular disease is many years in the future, as opposed to "sin", which is something that's happening now. I would certainly object to using it to convert people to another religion, but if it's a system that works within the religion they were already a part of, so what? They still believe exactly what they did before, and at no point has the practitioner lied, and the benefits clearly did outweigh the price of the book if they lost weight. When you say that this diet could be used to get people to join cults, that's just Argument from Adverse Consequences, unless the diet is being actively used to recruit people into a cult.--Mustex (talk) 19:04, 30 December 2009 (UTC)
Request - Acrobat[edit]
Does anyone here have a full copy of acrobat (as in not reader)? If so then could you convert a table in a PDF document to an excel spreadsheet for me? CrundyTalk nerdy to me 21:20, 29 December 2009 (UTC)
- I've got Acrobat Professional 8.0. Think that will work? Macai (talk) 21:21, 29 December 2009 (UTC)
- Nutty's emailed me so I should be fine, but thanks for the offer. CrundyTalk nerdy to me 12:53, 30 December 2009 (UTC)
Wikileaks down[edit]
"To concentrate on raising the funds necessary to keep us alive into 2010, we have very reluctantly suspended all other operations, until Jan 6" I am eating & honeychat 04:46, 30 December 2009 (UTC)
- Bummer, I hope they make it back. ħuman 06:27, 30 December 2009 (UTC)
- But you can still access Hovind's doctoral thesis. Tetronian you're clueless 16:17, 30 December 2009 (UTC)
Roman Tax Collection[edit]
Could someone please explain this system to me, because the way they always explained it to me in Sunday School seems highly implausible. Basically they claimed that people made bids for positions as tax collectors in Rome, at which point they were given a license to extort money from people, and anything they could take above their bid was their payment. The problem I have believing this is that apparently they were ranked (since Zacchius was the chief tax collector for the region), in which case what motivation does anyone have to bid for the highest ranking tax collector position, instead of going for the lowest ranking tax collector position, and thus being able to bid lower, and keep more? Someone, please explain.--Mustex (talk) 05:08, 30 December 2009 (UTC)
- Sounds like a MLM scheme to me - you want to have lots of people in your "downline" to get a little bit of cash from. ħuman 06:29, 30 December 2009 (UTC)
- I think this only happened under Caligula or one of the real nutters like that. The Roman treasury would only give licences to the highest bidders, say there were 10 licences and 50 bidders. So you had to bid high enough to get a licence, but not too high as you would not be able to cover your bid. I think the Romans got the money up front and you spent the year making it back. Public buses are often run under a similar system these days. - π 06:37, 30 December 2009 (UTC)
- Tax farming. - π 06:38, 30 December 2009 (UTC)
- Unfortunately this sort of thing still goes on. Some African countries pay their customs officials either very little or people pay to obtain those jobs as the illicit rewards can be substantial in comparison with pay levels of the general populace. ГенгисRationalWiki GOLD member 09:21, 30 December 2009 (UTC)
- Ok, but what powers does the Chief Tax Collector have that the other tax collectors under him don't, that allows him to make any more money than them? That's the part that always confused me.--Mustex (talk) 18:54, 30 December 2009 (UTC)
- Do you know how MLM works? Say I get 10% (bribe/kickback) of what 100 people make, and their earnings are 10% kickbacks on 100 more people each, who are all earning 10% kickbacks on what they do, I get the most money.
- Yeah, that's the deal. Imagine the pyramid; even though the guys at the bottom kick up only a little, there are a lot of guys at the bottom. The guy at the top gets all their little amounts, which adds up to a lot. Plus, being at that position means you can earn even more by accepting industry bribes. They would favor their friends or whatever industry paid them not to be taxed by the little guys.--Tom Moorefiat justitia 20:04, 30 December 2009 (UTC)
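The kickback arithmetic in the two comments above can be sketched with a toy calculation. The 10% rate and the 100-people-per-level sizes come from the comment itself; the per-collector haul is an invented figure purely for illustration:

```python
# Toy model of the three-level kickback pyramid described above.
# Assumed numbers: 100 mid-level collectors, each with 100 collectors
# under them; every bottom collector extorts 1,000 units (made up).
KICKBACK = 0.10        # each level keeps 10% of what the level below collects
GROUP = 100            # people directly under each collector
BOTTOM_TAKE = 1_000    # hypothetical haul per bottom-level collector

mid_take = KICKBACK * GROUP * BOTTOM_TAKE   # 10% of 100 bottom hauls
top_take = KICKBACK * GROUP * mid_take      # 10% of 100 mid-level takes

print(mid_take)   # 10000.0
print(top_take)   # 100000.0 - the chief earns 100x a bottom collector
```

Even though each layer skims only a small fraction, the chief's cut scales with the total number of people underneath him, which is why the top position is worth bidding the most for.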
- That's only the early empire though. The late empire makes for an interesting comparison. Whole regions are assessed, and are required to contribute a fixed sum. The richest and most known person in the region is appointed tax administrator by force. He has to extort the taxes from his fellow citizens. If he doesn't get the whole assessment for the region he has to pay the difference himself. As you can imagine this sort of Scylla and Charybdis was something people tried very hard to avoid, even to the point of giving wealth away to family to avoid being the happy one chosen.
- The republic is also interesting. The basic and highly simplified system here is that after your term as praetor, quaestor, consul, etc. (i.e. in Rome) you spend huge sums trying to get elected as propraetor, proquaestor, proconsul (i.e. in the Provinces). Guess where those elected officials recuperated this investment (hint: see the etymology of province — pro-vincere, "after the conquest"). Pietrow (talk) 21:11, 30 December 2009 (UTC)
15 Most Heinous Climate Villains[edit]
I found this article I think you guys might like. Ryantherebel (talk) 19:09, 30 December 2009 (UTC)
- Uh. Cool. Thanks for telling us about it. But you forgot the link. Here it is. --Tom Moorefiat justitia 19:20, 30 December 2009 (UTC)
- Oh shit! I'm terribly sorry. Ryantherebel (talk) 20:12, 30 December 2009 (UTC)
- I think all fifteen should be Inhofe, personally. He even tried to make every member of his committee read State of Fear.--Tom Moorefiat justitia 20:15, 30 December 2009 (UTC)
New Years hurrah[edit]
It's 9:03am Dec 31st 2009 and I am about to get my New Year started. Hope all enjoy themselves! Acei9 (talk) 20:22, 30 December 2009 (UTC)
- Wow... my partying doesn't start for 36 hours or so. Good on ya.--Tom Moorefiat justitia 20:12, 30 December 2009 (UTC)
- Well, have a happy New Year, Ace. Mine isn't for another 32 hours, as well. AnarchoGoon Swatting Assflys is how I earn my living 20:27, 30 December 2009 (UTC)
- Mine isn't for about 22 hours. I'm still not sure what I'm doing. My ex has invited me to her party, but I really don't want to see her, because I'll have a shit time and the decade will be off to a poor start. What was my point again? Oh yeah, happy new year everyone. SJ Debaser 23:07, 30 December 2009 (UTC)
- Happy New Year all. I'm writing an essay about my reflections on the decade, I'll be sure to post it when I'm done. Tetronian you're clueless 23:18, 30 December 2009 (UTC)
- 'Tis done. See here. Tetronian you're clueless 03:43, 31 December 2009 (UTC)
- Love your new sig, Ace. Lily Inspirate me. 09:19, 31 December 2009 (UTC)
Happy New Year[edit]
Although I'm pretty sure that Gentlemen Pi and Ace have already been rendering drunken versions of Auld Lan Syne (sp?) by now, I'm about to hop onto an aeroplane, so let me wish you 'orrible lot an early and very happy, healthy and safe 2010. --PsygremlinRunāt! 09:01, 31 December 2009 (UTC)
- Auld Lang Syne, Psy. The 'lang' being Scottish for 'long' rather than referring to a Welsh saint. (PS. I know it should be ll.) Lily Inspirate me. 09:31, 31 December 2009 (UTC)
- The cognoscenti know not to cross the arms until the last verse - however most of us don't know more than the chorus and mumble the rest! Bob Soles (talk) 11:21, 31 December 2009 (UTC)
- Happy new year from Australialand, it's been 2010 for about 15 minutes now and so far it's turning out to be an alright year - I've already been drunk for 100% of this year! -Redbackis gonna bite you 13:16, 31 December 2009 (UTC)
- Ugh, now I've been hungover for about 25% of this year. Where's the Vegemite? Nothing cures a hangover better than eating cooked beer waste. -Redbackis gonna bite you 21:14, 31 December 2009 (UTC)
- Happy new year from Irelandland.--Ask me about your mother 13:18, 31 December 2009 (UTC)
- You lucky bastards! It's not 2010 here for 14 more hours. Tetronian you're clueless 15:24, 31 December 2009 (UTC)
- It's not even drinking time yet. But it is almost pizza time. --JeevesMkII The gentleman's gentleman at the other site 17:17, 31 December 2009 (UTC)
- I just had mine. I'll probably order more later. Pizza does seem to be the typical New Years Eve food. SJ Debaser 17:21, 31 December 2009 (UTC)
Ugh, I fucking hate the new years. License to be a jackass. — Sincerely, Neveruse / Talk / Block 17:55, 31 December 2009 (UTC)
- 2009 couldn't end any quicker. This has not been an overall good year, in fact it may rank very low if I were to take the time to rank my years on Earth. Pros: sister got married, bought a car. Cons: Dad lost his job after 41 years, I got dumped by the girl I was sure I was going to marry (on the day of a Social Distortion concert of all days), lost my grandmother, had surgery for the first time, band broke up, joined another band that broke up. Bring on 2010!!! Happy New Year to all y'all. Aboriginal Noise with 4 M's and a silent Q 18:32, 31 December 2009 (UTC)
- I love New Year's Eve! Going to get my champagne on, going to dance and hot-tub at a party... fun times. You need to go to better parties if all you meet are jackasses.--Tom Moorefiat justitia 22:40, 31 December 2009 (UTC)
- WAHOO! It's finally next year! Happy New Years to all! Tetronian you're clueless 05:51, 1 January 2010 (UTC)
Avatar Rehash[edit]
So I finally got out and took in Avatar..... I will withhold my review, but I will say that my expectations were fulfilled in another area: The Fundies Hated It. For the enjoyment of the mob, here is the Christiananswers.net Review of James Cameron's Avatar. Also for your enjoyment, check out their review of Walt Disney's Princess and the Frog. SirChuckBHITWIN FOR PRESIDENT! 13:34, 31 December 2009 (UTC)
- I must say that I quite liked it in 3D--BobBring back the hat! 18:54, 31 December 2009 (UTC)
- I was at a local "lots of shopping" place last night hitting Staples up for some reinforced tape, and there was no room at the inn. The parking lot was filled almost to capacity in front of pretty much every store. All I could figure is that the multiplex behind the shopping zone had Avatar on nine screens or something. Are there any other blockbusters out right now? ħuman 21:30, 31 December 2009 (UTC)
- What else did you expect from a site that rated goddamn Blade Runner and There Will Be Blood "extremely offensive"?--User:Theautocrat/Sig 02:40, 1 January 2010 (UTC)
- Their review of "Dogma" wasn't as scathing as I'd thought/hoped it would be. Still "extremely offensive" though. Aboriginal Noise with 4 M's and a silent Q 03:15, 1 January 2010 (UTC)
- If you want a treat, just read their review of "Golden Compass". Absolutely hilarious. --User:Theautocrat/Sig 03:41, 1 January 2010 (UTC)
- The one that will always have a special place in my heart is Happy Feet. Yes, the movie about the dancing Penguin.... Apparently it was sent directly from Satan to corrupt our children, and no, I'm not making this up.... I also love the comments sections, it's a real poe test. SirChuckBHITWIN FOR PRESIDENT! 05:23, 1 January 2010 (UTC)
- It was a very political movie, but I'm surprised people went that far. Tetronian you're clueless 06:05, 1 January 2010 (UTC)
What Darwin Never Knew: A Creationist's Nightmare[edit]
Did anybody happen to see the PBS NOVA special What Darwin Never Knew? It was really interesting, as it discussed how modern genetics is being used to prove that life evolved from the same sets of genes, except in a more informational, better-researched format than this sentence. I highly recommend, if you didn't see it, that you watch it. It can be watched here. Again, I highly recommend it! Lord of the Goons The official spikey-haired skeptical punk 23:03, 30 December 2009 (UTC)
- Was it specifically anti-creationist, or just "here's some interesting science" in it's presentation?--Mustex (talk) 03:12, 31 December 2009 (UTC)
- It was only anti-creationist if you believe proving evolution to be anti-creationist. Otherwise, there was no mention of creationism except that, when Darwin proposed the Theory of Evolution, it went against the beliefs of the time. Beyond that, it is mostly about how genetics has, essentially, proven Darwin's theory and how it filled in the gaps that Darwin himself couldn't answer. Trust me, it's a very interesting documentary, even for those who already had a grasp of the proof behind evolution. My titling of this post as "The Creationist's Nightmare" is just a swipe at creationists who make internet videos talking about this or that being "the evolutionists' nightmare". Lord of the Goons The official spikey-haired skeptical punk 03:25, 31 December 2009 (UTC)
- Ah, ok, my initial reaction was "Does anyone other than a creationist not know that genetics confirm evolution?" so I wasn't sure.--Mustex (talk) 15:08, 31 December 2009 (UTC)
- A lot of Dawkins' last series on Darwin was like that. He did a lot of genetics and went around genetics labs to look at the equipment (not in massive detail, I should add), but he certainly put the "anti-creationist" slant on it, which is to be expected, considering. But when biologists get all doe-eyed and say that genetics is one of the most beautiful confirmations of an old theory ever, they're not wrong. sshole 08:19, 2 January 2010 (UTC)
I now hate Disney[edit]
Ok, a little while ago I was channel surfing and noticed that "Princess Protection Program" was on. I was like "Oh, yeah, lame jokes about a Princess in hiding trying to blend in and acting very stereotypical. Probably be good for a few groaners, and a post-movie rant about how illogical it was." For the most part, it lived up to those expectations (She didn't know what a hamburger was? Doesn't she have TV? Hell, they serve hamburgers in some fancy restaurants; at most, polite society would have taught her to cut it in half. And what evil dictator would actually be stupid enough to PERSONALLY fly to the United States, which is offering sanctuary to his enemies and thus is clearly not on his side, and take armed troops into a high school prom? No government in the world would stand for that, us least of all!), but that's beside the point. The point is this: Why did they make one of her defining "weird" traits her ability to speak six languages? Granted, it probably would have made more sense to say her parents were in the army so she'd been all over the world, rather than just telling people she was "from Iowa," but is Disney actually suggesting that if kids try to learn new languages they'll be outcasts!?! The most disturbing part of this is that the girl watching out for her got mad when she spoke Spanish to a Hispanic lunch lady. Yes, because we certainly can't stand to learn the second most common language in our own country, and those of us who speak it fluently should never, EVER speak it as a courtesy to someone whose first language is Spanish! Fuck you Disney!--Mustex (talk) 03:12, 31 December 2009 (UTC)
- Hehehe, I think this is the first tirade you've gone on here, Mustex. But I agree. Only in the United States are we so ethnocentric that we are afraid to learn more than English. I, myself, speak 3 languages, and find it disturbing that most Americans barely know their first language. AnarchoGoon Swatting Assflys is how I earn my living 03:20, 31 December 2009 (UTC)
- I used to think I was fluent in obscenity, learned from some of the best. That got too easy, so now I'm working on my sly vulgarity chops. When I had a lot of non-anglophone co-workers, I figured the most courteous thing I could do here in the USA was to speak plain English to them. If I got a look that said ¿qué? I would shift to my broken Spanish, or sometimes Portuguese. Pretty soon they got used to it, and played along the way each one was most comfortable. Sprocket J Cogswell (talk) 03:36, 31 December 2009 (UTC)
- (1) It did not use to be common practice for people to know more than one language, and when one did learn other languages, it was a practical matter, enabling one to communicate while visiting the countries where the languages were spoken.
- (2) If immigrants are to be integrated here, they should have a good knowledge of the language of the land, and the only way they can get that is to be made to use it. ListenerXTalkerX 03:41, 31 December 2009 (UTC)
- True. I suppose, having been zu die Schweiz a couple of times, I just kinda think that Spanish should be made a second official language here, especially since the Latino population will, eventually, outnumber the caucasian population here in the United States. Punky Your mental puke relief 03:46, 31 December 2009 (UTC)
- There is no "official" language of the United States. In theory, the government could conduct its business in Basque and no one could stop them. That's why you will occasionally see petitions to make English the official language, so that the kids of illegal immigrants (who are citizens) would have to go to schools that taught only in English. --User:Theautocrat/Sig 03:51, 31 December 2009 (UTC)
- (EC) Most of those Latinos, however, will have been born here and will speak English natively. There is a German-American plurality in the country at the moment, but making German an official language is not discussed. ListenerXTalkerX 03:54, 31 December 2009 (UTC)
- Map of US majority ethnicity by state
- Sprocket J Cogswell (talk) 03:58, 31 December 2009 (UTC)
- See here. Tetronian you're clueless 04:01, 31 December 2009 (UTC)
- When the second paragraph of the "plot" subsection on Wikipedia starts with "inexplicably", you know you're onto a winner. sshole 08:22, 2 January 2010 (UTC)
Hovind's Thesis Redux
Patriot University has commented on Hovind's "thesis". I can't decide what the best part is; that the university does not have a copy of his thesis, that they think Wikilinks is part of Wikipedia, or the musical montage. - π 04:38, 2 January 2010 (UTC)
- You mean wikileaks, not wikilinks, and yes, that is so fucking funny we should put it on our main page. ħuman 04:47, 2 January 2010 (UTC)
- I'd have to agree with Huw here. That's comedic genius on a completely different level. Lord Goonie Hooray! I'm helping! 04:49, 2 January 2010 (UTC)
- (ECx2) You mean that article or the paper itself? If the paper, the problem is it wasn't the real thing, just the "rough draft". Which sucks, really, since I spent so much time reading it and bashing my head against the wall. Tetronian you're clueless 04:52, 2 January 2010 (UTC)
- (ECx9)"We at Patriot are astounded over this world-wide search for Hovind’s dissertation; website traffic from 93 countries!" "Hovind’s dissertation was part of a graduate “project”. Thus, the paper being posted online was only a portion of Hovind’s initial research notes for his dissertation requirements. It is obviously not a finished product." Obviously. What's so funny is that he doesn't have a "finished product" to present to counter the utter mockery and contempt he is getting for "His work since 1991 has been widely distributed and stands on it’s own and supercedes an earlier written dissertation." Nice find, Pi guy. ħuman 04:53, 2 January 2010 (UTC)
- "Eternity is longer than the 17 billion years evolution claims for the present age of the earth." Highly amusing, that level of incompetence. And, yes, audio with no "stop" button is a lower level of incompetence, but clueless nonetheless. ħuman 04:53, 2 January 2010 (UTC)
Apparently Ronald Regan was one of the founding fathers. - π 05:05, 2 January 2010 (UTC)
- I found the typos amusing. The title has one, and throughout the article they show that they've no idea how to use apostrophes. Their attempt to characterise criticism of Hovind and Patriot as some kind of hatred is amusing. By their logic our article on Expelled is an example of anti-semitism. Yup, nice find Pi. --Ask me about your mother 10:05, 2 January 2010 (UTC)
God damn (rant below)!
Okay, so as I have no family inside of 1000 miles from me, I have a long standing tradition of buying myself my own presents for Christmas. Among them, I got AC2. I thought it was a pretty cool game, nice graphics, good story (insert obligatory Assassin's Creed praise here). Then, after I beat it inside a week, I thought, "okay, that's cool", put it on a shelf, and will maybe play it one or two more times before I forget about it, and move on. After I beat an amazingly gorgeous game, I then proceeded to whip out my cell phone and begin gaming away on another game, namely Sonic the Hedgehog 2, a game from what, 1990? Then, it hit me. New games just aren't fun anymore. I derived many times more enjoyment from failing at a crappy port of a platformer older than 20% of the population than I did from totally owning at a hot, cool new game that was 10 times the price. What the hell is wrong with me!??!?! My "best games" list is all either over a decade old, or insanely obscure indie games. All of these new games I find either boring, repetitive, or unengaging. I suppose this is just because games nowadays are just too damn easy. The single greatest game I have ever played was Rogue (keep in mind, the graphics looked like what you would get if you held the shift key and mashed your keyboard a few times). I have never beaten it, after nearly two decades of concerted effort (Not quite true. I have beaten Rogue Touch once, but that is not a faithful port). It's the same with movies. Avatar was gorgeous, but my favorite movie is still Ingmar Bergman's The Seventh Seal, which is black and white. Is this just a symptom of me getting old, or has everyone experienced this at some time before? --User:Theautocrat/Sig 03:59, 26 December 2009 (UTC)
- Are you kidding? In my opinion the two best games ever are Starfox and Super Smash Bros. for Nintendo 64. And most of my favorite movies are pretty old too. And I'm still in public high school. Tetronian you're clueless 04:06, 26 December 2009 (UTC)
- I still play Doom, and have played no games that have come out since then...TheoryOfPractice (talk) 04:49, 26 December 2009 (UTC)
- Asteroids. Only game I ever cared for. Can still play it in my dreams. ħuman 04:52, 26 December 2009 (UTC)
- If you like Asteroids, have you ever tried Echoes? It's a bit on the shiny side, but it's really fun. Personally, I play WoW (yeah, yeah) and a variety of DS games and that's really about it. Liked the original Doom and Doom II a lot, though, and I'm fond of point & click games like Machinarium. --Kels (talk) 05:00, 26 December 2009 (UTC)
- Heh, "Yeah, it's a bit like Asteroids hyperactive, drug crazed brother displayed in blur-o-vision© and viewed through psychedelic sunglasses in a cheap nightclub." I just liked the game. Four buttons (go, left, right, shoot) and four "enemies" - big rocks, small rocks, big slow spaceship, small fast spaceship. Twas fun, it was. Many decades and quarters ago. ħuman 05:14, 26 December 2009 (UTC)
- Best games: Starfox 64 (1997), Baldur's Gate II (2000), and Max Payne (2001). I've played all kinds of newer games and older games, but I always return to these exemplars of the flight, RPG, and shooter genres. There's nothing wrong with being able to appreciate something done right.--Tom Moorefiat justitia 07:52, 26 December 2009 (UTC)
Oh, on the movie side of things, while I like a good spectacle now and again, I must say the movies I really respect are stuff like old Kurosawa, Citizen Kane and that ilk, and European art house stuff. But I've always loved that sort of thing, even when I was in my teens. So I dunno if it's getting old as being able to recognize good shit when you see it. --Kels (talk) 05:22, 26 December 2009 (UTC)
- I have fond memories of a bunch of Acorn Electron games from my childhood in the 80s (Chuckie Egg, Gisburn's Castle, Droid, Repton 3). Last year I found some Electron emulator software & some of these games online, & got addicted to them again for a while, but for some reason it made my laptop overheat, drove the fan really hard & knackered the battery, & neither have been the same since. :-( Regarding films, there are some I really love from the 50s/60s/70s (Kurosawa samurai films, Sergio Leone spaghetti westerns, Werner Herzog art house stuff) but I don't really watch many films much older than that, & a lot of my favourite movies are from the 90s & 00s. WèàšèìòìďMethinks it is a Weasel 11:38, 26 December 2009 (UTC)
- There is only one game I ever found interesting. But I'm a seriously oldpharrt. Once invited a guy to go sightseeing in a Skyhawk, some RL flying that is, and he said, sounding like Dracula (couldn't help it, he was Rumanian) "OK, I trust you. I've watched you play Tetris." Sprocket J Cogswell (talk) 16:44, 26 December 2009 (UTC)
That's only because Sonic 2 is a very good, classic platformer (although irritatingly difficult in the later levels, which is why I find Sonic 3 & Knuckles the best game in the series) that has therefore stood the test of time remarkably well. Even in the 90s, there were a lot of crappy games made and I swear the good/bad rate was about the same as today, not to mention that the games with pretty graphics were very shallow in gameplay even back then (Donkey Kong Country). Additionally, I don't understand why the fact that Sonic 2 costs considerably less today than when it was just released should have anything to do with its quality. Liking obscure indie games, though, is just called being pretentious. Vulpius (talk) 21:00, 26 December 2009 (UTC)
- Starfox 64 (old), God Hand (obscure) and Persona 4 (obscure at least around here) are my favorite games of all time. I actually would prefer playing them to a lot of the new games (if only I still had my N64)--Thanatos (talk) 02:54, 27 December 2009 (UTC)
- The N64 is undoubtedly the best game system ever devised. And it has the most unique (and ergonomic) controller of all time. Tetronian you're clueless 22:59, 27 December 2009 (UTC)
- I have been playing GTA:4 (San Andreas is still king) and AC2 (just can't get the hang of it). Still, Half Life 2 and Doom 3 are my tops. Aceof Spades 23:44, 27 December 2009 (UTC)
- Starfox and Goldeneye on the N64 - Best games ever. I agree that most games these days just have a kind of linear path through them and then you're done. Older games had something about them that made you want to go back again and again. I suspect it's the fact that you could just start from somewhere that you liked before (e.g. like on the Donkey Kong Country games) whereas the games these days are story based and so you'd have to go through all the dull shit and videos to get anywhere interesting. CrundyTalk nerdy to me 16:55, 29 December 2009 (UTC)
- Ah, I forgot about Goldeneye! Probably the best Bond shooter of them all, although "Nightfire" for PS2 isn't bad either. Tetronian you're clueless 15:26, 31 December 2009 (UTC)
- Goldeneye was great- the multiplayer mode was amazing. My favoritist games were on the NES, though. SMB3, Mike Tyson's Punchout, Final Fantasy, Dragon Warrior, and Crystalis. Corry (talk) 06:15, 2 January 2010 (UTC)
- You just had to talk about gaming, didn't you? Then again, I got my start in gaming when I was 4. My dad brought home an old Commodore 64 with some games he found in the junkyard, did a little bit of soldering, and booted it up. After that it was a matter of catching up and steady progression.
- I think the main draw as to why people like older games is their simplicity. There's no status screens, no maps, no side missions or annoying support cast to get in between you and the game. Then again, games can be used as a parallel to our overall technological advancement. 25 years ago the Nintendo was the height of home entertainment. We have come from 8-bit sprites and 30 screen frames of a level to high definition rendered models and kilometers of space per map, with multiple maps as standard, in a quarter of a century. Game narrative, sound composition and content also continue to improve at an alarming rate, with games now costing as much as some major motion pictures with years of development time put into them. It reminds me of the fact that a scientific calculator of today has more processing power than the computer that was in the Lunar Lander. On another note, I'm proud of you guys. God Hand? Crystalis? Goldeneye? People actually know about good games here! Then again, I play Klonoa for fun...I'm an Alpha Nerd. -- CodyH (talk) 01:52, 3 January 2010 (UTC)
- Klonoa!! I adore Klonoa, although I do get a giggle about what the pitch meeting must have been like. "Uh yeah, so it's about this dog kid who's a big Pac-Man fan and he inflates stuff...yeah, inflates..." Although when I played through the first time, I found the optional quest at the tower to be diabolically hard. Spent hours on that thing. --Kels (talk) 02:03, 3 January 2010 (UTC)
- I found the story deals with a lot of dark material. Betrayal, revenge, murder, deception, and that's just the first game. Later on they introduce a drunk psycho who puts your character in a coma...yeah, great material for kids.
- Another game I recommend is Cave Story. Easy to find, and there's a translation. It is fun, simple, and has one of the hardest 'secret' levels I have ever survived. Not as hard as THIS, however. -- CodyH (talk) 02:32, 3 January 2010 (UTC)
- God powers keep my pimphand strong. I love the ending song.--Thanatos (talk) 03:16, 3 January 2010 (UTC)
- Never did finish Cave Story. I couldn't beat the witch chick, even after a lot of tries. --Kels (talk) 03:37, 3 January 2010 (UTC)
- Shin Megami Tensei II is pretty advanced for an old SNES game. I really liked the 3D design of the dungeons. Was never released in NA because of the religious overtones (the final boss is God. You can team up with Satan or Lucifer to kill him. I have never been able to beat him. Cheap bastard)--Thanatos (talk) 03:45, 3 January 2010 (UTC)
Wikipedia has an SPOV, too!
If this is too serious for the Saloon bar, just tell me where to go so I don't make an ass of myself. I admit I'm new and I'm already getting my feet wet.
So I just got done reading the "Community Standards" article, and I took note of this. I think Wikipedia also has a de facto policy similar to SPOV (in the second sense) on here.
I know how to test this out, too. At the risk of getting blocked on Wikipedia (again, lol), I might make edits on there that fly right in the face of the scientific consensus on... anything. Preferably something that has a lot of political or religious contention (like global warming or evolution respectively), so I can find some fairly mainstream sources that I can cite to make the edits seem somewhat legit.
From there, I'll wait to see how long it takes before the edit gets reverted. I bet it will. No, seriously, I really think it will happen.
So what do you think? Sound fun? Macai (talk) 21:14, 29 December 2009 (UTC)
- It will certainly get reverted; it's happened to people like Andy and Ed already. Tetronian you're clueless 21:23, 29 December 2009 (UTC)
- Well, I'm glad that someone agrees with me, but... may I ask who Andy and Ed are? Did someone else make this observation already, or something? Macai (talk) 21:24, 29 December 2009 (UTC)
- It doesn't sound particularly like fun. And I don't think we'd want to give the impression that we were in favour of people making frivolous - or borderline vandal - edits at WP.--BobBring back the hat! 21:44, 29 December 2009 (UTC)
- @Macai: I mean Andrew Schlafly, founder of Conservapedia, and Ed Poor. Tetronian you're clueless 01:03, 30 December 2009 (UTC)
- This plan sounds even worse than one of Mustex's and he has some terrible plans. - π 04:31, 30 December 2009 (UTC)
- Why don't you tell me what's so terrible about it? And please try to refrain from "it's common sense and you're an idiot, so in conclusion your idea is terrible" type comments. Macai (talk) 05:12, 30 December 2009 (UTC)
- @Τe†rоиіαn: I figured that out a few minutes after you mentioned "Ed". I'm not really into Conservapedia, but after reading a few articles on here, it seems that it's a major topic. Macai (talk) 05:41, 30 December 2009 (UTC)
- It certainly is a major topic here. But my point is that they have tried to insert "unscientific" information into Wikipedia, and they failed miserably. That's why I think your experiment is useless - we already know that the Wikipedian community in general is pro-science, or at least regards scientific knowledge as trustworthy. Tetronian you're clueless 05:54, 30 December 2009 (UTC)
- Macai: basically it's a stupid idea. There's a zillion nutters already sticking non-scientific crap in all sorts of articles & getting stamped on. Unless you really believe something and have a lot of whacky "facts" at your fingertips, you're gonna get stomped straight away - why would you want that? I am eating & honeychat 06:04, 30 December 2009 (UTC)
- Well, the point would be to demonstrate that Wikipedia has the same "SPOV", in the second sense, as this site does. I mean, I don't even have to say anything that is untrue, just something that could possibly be interpreted to mean that the scientific consensus is wrong by some people. An example would be that global warming, so far, has peaked in 1998. This doesn't mean that there weren't mitigating factors causing 1998 to be the hottest recorded year so far, nor does it necessarily imply that 1998 will never be topped. However, I bet you anything matter-of-factly stating that 1998 is the warmest recorded year in "global warming"'s lead will be reverted, and fast. Why? Because there are people out there who will read it and think, "Ah hah! I knew global warming was horse shit, and now I have the proof!" Whether this is proof or not is irrelevant; Wikipedians don't want that idea getting into their heads thanks to their articles, and if that means omitting information, then so be it. For this reason, it's reasonable to believe that they are pushing a perspective. That perspective being the scientific consensus. Just like RationalWiki. Macai (talk) 06:53, 30 December 2009 (UTC)
- 1998 might be the hottest year on record but the overall trend is up, even from there. All you would be proving with that edit is you don't understand statistics. - π 07:00, 30 December 2009 (UTC)
- Hi Macai. If you feel that there are problems with WP's NPOV then the best place to take it up would probably be WP. While there you could presumably make whatever edits you wished on your own behalf to make your case. But I don't see any great groundswell of opinion here to take part in any such project.--BobBring back the hat! 07:04, 30 December 2009 (UTC)
- Wikipedia has policies against this sort of experimentation: WP:POINT and WP:HOAX. ListenerXTalkerX 07:08, 30 December 2009 (UTC)
- The first one in a nutshell: "If you disagree with a proposal, practice, or policy in Wikipedia, disruptively applying it is probably the least effective way of discrediting it – and such behavior may get you blocked." I'm not disruptively applying anything. In fact, all I'm doing is causing them to disruptively enter into a revert war if I choose to press the issue. The second one in a nutshell: "Do not deliberately add hoaxes, incorrect information, or unverifiable content to articles." I'm not adding a hoax, incorrect information, or unverifiable content. Besides, even if I was violating Wikipedia's policies, that wouldn't make my assertions factually inaccurate. Macai (talk) 07:20, 30 December 2009 (UTC)
- @Bob M: I happen to have no problem with Wikipedia's NPOV policy. I have no problem with anything, here or there. Where in this thread of discussion did I claim to have a problem with their policies? You must be a psychic or something, because I never asserted that their position was wrong, just that their position is the same as RationalWiki's, and provided a damn good way to test that claim out. Macai (talk) 07:25, 30 December 2009 (UTC)
- @Π: What relevance does the overall trend have to do with anything? The edit happens to make no claim that global warming is not happening, or that it's not caused by humans. So in actuality, I'd be proving that they want to omit completely innocuous information because they think it might sway someone's opinion in the "wrong" direction. It just presents the current warmest year on record in case someone might want to know. You know, kind of like how they do the same thing in the "human height" article by saying that Leonid Stadnyk is the tallest living man. Macai (talk) 07:15, 30 December 2009 (UTC)
- You would be omitting information too (that the overall trend is still going up) and implying that global warming has reached its peak (which you admitted was your goal). -- Nx / talk 10:19, 30 December 2009 (UTC)
- Also it is already mentioned in the article. - π 10:44, 30 December 2009 (UTC)
- As I said before - I am sure you are free to do at WP whatever you personally feel is appropriate. It just doesn't look like you're getting any help or support here.--BobBring back the hat! 07:27, 30 December 2009 (UTC)
- What I am getting, however, are strawman arguments, non sequitur responses, and baseless dismissals of the idea. I mean, this is RationalWiki. You'd expect the people on here to be... well... rational. I guess there's just no shortage of irony these days, though. Macai (talk) 07:49, 30 December 2009 (UTC)
- Nobody is interested in what you put in Wikipedia articles, do it or not, nobody cares. - π 09:00, 30 December 2009 (UTC)
- Macai, just do it and tell us how it comes out. Tetronian you're clueless 16:20, 30 December 2009 (UTC)
"For this reason, it's reasonable to believe that they are pushing a perspective. That perspective being the scientific consensus." And is that some sort of problem or something? That's what I would expect an encyclopedia to present. Would someone rickroll this delightful character already? ħuman 19:39, 30 December 2009 (UTC)
- I just don't see the point of the experiment. All you will do is waste time and effort. You'll only prove what is already pretty apparent. If you want to experience WP vetting and editing practices first hand all you need to do is observe a collection of pseudoscience or conspiracy theory articles for a period of time. The sorts of science vs. woo vs. superstition debates and activities are on-going and long standing. Also, and I don't mean this as offensively as it sounds, aside from common courtesy there is another de facto policy in place over at WP that should prevent you from making erroneous edits, that is "Don't be a dick", which is precisely what you would be doing if you inserted content you knew to be bad into WP for any reason. It's a wonderful reference source for Christ's sake. Show some respect. Me!Sheesh!Mine! 16:00, 31 December 2009 (UTC)
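(Aside on the statistics point raised in this thread: a single record year does not contradict an overall upward trend. Here is a minimal sketch in Python using made-up anomaly numbers, not real temperature data; the year values, the 0.02/year trend, and the 0.25 spike are all hypothetical, chosen only to show that a series can peak in 1998 and still have a positive least-squares slope.)

```python
# Hypothetical illustration, NOT real climate data: a single record year
# does not contradict an upward trend.

def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

years = list(range(1990, 2010))
# Steady warming of 0.02 units/year, plus a one-off 0.25 spike in 1998
# (standing in for an unusually strong El Nino year).
anomalies = [0.02 * (y - 1990) + (0.25 if y == 1998 else 0.0) for y in years]

hottest_year = years[anomalies.index(max(anomalies))]
trend = linear_slope(years, anomalies)

print(hottest_year)  # 1998 -- the single hottest year in the series
print(trend > 0)     # True -- yet the fitted trend is still upward
```

Even with the spike pulling the fit slightly, the slope stays close to the underlying 0.02/year, which is the point: one outlier year says nothing about the trend line.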
Pick a better movie than Avatar for comparison.WilhelmJunker (talk) 16:56, 2 January 2010 (UTC)
Farewell Internet Explorer!
I abandon you for Google Chrome! Oh brave new world, that has such people in it (and faster while at it...) ĴάΛäšςǍ₰ thinking of what to say next 04:30, 31 December 2009 (UTC)
- I have to ask: were you quoting the Shakespeare line directly, or were you quoting Aldous Huxley's "Brave New World" (or perhaps Edwin Abbot's "Flatland") which references the Shakespeare line? Tetronian you're clueless 04:33, 31 December 2009 (UTC)
- (EC)Just be sure to use Ccleaner after you delete Internet Exploder, or else it will stick around in your system, whether you like it or not. The Goonie 1 What's this button do? Uh oh.... 04:35, 31 December 2009 (UTC)
- Don't delete IE! You will still need it on occasion, and it's a good backup.
- Chrome is way better, by the by. It's what I use. I especially like the developer version since I can add the RSS and Gmail extensions.--Tom Moorefiat justitia 04:37, 31 December 2009 (UTC)
Miranda,
"O, wonder! How many goodly creatures are there here! How beauteous mankind is! O brave new world That has such people in't!"
- And I am not going to clean IE completely from my system, it is there, just in case I need it. And as for Firefox, used it, tried it, compared it, and liked Chrome more than all the others. ĴαʊΆʃÇä₰ no really. 04:42, 31 December 2009 (UTC)
- Thanks. And I have to try Chrome, I use Firefox but I admit I haven't tried many other browsers. Tetronian you're clueless 04:44, 31 December 2009 (UTC)
- (EC) I hear good things about it, although I am not at all a fan of its "blurred distinction between 'online' and 'offline'" design philosophy. Perhaps sometime I will compile the open-source version, Chromium, and give it a whirl.
- The first Web browser I used was Lynx, in prehistoric days. Then Mosaic, then Netscape, then the Mozilla Suite (from beta on up), and then Firefox. ListenerXTalkerX 04:52, 31 December 2009 (UTC)
- You can't use Windows Update without Internet Explorer. - π 05:20, 31 December 2009 (UTC)
- Chrome is the shit. However I can't figure out how to download links, so I use IE for that, and I have Firefox for screengrab. Chrome all the way, baby. --User:Theautocrat/Sig 05:41, 31 December 2009 (UTC)
- In Chrome: Right click and choose "save link as."--Tom Moorefiat justitia 05:44, 31 December 2009 (UTC)
- Chrome is open-source as well. "Chromium" is just the open-source release of all Chrome code. I believe it's done that way for copyright purposes.--Tom Moorefiat justitia 05:43, 31 December 2009 (UTC)
- Besides Google branding, Chrome contains some proprietary stuff for automatic updates. ListenerXTalkerX 05:55, 31 December 2009 (UTC)
- Google's nasty habit of installing a software update daemon is what puts me off using any of their stuff on my Mac. It's fair enough for applications to check for updates on launch, but there's little justification in having a background process that's always running. --Ask me about your mother 15:35, 31 December 2009 (UTC)
- Which is why you'd use chrome/chromium. --91.145.89.78 (talk) 16:21, 1 January 2010 (UTC)
- I also recently switched to Chrome. I stayed with IE a long time but the last update did some really annoying things, especially for wiki-editing, & having made the change I'm now glad to be rid of it. Personally I've never much liked how things look in Firefox. Chrome seems pretty neat. WèàšèìòìďMethinks it is a Weasel 18:15, 31 December 2009 (UTC)
You need to keep IE for sites that were written by noobs but have vital data. I cite this, where to get instructions to mount a second-generation Fisher plow on my 1978 Chevy truck I had to use IE, and the info was invaluable, if not essential. ħuman 08:39, 1 January 2010 (UTC)
Π: Vista and 7 don't use IE to access Windows Update. They use a separate program for that.
TomMoore: Pretty sure Chromium is under a BSD license, which doesn't require the source code to be released as with Firefox's licenses, so the release of the source code isn't for copyright purposes. --GastonRabbit (talk) 02:54, 3 January 2010 (UTC)
Ethical dilemma - please vote/opine
So I'm going to be as vague here as needed, but I'll try to give the relevant information. Essentially, I was trolling a racist forum. I sign up as a user, make a couple of generic racist posts, nothing too eyebrow raising. Then I thought, what is the likelihood of some of these users using certain racist slang as their password? Lo and behold, I hack into the account of one of the major contributors. Then by finding out information there, I piece together that this guy is a senior VP for a decently sized company (his company got bought out), is a Catholic, and has donated a lot to the NRCC. So what do I do? Do I out this guy in some clandestine way? ConservapediaEditor (talk) 07:44, 31 December 2009 (UTC)
- My philosophy is that one should not say anything anonymously on the Internet that one would not care to shout in public from a podium, so there is no ethical reason why you should not blow this fellow's cover, if you have not expressly agreed to keep it. However, by the same tokens, if you are going to out him, you should also be willing to take any legal or other penalty for hacking his account. ListenerXTalkerX 08:02, 31 December 2009 (UTC)
- At least do everything from a library computer.--75.107.29.178 (talk) 17:21, 31 December 2009 (UTC)
- You're a very naughty boy. I can't condone hacking other people's accounts but now that you have you might as well put it to some good use. But take care, these people can be really nasty - Catholic or not. ГенгисRationalWiki GOLD member 17:27, 31 December 2009 (UTC)
- All this is very dodgy, ethically. After all, you Yanks are all about free speech, however unpleasant. Unless this guy has done something illegal I'm not sure you can justify outing him. It's difficult to judge without knowing exactly which site you're referring to, but my gut reaction says 'no'. Bob Soles (talk) 17:29, 31 December 2009 (UTC)
- The only problem I see is that the guy could also be a deep-cover parodist (unless you somehow have very compelling evidence to the contrary). But I guess Catholics aren't allowed to do the parody shit? (Then again, I've assumed identities of real random people with socks before.) Out his ass. Maybe wikileaks? — Sincerely, Neveruse / Talk / Block 17:30, 31 December 2009 (UTC)
- "After all, you yanks are all about free speech, however unpleasant. Unless this guy has done something illegal I'm not sure you can justify outing him." Freedom of speech does not mean freedom to speak anonymously. It does, however, mean the freedom to speak the truth (such as the identity of this fellow). ListenerXTalkerX 17:50, 31 December 2009 (UTC)
- I'm not sure whether this is a real situation or a hypothetical one CPEditor has created to sound out RWians' ethical attitudes. Anyway, I think outing this guy as a racist would be reasonable, if it could be done safely. Outing people's sexuality or private life is a different matter, but if somebody is in an influential position and is actively racist, it should be brought to light. Also, the guy must know the risk of exposure when he gets involved with that kind of site. The real problem is the danger you'll be putting yourself in when you out the guy - that you could risk being traced, losing your privacy, suffering some kind of retaliation. Be very careful not to do anything that could lead these guys to you. Or to RationalWiki; we really don't need that. WèàšèìòìďMethinks it is a Weasel 18:47, 31 December 2009 (UTC)
- You could send the story to a newspaper, along with his username and password, and either request anonymity or make an e-mail account specially for the purpose. EddyP (talk) 18:58, 31 December 2009 (UTC)
- And are you sure that the two are one and the same? As in, more evidence than their having the same name. EddyP (talk) 19:00, 31 December 2009 (UTC)
EC Tough one. You got the information if not illegally, then certainly unethically, and I'm not sure how I feel about "outing" someone, given that. To me, that aside, it might boil down to what dude says online--is he just making vague "I hate the (minority of your choice)" comments, or is he crossing the line into unadulterated hate crime--"They all should die, and here's how we should do it" stuff? Death threats are not covered by free speech, AFAIK....TheoryOfPractice (talk) 19:04, 31 December 2009 (UTC)
- I agree with neveruse (the guy might be a "plant" just like you are) and EddyP - the appropriate step, if any are to be taken, is to actually prove it is he. A good reporter will essentially ignore the username/pword and posts (as inappropriately obtained information), but might look at the guy's real life activities and see if there is a legitimate story that can be backed up with appropriately obtained information. ħuman 21:15, 31 December 2009 (UTC)
- Paranoia is working with you. If you want to do something on the internet and have no risk of getting caught you must do it. --208.75.212.156 (talk) 16:14, 1 January 2010 (UTC)
- I would check his position in the company first. Is he in charge of hiring or firing people? What ethnicity are his employees? Are there token members on his staff? Those kinds of questions. Maybe get in contact with a few employees first. He might leave his racism at the door for all you know. If he doesn't, you have my blessing--Thanatos (talk) 17:42, 1 January 2010 (UTC)
- Human is right. You got some info by very dubious means, so you have to ignore it and try and substantiate the claim legitimately. It's kind of like when the police might catch out a drug dealer with entrapment; they can't use it as evidence because it's illegitimate and unethical, and hacking someone's password is equally unethical. Going ahead and "outing" someone is your own decision and you have to deal with it in the end, however. sshole 08:14, 2 January 2010 (UTC)
I believe what you did is unlawful and that you should stop now. 16:40, 2 January 2010 (UTC)
- I must admit that had crossed my mind as well. I am presuming that the computer system in question is based in a country where unauthorised access is illegal. Lily Inspirate me. 16:57, 2 January 2010 (UTC)
- Is it really unauthorized access if you login (!, you knew the username+password, it's as much your account as it's anyone else's) to an account that doesn't consist of much more than a username and a signature? I wonder if any ISP keeps extensive enough logs that they could disprove you if you claimed you ran a proxy and burned your hard-drive to random noise after you suspected it was hacked? --91.145.73.149 (talk) 18:49, 2 January 2010 (UTC)
- That is putting far too much thought into it... Anyway, it's also an ethical issue as much as a legal issue. sshole 19:35, 2 January 2010 (UTC)
- I would imagine that obtaining the information in that way would be illegal. Using it would also be illegal if only for copyright reasons. I presume this is why you are considering using it in a "clandestine" way - so as to avoid any legal come-back to you. So the legal case is clear. As for the ethical one - could you clarify your motivation? Is it to damage the person? To make the world a better place? To have fun at his (?)expense? To just see what happens? To support your political position? To get his job?--BobBring back the hat! 21:42, 2 January 2010 (UTC)
- Cough* Wikileaks. NorsemanCyser Melomel 21:48, 2 January 2010 (UTC)
- In my location your hacking the account is a criminal matter. Ethically you should have known it was improper conduct. Since neither action seems to bother you, why should you have a problem using that information? Bundle it up and sell it to a muckraking newspaper. Hamster (talk) 22:07, 2 January 2010 (UTC)
Who is More Admired?[edit]
Glenn Beck or the Pope?
The answer may surprise you. MDB (talk) 14:52, 2 January 2010 (UTC)
- Is this to say that the population of crazy people and the population of catholics doesn't have as big an intersection as you might at first think? --JeevesMkII The gentleman's gentleman at the other site 15:17, 2 January 2010 (UTC)
- Holy fuck, that's scary stuff. Although I suppose that since the US has far fewer Catholics than Protestants, comparing Beck's popularity to the Pope's in the US is very different than it would be in other countries. Tetronian you're clueless 16:40, 2 January 2010 (UTC)
- "...was also more admired by Americans than Billy Graham and Bill Gates, not to mention Bill Clinton and George H.W. Bush. In Americans' esteem, Beck only narrowly trailed South Africa's Nelson Mandela, the man who defeated apartheid." Never mind the Pope. TheoryOfPractice (talk) 16:44, 2 January 2010 (UTC)
- Not really "far fewer". "Evangelical percentage of the population at 26.3%; while Roman Catholics are 22% and Mainline Protestants make up 16%." Wp. Andy's nominally a RC (?) but who would he vote for? I am eating & honeychat 16:54, 2 January 2010 (UTC)
- I am waiting for Andy to be excommunicated.--Thanatos (talk) 17:29, 2 January 2010 (UTC)
- Heh! very difficult. I forget which Catholic atheist comedian (?) said: "A catholic who joined the Taliban would only be a bad catholic" I am eating & honeychat 17:37, 2 January 2010 (UTC)
- Would be funny though, and Andy would lose a lot of support from the good Christians on his site. Goes from religious(snort) man to insane cult leader instantly--Thanatos (talk) 18:34, 2 January 2010 (UTC)
- Unfortunately the article is subscription only for me and I have never heard of Glenn Beck so my options for commenting are limited.--BobBring back the hat! 19:24, 2 January 2010 (UTC)
Battle Royale[edit]
I finished reading it today and I have added it to my top 3 novels, alongside Salem's Lot and Black Like Me. The one thing I have to say about it is that despite the heavy violence, I could see conservatives pushing this book (esp now that Obama is pres).--Thanatos (talk) 06:40, 2 January 2010 (UTC)
- Haven't read it but didn't really get into Salem's Lot - much preferred The Shining. Acei9 07:08, 2 January 2010 (UTC)
- I never read the book, but the film is a Japanese modern classic and I have it on DVD.--Tom Moorefiat justitia 22:07, 3 January 2010 (UTC)
Ben Stein Commits the Greatest Sin Imaginable for a Republican...[edit]
... and calls for higher taxes on the wealthy, like him and Warren Buffett.
How long till the apostasy trial is held? MDB (talk) 14:04, 2 January 2010 (UTC)
- Burn the witch!! Burn the witch!! Tetronian you're clueless 14:22, 2 January 2010 (UTC)
- That article appears to be about four years old. Perhaps the pitchfork wielding villagers got lost on the way to Castle FrankenStein...--sloqɯʎs puɐ suƃısuɐɪɹɐssoʎ 16:51, 2 January 2010 (UTC)
- I will say it's pretty amazing the way he manages to adopt the essential liberal idea of taxation (equal burden on everyone) while acting like he discovered it. Next up: Ben Stein discovers gravity pulls things down.--Tom Moorefiat justitia 18:51, 2 January 2010 (UTC)
- To think the guy who wrote that article went on to "host" Expelled, and lose that cozy NYT gig for doing TV ads... ħuman 22:30, 2 January 2010 (UTC)
Dan Brown rehash[edit]
I just read his new one and all I can say is holy fuck, he went out of control with the pseudoscience this time. Tetronian you're clueless 16:44, 2 January 2010 (UTC)
- That's what I keep saying.--BobBring back the hat! 19:19, 2 January 2010 (UTC)
- Trust you didn't put money in the guy's pocket by buying the "book"? I am eating & honeychat 20:11, 2 January 2010 (UTC)
- He should do a Twilight book. Totnesmartin (talk) 20:33, 2 January 2010 (UTC)
- @Toast: I did buy the book. It wasn't a total loss because I was entertained by the suspenseful parts, but I didn't like the fact that he was peddling woo the entire time. Tetronian you're clueless 21:30, 2 January 2010 (UTC)
- He got nothing from me.--BobBring back the hat! 21:33, 2 January 2010 (UTC)
- Is peddling woo in the name of fiction a bad thing? Is Night of the Living Dead a bad film because there's no such thing as zombies? Dan Brown's work should stand or fall as fiction - he's no David Icke. He sees an intriguing idea and thinks "that'd make a good novel" - it's just a shame that said novel turns out not to be good. Totnesmartin (talk) 22:33, 2 January 2010 (UTC)
- As a novel it was good. But usually when the author gets too preachy the book sucks; that's exactly what happened. I liked the suspense and plot twists, but the POV-pushing was annoying. Tetronian you're clueless 05:04, 3 January 2010 (UTC)
- Throughout the novel, but especially right at the end when the story was over and the plot finished, we are treated to a lot of "true" woo. I won't say who was in the conversation receiving the "straight dope" so as not to spoil a plot-line, but it was simply a woo-fest.--BobBring back the hat! 07:48, 3 January 2010 (UTC)
- Agreed. And "Noetic Science"?? Surely he knows that it is utter bullshit. Tetronian you're clueless 14:47, 3 January 2010 (UTC)
- I find Dan Brown to be formulaic, but I definitely didn't read 'Deception Point' or 'DaVinci Code' for their scientific merit. Any fiction book, regardless of how well-versed the author is on his field of choice, should have the facts it presents scrutinized before use. Heinlein was an accomplished engineer, but I wouldn't use his books for engineering references. -- CodyH (talk) 16:00, 3 January 2010 (UTC)
- Ah, but at least Heinlein was using something resembling real science, and not unadulterated woo. (To be fair, Brown might assert his science is just as real as Heinlein's, though.) I've heard conflicting reports about just how much of his "science" Brown actually believes. MDB (talk) 20:41, 3 January 2010 (UTC)
- Yes, but Heinlein could construct a novel with a plot other than "holy shit, let's run around an ancient city and uncover secrets!1!!11!1 And there's a secret orginization that dosen't want us to!!2!1~!!1 ONOZ!1!1!1". --User:Theautocrat/Sig 22:13, 3 January 2010 (UTC)
Opportunity knocking[edit]
I see there's an article here on Voltaire. How about Rabelais, or Laurence Sterne (had him a book burned, he did) or the Scriblerians, those scalawags? You know you want to write some... Sprocket J Cogswell (talk) 18:09, 3 January 2010 (UTC)
- Put 'em on the To Do List; that's where I usually get my article suggestions when I feel like being useful.--Tom Moorefiat justitia 22:06, 3 January 2010 (UTC)
- That's where I put the ones I feel like other people doing :) Totnesmartin (talk) 09:39, 4 January 2010 (UTC)
Greatest Liberal Films of all time[edit]
Moved to Forum:Greatest_Liberal_Films--Tom Moorefiat justitia 22:24, 3 January 2010 (UTC)
A fun story[edit]
A couple weeks plus a year old, but still nice. [1] ħuman 23:34, 3 January 2010 (UTC)
PSU[edit]
We've been tidying up (new year's resolution bleurgh!) - among other things: 23 power supplies/chargers which have outlived the things they were supplying/charging. The connectors are all different sizes as are the voltages/amperages. What to do with 'em? I am eating & honeychat 22:00, 1 January 2010 (UTC)
- Donate them to the Goodwill (or whatever the equivalent is across the pond).--Tom Moorefiat justitia 22:10, 1 January 2010 (UTC)
- No one else will need them, either. Get them to someone who can recycle them for the copper & metal plates. ħuman 22:45, 1 January 2010 (UTC)
- I've just closed up my overseas "office" and I'm trying to cram everything into a tiny box room so have the same problem with redundant computer equipment (including PSUs). Our local council recycling unit has facilities where they will accept this stuff and extract the useful bits. However, it really is an indictment of the electrical industry that there are very few standards for DC power supplies regarding connectors, polarity or current. Even my brand-new mobile phone doesn't have the micro-USB connector which is supposed to be the coming standard. ГенгисRationalWiki GOLD member 11:03, 2 January 2010 (UTC)
- Another item that it would be nice to see industry-wide standards for is the rechargeable batteries on things like cordless drills. ħuman 22:39, 2 January 2010 (UTC)
- Standardizing wall-wart power supplies and rechargeable batteries are both wonderful ideas from the consumers' point of view, but diversity brings the freedom to innovate. If I carry the standardization bit to a certain logical conclusion, we might all still have no choice but to use hard wired telephone wall access points (can't really call them connectors) as big as the housing of some of the smaller phone chargers. Sprocket J Cogswell (talk) 00:01, 3 January 2010 (UTC)
- Well, I only call for it because both appear to be fairly "mature" technologies. It might even add some competition to the "spare battery" market. I have a phone from Sweden (the one in my red telephone picture) that has a plug you could run 30 amps through. Seriously. ħuman 00:43, 3 January 2010 (UTC)
- @ Sprocket: I fail to see how innovation is stifled by adoption of an industry standard. Rechargeable batteries are already standardised, open up a laptop battery and you find it comprised of standard cells. PCs largely run at 12V/5V/3.3V, consumer batteries (alkaline or rechargeable) are 1.5V or 9V. Most of these technologies are based on technology standards in the first place (land-line phone connectors were largely about monopolies preserving their control over national markets). The likes of Targus manage to produce power supplies which power a whole assortment of appliances just by changing the connector. Lily Inspirate me. 11:01, 3 January 2010 (UTC)
<--Agreed, my text-bite was hasty and badly aimed. It is well-known to be a Clever ThingTM to design electronic hardware using as many off-the-shelf parts as may be. Been there, done that, both analog and datacomm. Fat yellow-wire Ethernet, anyone? I'm still a bit astonished at the punch a little switch-mode PSU can deliver, compared to the chunky iron/copper/rectifier/capacitor jobbies of the not so distant past. Mass applications of AC/DC converters would do well to be standardized, but some gadgets need something different than several hundred mA at 5V. Scooting a printer carriage back and forth takes more juice than charging an mp3 player or cell phone, and so on, so there has to be a variety of sizes to suit various applications. The ability to design consumer goods to run on low-energy juice, leaving the zapful stuff at the wall, with the accompanying regulatory hoops consolidated there, as well as the adaptation to different national mains standards, comes in really handy, but that strays from our current topic...
Sadly, industry has a bit too much resistance to re-using bits that were Not Invented Here, so on it goes. Bigger cordless power tools, with ever-increasing battery joltage, are meant to appeal to a certain mindset, and the market has shown that approach works. Wouldn't want to be seen hanging fixtures on the wall with a girly-looking drill, now, would you? I know guys who think like that. Bigger, fresher, more ostentatious works in some markets, while slim and sleek does the job in others. There I go with that dang diversity thing again, but with diversity comes robustness, and in a number of ways. Sprocket J Cogswell (talk) 15:06, 3 January 2010 (UTC)
P.S. @Human: be thankful for IEC-320 connectors and universal-alimentation power supplies. Now quit your kvetching and get back to work, you godless heathen. Sprocket J Cogswell (talk) 15:21, Sunday, 3 January 2010 (UTC)
- By the way, part of what got me
whining thinking about this is last summer I was looking into buying a cordless caulking gun. Only one brand was available at the big box HI store. The tool was about fifty bucks, and came without a battery. Battery + charger was another fifty-sixty bucks, and of course that tool would be the only one the battery would work in (unless I bought more of that brand's tools...). I routinely keep a 14v cordless drill charged up. I am on my second cordless hedge trimmer (first went blunt, blade cost = tool cost...), and none of these toys can share batteries which is just sad. If they could all share batteries, I'd need at most 3 on hand, and one would always be freshly charged. I could also add to my tool arsenal without adding yet more chargers and batteries. By the way, the charger I use on my B&D drill battery is a "uni-volt" thing - no transformer anywhere, will charge any battery you can plug into it without overheating and killing them (the charger that came with the drill killed one of the two batteries it came with, the univolt one came with an older drill I picked up used long ago). Now of course, over time, the standards would likely change as tech improves and things can be smaller, but if there is only one or two legacy formats, they are more likely to be supported (I have a nifty "toy" in my desk drawer composed of all the adapters that came with new keyboards that extends almost from USB to DB9 serial). For batteries all that has to be "standard" is how they "attach" and where the contacts are, anything else could be up for grabs, even voltage, for most power tools.
- As far as the IEC cords, what's funny is that since all new toys come with their own new power cord, most of us now have a collection of 20-30 in a box. Potentially much more useful than the other box full of PSU warts, but there is, of course, no "demand" for them once one owns two spares.
- What really pisses me off is the extinction of the standard DIN dashboard hole for car radios. I know part of it is due to the radio being integrated with the GPS, HVAC, and coffee maker, but still. ħuman 23:47, 3 January 2010 (UTC)
- I think another issue (certainly in the UK) is that custom built-in units have no resale value and thus deter theft. Certainly during the 80's people would be buying knocked-off radios in pubs to replace the ones that had been stolen from their own vehicle. A radio version of musical chairs. Lily Inspirate me. 19:08, 4 January 2010 (UTC)
https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive46
I have the following list
bar = ['a','b','c','x','y','z']
What I want to do is assign the 1st, 4th and 5th values of bar to v1, v2, v3. Is there a more compact way to do that than this:
v1, v2, v3 = [bar[0], bar[3], bar[4]]
Because in Perl you can do something like this:
my($v1, $v2, $v3) = @bar[0,3,4];
You can use operator.itemgetter:
>>> from operator import itemgetter
>>> bar = ['a','b','c','x','y','z']
>>> itemgetter(0, 3, 4)(bar)
('a', 'x', 'y')
So for your example you would do the following:
>>> v1, v2, v3 = itemgetter(0, 3, 4)(bar)
Assuming that your indices are neither dynamic nor too large, I’d go with
bar = ['a','b','c','x','y','z']
v1, _, _, v2, v3, _ = bar
Since you want compactness, you can do something like this:

indices = (0,3,4)
v1, v2, v3 = [bar[i] for i in indices]

>>> print v1,v2,v3   # or print(v1,v2,v3) for Python 3.x
a x y
In numpy, you can index an array with another array that contains indices. This allows for very compact syntax, exactly as you want:
In [1]: import numpy as np

In [2]: bar = np.array(['a','b','c','x','y','z'])

In [3]: v1, v2, v3 = bar[[0, 3, 4]]

In [4]: print v1, v2, v3
a x y
Using numpy is most probably overkill for your simple case. I just mention it for completeness, in case you need to do the same with large amounts of data.
Yet another method:
from itertools import compress

bar = ['a','b','c','x','y','z']
v1, v2, v3 = compress(bar, (1, 0, 0, 1, 1, 0))
In addition, you can ignore length of the list and skip zeros at the end of selectors:
v1, v2, v3 = compress(bar, (1, 0, 0, 1, 1,))
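For reference, the approaches above can be checked side by side. This summary uses Python 3 syntax; the variable names simply mirror the question:

```python
from operator import itemgetter
from itertools import compress

bar = ['a', 'b', 'c', 'x', 'y', 'z']

# operator.itemgetter builds a callable that picks several indices at once
v1, v2, v3 = itemgetter(0, 3, 4)(bar)
assert (v1, v2, v3) == ('a', 'x', 'y')

# plain unpacking with throwaway names (fine for small, fixed-length lists)
v1, _, _, v2, v3, _ = bar
assert (v1, v2, v3) == ('a', 'x', 'y')

# a list comprehension over a tuple of indices
v1, v2, v3 = [bar[i] for i in (0, 3, 4)]
assert (v1, v2, v3) == ('a', 'x', 'y')

# itertools.compress with a boolean selector mask
v1, v2, v3 = compress(bar, (1, 0, 0, 1, 1, 0))
assert (v1, v2, v3) == ('a', 'x', 'y')
```

All four agree; itemgetter and the comprehension generalize most easily to dynamic index lists.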
https://techstalking.com/programming/python/compact-way-to-assign-values-by-slicing-list-in-python/
* Steven Rostedt <rostedt@goodmis.org> wrote:

> > > Index: linux-trace.git/include/linux/ring_buffer.h
> > > +enum {
> > > +	RB_TYPE_PADDING,	/* Left over page padding
> >
> > RB_ clashes with red-black tree namespace. (on the thought level)
>
> Yeah, Linus pointed this out with the rb_ static function names. But since
> the functions are static I kept them as is. But here we have global names.
>
> Would RNGBF_ be OK, or do you have any other ideas?

that's even worse i think :-/ And this isnt bikeshed-painting really, the
RNGBF_ name hurts my eyes and RB_ is definitely confusing to read. (as the
rbtree constants are in capitals as well and similarly named)

  RING_TYPE_PADDING

or:

  RINGBUF_TYPE_PADDING

yes, it's longer, but still, saner.

> > too large, please uninline.
>
> I calculated this on x86_64 to add 78 bytes. Is that still too big?

yes, way too big. Sometimes we make savings from a 10 bytes function
already. (but it's always case dependent - if a function has a lot of
parameters then uninlining can hurt)

the only exception would be if there's normally only a single
instantiation per tracer, and if it's in the absolute tracing hotpath.

> > hm, should not be raw, at least initially. I am 95% sure we'll see
> > lockups, we always did when we iterated ftrace's buffer
> > implementation ;-)
>
> It was to prevent lockdep from checking the locks from inside. We had
> issues with ftrace and lockdep in the past, because ftrace would trace
> the internals of lockdep, and lockdep would then recurse back into
> itself to trace. If lockdep itself can get away with not using
> raw_spinlocks, then this will be OK to make back to spinlock.

would be nice to make sure that ftrace's recursion checks work as
intended - and the same goes for lockdep's recursion checks. Yes, we had
problems in this area, and it would be nice to make sure it all works
fine. (or fix it if it doesnt)

> > > +	for (head = 0; head < rb_head_size(cpu_buffer);
> > > +	     head += ring_buffer_event_length(event)) {
> > > +		event = rb_page_index(cpu_buffer->head_page, head);
> > > +		BUG_ON(rb_null_event(event));
> >
> > ( optional: when there's a multi-line loop then i generally try to insert
> > an extra newline when starting the body - to make sure the iterator
> > and the body stands apart visually. Matter of taste. )
>
> Will fix, I have no preference.

clarification: multi-line loop _condition_. It's pretty rare (this is such
a case) but sometimes unavoidable - and then the newline helps visually.

> > > +	RB_TYPE_TIME_EXTENT,	/* Extent the time delta
> > > +				 * array[0] = time delta (28 .. 59)
> > > +				 * size = 8 bytes
> > > +				 */
> >
> > please use standard comment style:
> >
> >   /*
> >    * Comment
> >    */
>
> Hmm, this is interesting. I kind of like this because it is not really a
> standard comment. It is a comment about the definitions of the enum. I
> believe if they are above:
>
> /*
>  * Comment
>  */
> RB_ENUM_TYPE,
>
> It is not as readable. But if we do:

no, it is not readable. My point was that you should do:

> RB_ENUM_TYPE,	/*
>		 * Comment
>		 */
>
> The comment is not at the same line as the enum, which also looks
> unpleasing.

but you did:

> RB_ENUM_TYPE,	/* Comment
>		 */

So i suggested to fix it to:

+	RB_TYPE_TIME_EXTENT,	/*
+				 * Extent the time delta
+				 * array[0] = time delta (28 .. 59)
+				 * size = 8 bytes
+				 */

ok? I.e. "comment" should have the same visual properties as other
comments.

I fully agree with moving it next to the enum, i sometimes use that style
too, it's a nice touch and more readable in this case than comment-ahead.
(which we use for statements)

	Ingo
http://lkml.org/lkml/2008/9/27/191
#include <stdint.h>
#include <stddef.h>
av_tx_fn: Function pointer to a function to perform the transform.
The out and in arrays must be aligned to the maximum required by the CPU architecture unless the AV_TX_UNALIGNED flag was set in av_tx_init(). The stride must follow the constraints the transform type has specified.
Definition at line 102 of file tx.h.
AVTXFlags: Flags for av_tx_init().
Definition at line 107 of file tx.h.
av_tx_init(): Initialize a transform context with the given configuration. (i)MDCTs with an odd length are currently not supported.
Definition at line 228 of file tx.c.
Referenced by config_input(), config_output(), convert_coeffs(), decode_init(), load_data(), and siren_init().
av_tx_uninit(): Frees a context and sets ctx to NULL; does nothing when ctx == NULL.
Definition at line 213 of file tx.c.
Referenced by av_tx_init(), common_uninit(), config_output(), decode_close(), load_data(), siren_close(), and uninit().
https://www.ffmpeg.org/doxygen/trunk/tx_8h.html
The post Beautiful Language and Beautiful Code first appeared on Qvault.
“Dead Poets Society” is a classic film, and has become a recent favorite of mine. There’s a scene in particular that I enjoy, where Robin Williams’s character explains that it’s bad practice to use terms like “very tired” or “very sad”, instead we should use descriptive words like “exhausted” or “morose”!
I wholeheartedly agree with what’s being taught to the students in this scene. It’s tiresome to read a novel where the author drones on within the bounds of a lackluster vocabulary. This brings me to the point I wanted to emphasize in this short article:
Beautiful language and beautiful code are far from the same.
Beautiful language doesn’t simply communicate instructions from one human to another. Language well-used arouses emotions, illustrates scenery, exposes nuance, and can sing through rhyme and meter. Its purpose isn’t purely functional, it’s a rich medium of creative expression.
Beautiful code, at least by my standards, is purely functional. Its goal is to communicate exactly what it does. Emotion, motif, and narrative be damned. Beautiful code should be written so that machines carry out the instructions as efficiently as possible, and humans grok the instructions as easily as possible. The ideal piece of code is perfectly efficient and can be understood by any human that reads it.
Why shouldn’t code be more like its more expressive counterpart?
If you’re a part of /r/shittyprogramming on Reddit, you may have noticed several weeks back when the community became interested in writing the most ridiculous and inefficient way to calculate whether or not a given number is even. Here are some highlights.
const isEven = n => 'x'.repeat(n).replace(/xx/g, '') === '';
function isEven(number) {
    if (0 == number) {
        return true;
    } else if (number < 0) {      // I actually don't remember if JS has an absolute value function,
        return !isEven(number+1); // so this is how we handle negative numbers
    } else {
        return !isEven(number-1);
    }
}
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

char isEvenFile() {
    while (access("/tmp/isEven", F_OK))
        ; // We have to wait until the other process created the file
    FILE *comms = fopen("/tmp/isEven", "r");
    int c = EOF;
    while (c == EOF)
        c = fgetc(comms); // In case we were so fast that the other process didn't write to the file
    for (;;) {
        int newC = fgetc(comms);
        if (newC != ' ') // the number has been sent
            c = newC;
        else {
            FILE *out = fopen("/tmp/out", "w+");
            switch (c) {
                case '0': case '2': case '4': case '6': case '8':
                    fprintf(out, "b");
                    break;
                default:
                    fprintf(out, "a"); // printing a null char would be the end of the string.
                    break;
            }
            fflush(out);
            break;
        }
    }
    fclose(comms);
    exit(0);
}

char isEven(int n) {
    char input[10];
    sprintf(input, "%d", n);
    int pid = fork();
    if (pid == -1)
        return 2; // error
    if (pid == 0) {
        isEvenFile();
    } else {
        FILE *comms = fopen("/tmp/isEven", "w+");
        fprintf(comms, "%d ", n);
        fflush(comms); // send the number to stdin of the child
        while (access("/tmp/out", F_OK | R_OK))
            ;
        FILE *out = fopen("/tmp/out", "r");
        int result = EOF;
        while (result == EOF)
            result = fgetc(out); // Again, we have to wait until the other process is done
        result = result == 'b';
        fclose(comms);
        fclose(out);
        remove("/tmp/isEven");
        remove("/tmp/out");
        return (char) result;
    }
}
One Redditor went so far as to apply machine learning to the problem and annotate an “isEven” training set.
My point with all this “isEven” nonsense is that code can be fun, interesting, and entertaining – I’m not trying to say it can’t be. I’m doing some mental gymnastics, however, in that I define all these “jokes through code” as normal language. It’s a medium through which humans express themselves creatively to one another, just like film, poetry, novels, or blogging.
None of the examples above are actually intended to run in production. If they were, they would be examples of ugly code indeed.
Have questions or feedback?
Follow and hit me up on Twitter @q_vault if you have any questions or comments. If I’ve made a mistake in the article be sure to let me know so I can get it corrected!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/bootdotdev/beautiful-language-and-beautiful-code-2dhh
Oh, and I'm explicitly forbidden to use concepts that haven't been learned in class yet (a friend suggested using custom functions, for example. We've only covered up to loops, scanf, printf and different data types. What you see is what's been more or less covered).
#include <stdio.h>

void main()
{
    int InputNumber = 0;    // Create a variable to input a number.
    int i, j, k = 0;        // Create variables for looping.
    int PreviousNumber = 0; // Create a variable that will act as numbers adding up to the input number.

    printf("Input number greater than two >> ");
    scanf("%d", InputNumber);

    /** The following 'for' loop is supposed to take every single number between two
        and the given and calculate to see whether it is a prime or not. **/
    for(PreviousNumber = 2; PreviousNumber <= InputNumber; PreviousNumber++)
    {
        for(j = 2; j <= PreviousNumber; j++) // this loop is to bring every number between two and the divisor
        {
            if(PreviousNumber % j == 0 && j != PreviousNumber)
            {
                k++;
            }
        }
        if(k == 0)
        {
            printf("%d ", PreviousNumber); // This prints the number if it's a prime, be it the given or something before it.
        }
    }
}
I hope the comments on the code explain things well enough. Much thanks in advance.
This post has been edited by kylera: 15 October 2008 - 12:18 AM
http://www.dreamincode.net/forums/topic/67644-using-loops-to-display-prime-numbers/
Hello,

This issue is similar to the issue commented on in the GlusterFS list: [Gluster-devel] Timeout settings and self-healing? (WAS: HA failover test unsuccessful (inaccessible mountpoint)).

I have already probed the option transport-timeout 20. The results are the same: the first ls from the other clients takes 30 seconds, and further ls invocations take a variable time. Sometimes the file system is even blocked for a long time, 3 or 4 minutes.

I have made the test with a simple schema with the same results: 2 servers (one exports a brick for storage and a brick for namespace, and the other server exports a brick for storage). At the client side the unify translator is defined (without replication). The test is to upload a large file (3.5 GB) to the GlusterFS and then unplug the network cable of the client who uploads the file.

The client (upload) log says:

C [tcp.c:87:tcp_disconnect] brick1: connection disconnected
E [unify.c:325:unify_lookup] unify: returning ESTALE for /(17314670376) [translator generation (0) inode generation (3)]
E [fuse-bridge.c:459:fuse_entry_cbk] glusterfs-fuse: 111: (34) / => -1 (116)
E [unify.c:325:unify_lookup] unify: returning ESTALE for /(17314670376) [translator generation (0) inode generation (3)]

The client log (other client, ls) says:

E [client-protocol.c:4520:client_getspec_cbk] trans: no proper reply from server, returning ENTCONN
E [client-protocol-c:4809:client_protocol_cleanup] trans: forced unwinding frame type(2) op(4) address@hidden

The server log says:

E [protocol.c:271:gf_block_unserialize_tranposrt] server: EOF from peer (10.1.0.45:1023)
C [tcp.c:87:tcp_disconnect] server: connection]

The version of GlusterFS is 1.3.8pre5. The version of fuse is 2.7.2glfs9. The next test will be with an older version, e.g. glusterfs 1.3.7. Do you think that this test is necessary?
Thanks for the reply.

From: address@hidden [mailto:address@hidden] On behalf of Anand Avati
Sent: Tuesday, 15 April 2008 11:32
To: Antonio González
CC: address@hidden
Subject: Re: [Gluster-devel] problem when client goes down

Antonio, please use 'option transport-timeout 20' in (all of) your protocol/client volumes.

avati

2008/4/14, Antonio González <address@hidden>:

Sorry, I forgot to say that when I say "shut down the network" I mean "unplugged" the cable of client 1; the network is operative for the other clients and servers.

Thanks,

-----Original message-----
From: address@hidden [mailto:gluster-devel-bounces+antonio.gonzalez]

_______________________________________________
Gluster-devel mailing list
address@hidden

--
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.
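Avati's suggestion refers to the client-side volume specification. A minimal sketch of what such a protocol/client volume might look like in a GlusterFS 1.3-era volfile follows; the host address and subvolume name are hypothetical, only the transport-timeout line is the setting discussed above:

```
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.1.0.41           # hypothetical server address
  option remote-subvolume posix-storage  # hypothetical brick name on the server
  option transport-timeout 20            # the setting Avati suggests
end-volume
```

The same option would be added to every protocol/client volume defined in the client spec file.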
http://lists.gnu.org/archive/html/gluster-devel/2008-04/msg00120.html
Mini-Max Sum Hackerrank:- Given five positive integers, find the minimum and maximum values that can be calculated by summing exactly four of the five integers. Each integer is in the inclusive range [1,10^9].
Output Format:- Print two space-separated long integers denoting the respective minimum and maximum values that can be calculated by summing exactly four of the five integers. (The output can be greater than the 32-bit integer.)
Sample Input:- 1 2 3 4 5
Sample Output:- 10 14
Explanation:- Our initial numbers are 1, 2, 3, 4, and 5. We can calculate the following sums using four of the five integers:
If we sum everything except 5, our sum is 1+2+3+4=10.
Same goes for 4, our sum is 1+2+3+5=11.
Same goes for 3, our sum is 1+2+4+5=12.
Same goes for 2, our sum is 1+3+4+5=13.
Same goes for 1, our sum is 2+3+4+5=14.
As you can see, the minimal sum is 10 and the maximal sum is 14. Thus, we print these minimal and maximal sums as two space-separated integers on a new line.
Hints: Beware of integer overflow! Use 64-bit Integer. Visit the problem at
How?
First, let’s make some observations:
We can minimize the sum by excluding the largest element from the sum.
We can maximize the sum by excluding the smallest element from the sum.
There are five integers, and the maximum value of each integer is 10^9. This means that, in the worst case, our final sum could be as large as 4*10^9 (which is outside the bounds of a 32-bit integer).
How to Solve:
To solve this, we read in five elements; for each element:
Add the element to a running sum of all elements, sum of five.
If the element is less than the current minimum, min, set the element as the new current minimum.
If the element is greater than the current maximum, max, set the element as the new current maximum.
After checking all five elements, we have the following information:
The sum of five elements, sum of five.
The value of the smallest element, min.
The value of the largest element, max.
C++ solution:-
#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm> // For using sort
using namespace std;

int main() {
    /* Enter your code here. Read input from STDIN. Print output to STDOUT */
    long long a[5];
    long long sum = 0;
    for (int i = 0; i < 5; i++) {
        cin >> a[i];
        sum += a[i];
    }
    sort(a, a + 5); // Sorting elements
    cout << sum - a[4] << " " << sum - a[0] << endl;
    return 0;
}
A simple solution: sort the array; the maximum sum is the total minus the smallest element a[0], and the minimum sum is the total minus the largest element a[4].
C++ solution II
#include <bits/stdc++.h>
typedef long long LL;
using namespace std;

int main() {
    LL s[5];
    LL d = 0;
    for (int i = 0; i < 5; i++) {
        cin >> s[i];
        d += s[i];
    }
    sort(s, s + 5);
    cout << d - s[4] << " " << d - s[0] << endl;
}
C solution
#include <stdio.h>

int main() {
    unsigned long long int a[5], max, min, sum = 0;
    int i;
    scanf("%llu", &a[0]); /* %llu is the correct specifier for unsigned long long */
    max = a[0]; min = a[0]; sum = a[0];
    for (i = 1; i < 5; i++) {
        scanf("%llu", &a[i]);
        if (max < a[i]) max = a[i];
        if (min > a[i]) min = a[i];
        sum = sum + a[i];
    }
    printf("%llu %llu", sum - max, sum - min);
    return 0;
}
Java solution
import java.util.*;

public class Solution {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        long sum = 0;
        long min = 1000000000;
        long max = 0;
        while (scan.hasNext()) {
            long n = scan.nextLong();
            sum = sum + n;
            if (min > n) {
                min = n;
            }
            if (max < n) {
                max = n;
            }
        }
        scan.close();
        System.out.println((sum - max) + " " + (sum - min));
    }
}
Python solution
#!/bin/python
import sys

lst = map(int, raw_input().strip().split(' '))
x = sum(lst)
print (x - max(lst)), (x - min(lst))
https://coderinme.com/mini-max-sum-hackerrank-problem-solution/
| 65.12
|
From: David Abrahams (dave_at_[hidden])
Date: 2005-08-16 08:19:21
Jody Hagins <jody-boost-011304_at_[hidden]> writes:
> On Mon, 15 Aug 2005 23:21:29 -0600
> "Jonathan Turkanis" <technews_at_[hidden]> wrote:
>
>
>> > I think dropping support for some compilers would constitute a major
>> > upgrade, regardless of any new features, functionality, etc.
>>
>> Removing features should never constitute an upgrade.
>
> Sure it does. We see examples of addition through subtraction in many
> areas of life and engineering.
>
> Removing the complexity that surrounds support for many old compilers is
> an incredible upgrade, IMO.
IMO there is very little likelihood that officially dropping full
support for a compiler is going to happen on a Boost-wide basis, and
there's even less likelihood that it will be accompanied by a great
simplification in source code for any individual library. Most likely
it will be accompanied by the addition of features that couldn't be
made to work with the compiler. The only time I guess a library will
actually rip out code that supports a compiler is during a total or
near-total rewrite.
> Consider some of the major problems with ACE, and you will quickly see
> that many are due simply to the breadth of support for decrepit
> compilers and operating systems.
?? My impression was that the major problems had to do with a lack of
stratification and modularization.
> Dropping compilers that did not support namespaces and other
> rudimentary "features" was a great "upgrade."
>
> While the breadth of support has helped boost gain wide acceptance, it
> is also the single biggest fault of the library as well.
?? Breadth of support has many benefits and only a few costs, and most
of those fall on the library maintainers. Library users (ahem, like
you) might pay for a slight reduction in velocity, but that's all.
-- Dave Abrahams Boost Consulting
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/08/91944.php
| 54.22
|
When I use OpenCV 2.4,the image does not load!
In my prev problem, I found out that the image doesn't load in my project when I use OpenCV 2.4 so I couldn't create CSV file.
when I was using OpenCV 3.0.0 every thing was OK and I could load the image via imread
but now I'm using OpenCV 2.4 but the image doesn't load and always img.empty() command return true
here is my code but the img is null:
// faceRecognitionTest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include <string>
#include "opencv2\contrib\contrib.hpp"
#include "opencv2\core\core.hpp"
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\objdetect\objdetect.hpp"
#include "opencv2\opencv.hpp"
#include "opencv2\imgproc\imgproc.hpp"
#include "FaceRec.h"
// file handling
#include <fstream>
#include <sstream>

using namespace std;
using namespace cv;

Mat img;

int _tmain(int argc, _TCHAR* argv[])
{
    img = imread("f.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty())
    {
        return -1;
    }
    saveMatToCsv(img, "mycs.csv");
    //fisherFaceTrainer();
    return 0;
}
Your code looks fine. So, the thing to check will be if you are linking with the right libraries. Also, are the two versions of OpenCV compiled with the same version of Visual Studio as that makes a difference in some cases (as I discovered painfully).
you fail here
if (b=img.empty())
b will always be assigned with the img.empty() value and this operation (the assign) is ok and you will always enter the if. So forget about b, why do you need it?
The last two comments are completely incorrect. b = img.empty() will assign whatever is returned to b, and then b will be evaluated by the if statement. So if img.empty() is false then b will be set to false and you will not enter the if. The if statement is perfectly fine as is.
oops, I have made a confusion with the numbers... sorry
ok.. a warning here:
if (b=img.empty()) is better ;) anyway it isn't related to the issue. sorry
https://answers.opencv.org/question/73307/when-i-use-opencv-24the-image-does-not-load/
| 71.1
|
PropCheck is a testing library that provides a wrapper around PropEr, an Erlang-based property-testing framework in the spirit of QuickCheck.
To use PropCheck with your project, add it as a dependency in mix.exs:

defp deps do
  [
    {:propcheck, "~> 1.4", only: [:test, :dev]}
  ]
end
From PropCheck 1.3.0 onwards, we require at least Elixir 1.7, since in Elixir 1.11 the function get_stacktrace() is deprecated.
Relevant changes are documented in the Changelog, on GitHub follow this link.
PropCheck allows you to define properties, which are automatically executed via ExUnit when running mix test. You will find the relevant information here:

- the property macro is found in PropCheck.Properties and PropCheck,
- basic type generators in PropCheck.BasicTypes,
- PropCheck.StateM.ModelDSL for the new DSL (since PropCheck 1.1.0-rc1), which is a layer on top of PropCheck.StateM,
- targeted property-based testing in PropCheck.TargetedPBT.
For PropCheck, there is only one configuration option. All counter examples found are stored in a file, the name of which is configurable in mix.exs as part of the project configuration:

def project() do
  [
    # many other options
    propcheck: [counter_examples: "filename"]
  ]
end
Note that the path can also be set as part of the application environment:
config :propcheck, counter_examples: "filename"
If both the project configuration and the application environment are present, the application environment is chosen.
Per default, the counter examples file is stored in the build directory (_build), independent of the build environment, in the file propcheck.ctex. The file can be removed using mix propcheck.clean. Note that this task is only available if PropCheck is also available in :dev. Otherwise, MIX_ENV=test mix propcheck.clean can be used.
Properties in PropCheck can be run in quiet or verbose mode; quiet is the default. To enable verbose mode without touching the source code, the environment variable PROPCHECK_VERBOSE can be used. If it is set to 1, the forall macro prints verbose output.

PropCheck can attempt to detect and output exceptions in non-verbose mode. This can be enabled using the detect_exceptions: true option for property or forall, or via the environment variable PROPCHECK_DETECT_EXCEPTIONS. If this environment variable is set to 1, exception detection is enabled globally.
The guides for PropEr are an essential set of documents to make full use of PropCheck.
Elixir versions of most of PropEr's tutorial material can be found in the test folder on GitHub.
Jesper Andersen and Robert Aloi blog about their thoughts and experience on using QuickCheck which are (mostly) directly transferable to PropCheck (with the notable exception of concurrency and the new state machine DSL from QuickCheck with the possibility to add requirement tags):
Rather new introductory resources are
In contrast to the book, the free website is concerned with Erlang only. The Erlang examples translate easily into Elixir (beside that at least a reading knowledge of Erlang is extremely helpful to survive in the BEAM ecosystem ...). Eventually I will port some of the examples to Elixir and PropCheck and certainly like to accept PRs.
PropCheck does not support PropEr's capability to automatically derive type generators from type specifications. This is due to some shortcomings in PropEr, where the specification analysis in certain situations attempts to parse the Erlang source code of the file containing the type specification. Apparently, this fails if the types are specified in an Elixir source file.
Effectively this means that, to support this feature from Elixir, the type management system in PropEr would need to be rewritten completely. Jesper Andersen argues eloquently in his aforementioned blog post that automatically derived type generators are not needed; even more, that carefully crafting type generators for specific testing purposes is an essential part of the QuickCheck philosophy. Therefore, this missing feature is not that bad. For the same reason, automatic @spec-checking is of limited value in PropCheck, since type generators for function specifications are also generated automatically.
PropCheck has only very limited support for parallel testing since it introduces no new features for concurrency compared to PropEr.
Please use the GitHub issue tracker for
Before submitting a pull request, please use Credo to ensure code consistency and run mix tests to check PropCheck itself. mix tests is a Mix alias that runs regular tests (via mix test) and some external tests (via another Mix alias, mix test_ext) which test PropCheck's side effects, like storing counterexamples or proper output format, that can't be easily tested using regular tests.
Ensure that your PR is based on the latest master version by rebasing. If your feature branch is my_feature, then do the following before publishing a pull request:

git checkout master
git pull --rebase
git checkout my_feature
git rebase master
If your PR implementation takes longer, such that another PR is merged before your own, then you have to repeat the above sequence. You may have to fix some conflicts. But now you cannot push your branch again, because you changed the history of your local branch compared to the one published on GitHub. Therefore, you have to force-push your branch. Do this with
git push --force-with-lease
A simple git push --force is not allowed; --force-with-lease is more friendly and thus allowed. See the details in the Git documentation.
The rationale behind this policy is that we want a simple, almost linear history, where each merged PR creates a sequence of merges with no parallel work. This history will not show how many active branches were available during development, but rather the sequence of incorporating changes into master. That is the important part, and we want to see which commit sequence led to a specific feature. Merges destroy this linearity. But beware: you can do nasty things with git rebase, so stick to the simple procedure explained above to avoid corrupting your repository.
PropCheck is provided under the GPL 3 License due to its intimate use of PropEr (which is licensed under GPL 3). In particular, PropEr's exclusion rule of open source software from copyleft applies here as well as described in this discussion on GitHub.
Personally, I would prefer to use the LPGL to differentiate between extending PropEr and PropCheck as GPL-licensed software and the use of PropEr/PropCheck, which would not be infected by GPL's copyleft. But as long as PropEr does not change its licensing, we have no options. PropCheck is clearly an extension of PropEr, so it is derived work falling under GPL's copyleft. Using LGPL or any other license for PropCheck will not help, since GPL's copyleft overrules other licenses or result in an invalid or at least questionable licensing which does not help anybody.
From my understanding of open source licenses as a legal amateur, the situation is currently as follows: since PropCheck is a testing framework, the system under test is not infected by the copyleft of GPL, because PropCheck is only a tool used temporarily during development of the system under test. At least, this holds if you don't distribute your system together with the test code and the test libs as a binary. Another friendly approach is to keep the tests in a separate project, so that the tests are a real client of the system under test. But again, this is my personal take; when in doubt, please consult a professional legal advisor.
https://awesomeopensource.com/project/alfert/propcheck
| 53.21
|
I was expecting:
#include <iostream>

int main(int argc, char *argv[])
{
    unsigned char m = 32;
    register unsigned mask = (1 << m);
    std::cout << std::hex << mask << '\n';
    return 0;
}
to print 0, but instead this program, compiled with g++ (and VC++ .NET 2003 too), prints 1!

If I replace (1<<m) with (1<<32), or if I change the program to:
#include <iostream>

int main(int argc, char *argv[])
{
    unsigned char m = 31;
    register unsigned mask = (1 << m) << 1;
    std::cout << std::hex << mask << '\n';
    return 0;
}
it gives me the expected 0.
In the C++ standard, section 5.8, it is written:
"The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand."
This behavior probably has its roots in C, and the safe way to perform a bit shift when the number of bits to shift may equal or exceed the width of the left operand is to implement the shift operation in a function:
Example, almost but not universally portable:
#include <climits>

unsigned int safe_uint_shift(unsigned int value, unsigned int bits)
{
    if (bits >= CHAR_BIT * sizeof(unsigned int)) {
        return 0;
    } else {
        return value << bits;
    }
}
Put it in a header and make it inline if you like.
This solution has been proposed by Jack Klein.
The other way that I have used is to promote the left operand to an integer type having enough bits:
register unsigned mask = (static_cast<unsigned long long>(1)<<m)<<1;
In this blog, I want you to find information about C++ programming that I had a hard time finding on the web in the first place.
http://blog.olivierlanglois.net/index.php/2008/04/12/shift_operator_undefined_behavior
| 56.42
|
Datatable similar to .NET datagridview
Hi all,
Unusual question. I hope someone can assist.
I am trying to decide how best to use DataTables (the Editor specifically) with inline editing. In VB.NET I can create a DataGridView. I can add, remove and update all within that DataGridView. Then, when I want to send the data to SQL, I can do it in bulk (using BulkCopy) or by iterating through the records to inject the data row by row into a SQL table.
I want to do something similar with Datatables and/or the Editor.
What I want to do but don't know if it is possible:
1) Start with empty Editor
2) Using an "Add" button, add a blank row (or have the editor come up with a new row (row 1) with empty cells
3) Using inline editing, the user modifies all the columns they need
4) When the user types in any of the initial fields on row 1, a new second row, is created, empty and waiting on input
5) Repeat until all data is entered for 1, 2, 10 or 20 rows
6) Upload all data from table to SQL via AJAX (this is the only time AJAX is called/used)
Is this something that can be done with the Editor?
Kind Regards,
Jason
This question has an accepted answers - jump to answer
Answers
Yes - that can be done. It will need some use of the API since it doesn't work that way out of the box, but what I would suggest is that you need a local editing table and call the create() method to create a new row (submit immediately by calling submit() so there is an empty row in the table). Then use inline editing as normal.

You can add another row to the table at any time by calling create() and submit() again.

Finally, use rows().data() to get the data from the table, and you can then Ajax-submit that to the server.
Allan
Hi Allan!
Thank you for the quick response. I will work with your suggestion and everyone know how it goes!
Regards,
Jason
Hi Allan,
Well I was able to get it working (more or less!). Currently, when I move from the dropdown column to another column, the value reverts back to default. All other columns work properly that do not have the dropdown.
Additionally, when I select "New" and fill out the form, the dropdown populates in the form, but I get the error :
Datatables warning: table id=example - Requested unknown parameter 'material.name' for row 3, column 3. For more information about this error, please see
Here is my JSON:
Here is my Editor javascript code:
I have been working on this since yesterday and cannot seem to get past this issue. I know I am just missing something simple, but my eyes are glazing over.
I have more! I was able to cut my JSON down to just the basics. I am now populating the dropdown and it is keeping the data when I move to another column; however, it is choosing the value instead of the label when it does.
I'll continue working on this tonight.
There is no mats data point in your JSON data, so no value can be populated into it. It looks like you actually want to use material.id, but you'd need to add the id (or whatever that column is called) into your material object. It's missing that bit of information at the moment.
Allan
If you don't mind, could you please clarify your previous message? I trimmed down the code a bit below and everything is working except it is displaying the value instead of the label when calling the submit on cell transition:
Code:
JSON
Hi again Allan,
I am not sure if this is the best way to do this. I created a dataSrc object (optlist) from the options list. In the render function for the column, I parsed that object and found a match for the column data, then returned the .label from the object.
The object creation for "whatever"
The column render
Don't you need to return something in the event that optlist.materials[i].value != data ?
Ah good point tangerine, I would just return (data) at that point.
The looks like a good workaround - nice one.
Allan
Good afternoon. Can someone please pass me an example of a table with VB.NET? Thank you very much. Greetings.
https://www.datatables.net/forums/discussion/55694/datatable-similar-to-net-datagridview
| 71.44
|
5.0.0 [unreleased]
- #3391 Switch to selenium with chrome headless #3394 (bricesanchez)
- Fix page tree cache refresh on sub page changes #3390 (aiomaster)
- Fix image_title assignment with an auto_title different for each image #3388 (bricesanchez)
- Interference marketable pages active record #3387 (bricesanchez)
- flash.error does not exist, use flash.now[:error] instead #3386 (bricesanchez)
- Partial revert ae30517 : Remove layout def, we use render_with_templates? method #3384 (bricesanchez)
- Closes GH-3381 #3382 (mightymatth)
- Feature image crop #3380 (bricesanchez)
- Closes GH-3376 #3377 (kluucreations)
- Fix generation of second/subsequent resources in an extension #3372 (anitagraham)
- Update refinerycms-dragonfly gemspec for release. #3371 (parndt)
- Allow the home page to have a path other than '/' #3368 (anitagraham)
- Switch from Globalize to Mobility #3360 (bricesanchez)
- Feature/rails 5.2 #3352 (bricesanchez)
4.0.2 [21 May 2018]
- Remove financials - pledgie has closed down #3365 (parndt)
- Bugfix #3351 Use namespacing in cancel_url #3362 (bricesanchez)
- QA guides Multiple and Relating Resources in an Extension #3361 (bricesanchez)
- Feature/admin page index cache #3359 (bricesanchez)
- Fix Reserved Word Documentation Typo #3358 (jcbantuelle)
- Feature/tests/locale #3356 (bricesanchez)
- Add Ruby 2.5.0 to CI and update others. #3354 (parndt)
- Generator specs clean up after themselves. #3353 (parndt)
- Bugfix #2915: assign cancel_url path in engine generator _form partial #3351 (bricesanchez)
- Update Refinery::Pages::Type.parts to match the format introduced in 3535b906fa #3350 (joshmcarthur)
- Remove rails_12factor from Heroku gems #3349 (bricesanchez)
- Bugfix/3340/heroku deploy #3347 (bricesanchez)
- Fix checking :custom_slug existence to use column_exists? #3343 (yadex205)
- Only add puma when it's missing #3341 (parndt)
- no need call valid? #3339 (ShallmentMo)
- Bugfix factory page_with_page_part since Rails 5.1 #3338 (bricesanchez)
- Update readme.md #3337 (bricesanchez)
- Extract permitted params lists #3336 (matiasgarciaisaia)
- Fix CI error due to pg 1.0.0 #3335 (bricesanchez)
- Fix typo in readme #3333 (guich-wo)
- rails asset:precompile attempts to connect to DB because #3332 (emaraschio)
- Bugfix/3328 3329/preview button #3330 (bricesanchez)
- Rename FactoryGirl to FactoryBot. #3324 (parndt)
- Now use https protocol for links to refinerycms.com #3323 (bricesanchez)
- Updates URL's used in the docs for installation. #3322 (Designaroni)
- Use selected_image parameter to select an image upon opening the dialog #3284 (veger)
- Fix permitted params allowing new images to be uploaded #3278 (veger)
4.0.0 [29 September 2017]
- Add Rails version to generated migration. #3313. Brice Sanchez
- Remove deprecated methods for versions < 4.0.0. 124e560. Brice Sanchez
- Now supporting Rails 5.1. #3122. Brice Sanchez & Philip Arndt & Remco Meinen & Don Pinkster & Johan Bruning & Michael de Silva & Kiril Mitov and everyone else who helped out!
3.0.6 [2 October 2017]
- Fix search placeholder. #3291. Roman Krylov
- Upgrade dragonfly to version 1.1. #3303. Anita Graham
- Bugfix: canonical url now use current_frontend_locale instead of default_frontend_locale. #3299. Brice Sanchez
- Now we use redirect 301 to show pages. #3300. Brice Sanchez
- jQuery .load() deprecated. #3288. Roman Krylov
- Fixed variable name for editor options. #3287. Bo Frederiksen
- Fixed PhantomJS timeout when appending something to the body. #3297. Remco Meinen
- Change warn to use rails logger. #3272. Paul & Philip Arndt
- Update the updated_at field of the page itself when a page_part got updated. #3275. Maarten Bezemer
- Remove obsolete sitemap builder from generator #3270. Brice Sanchez
- Fix #3218 regression: don't duplicate locale in url. #3271. Brice Sanchez
- Change data attribute to match Turbolinks 5 syntax. #3269. Maarten Bezemer
3.0.5 [23 November 2016]
- Fix HTML format for not found page in page seeds. #3263. Brice Sanchez
- Bugfix/draft page view not hidden for visitors. #3264. Brice Sanchez
- Remove deprecated heroku gem. Fix start generator for Authentication. #3258. Brice Sanchez
- Add ability to display current used template in layout. #3259. Brice Sanchez
- Add SVG logo. Improve site_bar HTML/CSS. #3262. Brice Sanchez
- Fix preview button with WYMeditor. #3020. Christoph Wagner
- Add "required" html attribute in add_whitelist_attributes config. #3236. Brice Sanchez
- Fixed admin menu items duplicating after extensions code changing. #3234. Dmitriy Molodtsov
- Enable reselection of resource after removal. #3216. NicholasKirchner
3.0.4 [17 July 2016]
- Fixed sitemap generation for multiple frontend locales. #3218. Dmitriy Molodtsov
- Allow finders to be defined for each action. #3146. Philip Arndt
- Fix Gemfile additions when using --heroku. #3219. Philip Arndt
- Allow all data attributes through the HTML sanitizer. #3187 & #3217. nzgrover & Brice Sanchez
- Use admin_update_page_path to avoid encoded URLs. #3182. Philip Arndt
- Improve flash messages close action. #3168. Brice Sanchez
- Add skip_to_first_child_page_message. #3168. Brice Sanchez
- Add skip_to_first_child label in page admin index. #3168. Brice Sanchez
- Add redirected label in page admin index. #3168. Brice Sanchez
- Add warning when content has been sanitized. #3170. Brice Sanchez
- Remove usage of deprecated page_title_with_translations. #3176. Brice Sanchez
3.0.3 [27 April 2016]
- Split and tidy up stylesheets. #3165. Marie-Helene Tremblay
- Add config for adding to the elements and attributes whitelisted for HTML sanitization. #3164. Anita Graham
- Move javascripts partial in head tag. #3153. Brice Sanchez
3.0.2 [16 March 2016]
- Update respond_to? and method_missing API. 21e77b08563905b1a79d335cfa3f32278961f24b. Philip Arndt
- Add TranslatedFieldPresenter. #3125 & #3129. Brice Sanchez & Glenn Hoppe
- Specify the correct new_page_part_params. #3124. Philip Arndt
- Properly specify image for strong parameters. #3123. Philip Arndt
- Fixed multiple XSS vulnerabilities found by Shravan Kumar - Sanitize markup. #3117. Brice Sanchez
- Generate authors in order to create a valid gemspec. #3121. Brice Sanchez
- Remove deprecated rspec config treat_symbols_as_metadata_keys_with_true_values. #3118. Brice Sanchez
- Update factory_girl_rails from '~> 4.4.1' to '~> 4.6.0'. #3114. Eric Guo
- Now welcome to Ruby 2.3.0. 94657b092cec2dde10f77c68205531defaf54476. Philip Arndt
- Fixed engine template seeds: Add page slug. #3106. Brice Sanchez
- Fixed a Dragonfly deprecation warning. ed7bbea503fc95e5aac5dcc94ce444a3d7c9718d. Philip Arndt
- Now use font-awesome from the font-awesome-sass gem. #3108. Oleg German
- Fixed CSRF vulnerability found by Shravan Kumar - Add protect_from_forgery with: :exception. #3101. Brice Sanchez
- Add line numbers to stack trace. #3093. Jared Beck
- Add icon to image picker warning. #3075. Anita Graham
- Specify that we are expecting action_name to be insert. #3092. Philip Arndt
3.0.1 [26 January 2016]
- Set speakingurl-rails to 8.0.0, switch to poltergeist gem. #3084. Philip Arndt & Brice Sanchez
- Install a compatible version of refinerycms-acts-as-indexed #3079. Philip Arndt
- Updated Refinery::Pages.default_parts= config to require both a title and a slug attribute for each part. #3069. Brice Sanchez
3.0.0 [19 September 2015]
- Deprecated ':format' syntax in Resources dragonfly_url_format. #2500. Josef Šimánek
- Removed Images trust_file_extensions config. #2500. Josef Šimánek
- Migrated to Dragonfly ~> 1.0.0. #2500. Josef Šimánek
- Removed Pages#cache_pages_backend. #2375. Philip Arndt
- Updated how _make_sortable works to take an options hash, requiring manual file changes. Philip Arndt
- Removed attr_accessible in favour of strong parameters. #2518. Philip Arndt
- Removed the Dashboard extension, now the left most tab is selected when logging in. #2727. Philip Arndt
- Removed special 'Refinery Translator' role, introduced in 0.9.22, from the core. #2730. Philip Arndt
- Enabled XHR paging by default, no configuration is needed. #2733. Philip Arndt
- Limited the jquery-ui assets loaded in the backend to the ones we use in the core. #2735. Philip Arndt
- Moved the tabs to the left hand side of the screen. #2734. Isaac Freeman & Philip Arndt & Brice Sanchez
- Add extra fields partial in Admin Pages form advanced options #2943. Brice Sanchez
- Remove required_label formhelper. #2954. Johan Bruning
- Added ability to create page part title different form slug. #2875. Brice Sanchez & Philip Arndt & Josef Šimánek
- Deprecated the part_with_title method in Refinery::Page and the title_matches? method in Refinery::PagePart. #2875. Brice Sanchez & Philip Arndt & Josef Šimánek
- Added ability to customize and translate filename. #2966. Brice Sanchez
- Added ability to translate images title and alt attributes. #2965. Brice Sanchez
- Decouple Refinery CMS from Devise. #2940. Philip Arndt
- Refinery CMS Core now requires Rails >= 4.2.3. #3034. Brice Sanchez
- Deprecated the selected_item_or_descendant_item_selected? method in Refinery::Pages::MenuPresenter. #3038. Brice Sanchez
- Added plugin ordering set by config option. #3053. Graham Wagener & Philip Arndt
- Update and Decouple Refinery CMS from forms generator engine. 42e253d05dc8b8a33beb1c6b25892d7646583573. CJBrew & Philip Arndt
2.1.4 [28 October 2014]
- Changed Dragonfly to load before :build_middleware_stack instead of after :load_config_initializers. #2721. Thilo-Alexander Ginkel
2.1.3 [5 September 2014]
- Fixed an issue where the seeds in a generated form extension weren't idempotent. #2674. Philip Arndt
2.1.2 [14 March 2014]
- Fixed bug where load_valid_templates wasn't called for the create action and caused an exception when creating a new page and saving it with a blank title. #2517. Uģis Ozols
- Fixed bug where adding a link for a translated page via WYMeditor wasn't localised. Also updated the regex for switch_locale to match hyphenated language codes, e.g. zh-CN or pt-BR. #2533. Daniel Brooker
- See full list
2.1.1 [26 November 2013]
- Fixed menu reordering bug when Refinery::Core.backend_route was set to something different than refinery. #2368. xyz
- Fixed bug in serializelist.js where we were iterating through object fields instead of the array elements. #2360. Uģis Ozols
- Bumped selenium-webdriver gem dependency version to ~> 2.34.0.
- Fixed bug which occurred when trying to save a child page with no default translation. #2379. Jess Brown & Uģis Ozols
- Upgraded Globalize dependency to ~> 3.0.1. #2462. Chris Salzberg
- See full list
2.1.0 [5 August 2013]
- Require at least Ruby 1.9.3 and thus drop Ruby 1.8.x support. #2277 Uģis Ozols & Philip Arndt
- Removed :before_javascript_libraries, :after_javascript_libraries, and :javascript_libraries content blocks. #1842. Rob Yurkowski
- Refactored WYSIWYG fields into a partial. #1796. Rob Yurkowski
- Shortened all authentication helpers. #1719. Ryan Bigg
- Added canonical page id to body to allow CSS selectors to target specific pages instead of including special CSS files. #1700 & #1828. Philip Arndt & Graham Wagener
- Improved search functionality by adding cancel search link next to search input, made results_for entry nicer by adding some html to it. #1922. Marek
- Added search functionality to users extension. #1922. Marek
- Extracted locale picker code into separate partial. #1936. Marek
- Removed upgrade messages for IE. #1940. Philip Arndt
- Added template whitelist for page tabs. #1943. Johan
- Removed DD_belatedPNG since we dropped IE6 support a while ago. ()
- Dropped coffee-rails dependency. #1975. Uģis Ozols
- Added Portuguese translations. #2007. David Silva
- Added Hungarian translations. #2010. minktom
- Extracted search header logic into partial. #1974. Uģis Ozols
- Images can only be updated when the image being uploaded has the same filename as the original image. #1866. Philip Arndt & Uģis Ozols
- Rack::Cache should be a soft dependency per rails/rails#7838. Fixes Dragonfly caching when Rack::Cache is present. #1736. Alexander Wenzowski
- Made `refinerycms-i18n` a hard dependency for `refinerycms-core`. This allowed removing all `Refinery.i18n_enabled?` checks. #2025. Uģis Ozols
- Fixed issue with `view_template_whitelist` config option when an array of symbols was used. #2030. A.S. Lomoff
- Removed `source` from block_tags and made it so `wymeditor_whiltelist_tags` don't get added to block_tags. #2029. Sergio Cambra
- Removed Array inheritance from `Refinery::Plugins` and included Enumerable module instead. #2035. Uģis Ozols
- Refactored `Refinery::Page#url` and friends. #2031. Uģis Ozols
- Removed `store_current_location!` because it was polluting all controllers with the Refinery-specific instance variable `@page`. #2032. Philip Arndt & Amrit Ayalur
- Removed `meta_keywords` since seo_meta removed keyword support in version 1.4.0. #2052, #2053. Jean-Philippe Doyle & Uģis Ozols
- Changed WYMeditor.REL from `rel` to `data-rel`. #2019. Amrit Ayalur
- Added config option to hide page title in page body. #2067. Andrew Hooker
- Added `Refinery::Core.backend_route` config which allows setting the backend route to something different than `/refinery`. #2050. Josef Šimánek
- Fixed issue with page part reordering for new pages. #2063. Uģis Ozols
- Fixed bug in regex which was wrapping `config.action_mailer` settings in an if clause. #2055. Uģis Ozols & Philip Arndt
- Renamed `force_ssl?` to `force_ssl!` and `refinery_user_required?` to `require_refinery_users!` and moved these methods to `Admin::BaseController`. #2076. Philip Arndt
- Fixed issue with page tree not updating after page position update. #1985. Philip Arndt
- Replaced menu partials with `MenuPresenter`. #2068, #2069. Philip Arndt
- Set `Refinery::Core.authenticity_token_on_frontend` to `false` by default. Philip Arndt
- Refactored many internals of pages to centralize page cache expiration. #2083. Philip Arndt
- Fixed page saving bug when the default locale was set to something different than `en` or when it was changed after creating some pages. #2088. Philip Arndt
- Moved page preview functionality to its own controller and made it so that you need to be logged in to use it. #2089. Philip Arndt
- Fixed issue which allowed identical slugs to exist after page reordering. #2092. Philip Arndt
- Gave crudify's actions the ability to redirect to a particular page of results when `params[:page]` is supplied to the action. #1861. Philip Arndt
- ActsAsIndexed is no longer a required dependency. Integration is achieved by refinerycms-acts-as-indexed instead. #2162. Philip Arndt
- Added Turkish translation 88f37f2a70c and c42a909eafa. Aslan Gultekin
- Allow user-defined geometries in `image#thumbnail_dimensions`. #2214. Javier Saldana
- Added Ukrainian translation. #2259. Tima Maslyuchenko
- Fixed custom page view template preview. #2219. Jean-Philippe Doyle
- Fixed duplicate page part title validation. #2282. David Jones
- Fixed nil page bug when `marketable_urls` were set to false and only `path` was passed to `find_by_path_or_id`. #2278. René Cienfuegos & Uģis Ozols
- Fixed bug where user plugin order was reset each time user was updated. #2281. Uģis Ozols
- Replaced Image#thumbnail geometry parameter with an options hash to support a strip option for reducing thumbnail file size. #2261. Graham Wagener
- Added ability to turn off page slug scoping. #2286. Matt Olson
- Made Crudify's `xhr_paging` option work again. #2296. Chris Irish
- Added draft page support when displaying the home page. #2298. Philip Arndt
- Removed `Refinery::WINDOWS` constant. Philip Arndt
- Removed `jquery.corner` library and invocations. #2328. Philip Arndt
- Removed `Refinery::Pages.view_template_whitelist` and `Refinery::Pages.use_view_templates` configuration options and enabled setting per page view template to be active by default. #2331. Uģis Ozols
- Fixed markup corruption in WYMeditor when using `span` with `style` attribute. #2350. wuboy
- Require jquery-rails ~> 2.3.0. Francois Harbec and Sergio Cambra
- Unlocked `truncate_html` from 0.5.x as we no longer support Ruby 1.8.x. Uģis Ozols
- See full list
2.0.11 [unreleased]
- Fixed issue where a superfluous `</div>` would be inserted when using `rails g refinery:engine` for WYSIWYG fields. #2236 Philip Arndt and Rob Yurkowski
- See full list
2.0.10 [15 March 2013]
- Blocked past insecure Rails versions. Philip Arndt
- Fixed problems with editing pages in different locales. Philip Arndt
- Locked `truncate_html` to 0.5.x to ensure Ruby 1.8.x compatibility. Uģis Ozols
- See full list
2.0.9 [21 November 2012]
- Allowed extra parameters to be passed when creating image. #1914. tbuyle
- Added usage instructions to refinerycms executable. #1931. Uģis Ozols & Philip Arndt.
- Disabled page caching when logged in to prevent caching the sitebar. #1609. Johan Frolich
- Fixed problems with `refinery:engine` generator and namespacing. #1888. David J. Brenes
- Fixed extension/form generator issue when using --pretend option. #1916. Uģis Ozols
- Fixed new resource insertion in custom extensions which use resource picker. #1948. Marek
- Fixed save and continue and preview functionality in pages extension when title was changed. #1944. tsemana
- Fixed html stripping bug when editing pages. #1891. Uģis Ozols
- Fixed pagination in existing image/resource partial after uploading new image/resource. #1970. Uģis Ozols
- Added check to extension generator which checks if the extension specified by the --extension option actually exists. #1967. Uģis Ozols
- Removed everything that was related to `Refinery::Page#invalidate_cached_urls` because it was redundant and there is already code that takes care of deleting the cache. #1998. Uģis Ozols
- See full list
2.0.8 [17 August 2012]
- Fixes installs broken by the release of jquery-rails 2.1.0 by requiring ~> 2.0.0. Rob Yurkowski
- See full list
2.0.7 [16 August 2012]
- Fixed a bug with nested reordering that would shuffle any set with 11 or more entities. #1882. Rob Yurkowski
2.0.6 [6 August 2012]
- Added Refinery::Page#canonical_slug to allow us to retrieve a consistent slug across multiple translations of a page. Useful for CSS selectors. #1457. Philip Arndt
- Fixed bug with 404 page not honoring custom view/layout template. #1746. Uģis Ozols
- Renamed all templates in generators which contained erb to *.rb.erb. #1750. Uģis Ozols
- Fixed page reorder issue on Ruby 1.8.x. #1585. Uģis Ozols & Philip Arndt.
- Allowed overriding presenters using `rake refinery:override`. #1790. Kevin Bullock.
- Fixed issue with saving settings in generated form extension by completely rewriting settings controller. #1817. Uģis Ozols
- Removed Refinery::Page#title_with_meta in favour of view helpers. #1847. Philip Arndt
2.0.5 [11 June 2012]
- Now extension/form generators will add all attributes to attr_accessible. #1613. Uģis Ozols
- Fixed a bug where `refinerycms-images` was trying to load `refinerycms-resources`. #1651. Philip Arndt
- Use new page part names (:body, :side_body) when generating extensions. Uģis Ozols
- Now extension generator will merge two seeds file in case user generates multiple resources for one extension. #1532. Uģis Ozols
- Fix refinery:override bug where it wouldn't match js files with more than one extension. #1685. Uģis Ozols and Philip Arndt
- Now `refinerycms-images` and `refinerycms-resources` will inherit the s3_region configuration from `refinerycms-core`. #1687. Julien Palmas
- Fixed dashboard bug where it wasn't producing proper links for nested pages. #1696. Philip Arndt
- Match only &dialog, ?dialog, &width, ?width, &height and ?height in dialog querystrings. #1397. Philip Arndt
- Added multiple language support (specified by `Refinery::I18n.frontend_locales`) in `Refinery::Page` seeds file. #1694. Ole Reifschneider
- Added `Refinery::Page#canonical` support which allows multiple translations to have one canonical version. Philip Arndt
- Usernames are validated case insensitively to ensure true uniqueness. #1703. Philip Arndt
- Fixed bug with template selector for page where it would always default to the parent's template. #1710. Glenn Hoppe
- Fixed and added tests for post-authentication redirect bug where a user would always be redirected to the admin root after successful auth. #1561. Alexander Wenzowski
- Added session key check for unscoped `return_to` variable so that the key set by `Refinery::Admin::BaseController#store_location?` is respected after successful auth. #1728. Alexander Wenzowski
- Fixed bug where flag icons in page listing couldn't be clicked due to expand/collapse event preventing it. #1741. Uģis Ozols
2.0.4 [14 May 2012]
- IMPORTANT: Fixed a security issue whereby the user could bypass some access restrictions in the backend. #1636. Rob Yurkowski and Uģis Ozols
- Fixed stack level too deep error in Refinery::Menu#inspect. #1551. Uģis Ozols
- Fixed spec fails for newly generated engines and bumped gem versions in generated Gemfile. #1553. Uģis Ozols
- Fixed dialog opening issue when Refinery was mounted at different path than /. #1555. Uģis Ozols
- Added ability to specify site name in I18n locales too. #1576. Philip Arndt
- If the parent page has a custom view/layout template specified, set this template as selected when editing a sub page. #1581. xyz
- Fixed page ambiguity for different pages with the same slug in find_by_path. #1586. Nicholas Schultz-Møller
- Added Refinery::Core.force_ssl config option. #1540. Philip Arndt
- Fixed bugs with page sweeper. #1615. f3ng3r
- Fixed image toggler show/hide bug. #1587. Gabriel Paladino & Uģis Ozols
- Fixed site bar caching bug when `cache_pages_full` is enabled and the user is logged in. #1609. TheMaster
- Made sure plugin params are set before checking exclusion, and removed unused variable. #1602. Rob Yurkowski
- Fixed link addition bug in the backend when switching locale. #1583. Vít Krchov
- Fixed bug with invalidating cached urls for all frontend locales. #1479, #1534. Vít Krchov, Rob Yurkowski & Uģis Ozols
- Fixed image picker bug in Firefox 11 where content of the page was blank until you move the popup. #1637. Nelly Natalí
- Modified `Refinery.route_for_model` to fix a bug with the refinerycms-search plugin. #1661. Philip Arndt
- Fixed engine generator for when you don't have a title field. #1619. Jean-Philippe Boily
- Fixed `content_fu`. #1628. Philip Arndt
- Added Russian translations for the preview button. Vasiliy Ermolovich
- Manually loaded translations associations to avoid N+1 queries in the pages backend. #1633. thedarkone
2.0.3 [2 April 2012]
- Fixed missing authentication initializer. Uģis Ozols
- Fixed Heroku and sqlite3 related errors. Philip Arndt
- Replaced label_tag with label in pages advanced options form. Johannes Edelstam
- Added missing `refinerycms-settings` require in generated refinery form extension. Philip Arndt
- Added JS runtime check in templates. Philip Arndt & Josef Šimánek
- Fixed user role assignment issue. Uģis Ozols
- Added image type whitelisting configuration option. Rob Yurkowski
- Removed global hash of menu instances. Pete Higgins
- Fixed save & continue issue. Philip Arndt
- Fixed issue with Heroku option for CMS generator. Philip Arndt
- Fixed asset compilation in production mode. Philip Arndt
- Added label style for admin Page tree. Nic Aitch
- Fixed page part hiding. Rob Yurkowski
- Fixed missing page part CSS classes (i.e. `no_side_body`). Rob Yurkowski
- Deprecated `body_content_left` and `body_content_right`. Rob Yurkowski
- Reorganized documentation. Rob Yurkowski
2.0.2 [15 March 2012]
- Removed dependencies from refinerycms-testing that were just opinions and not necessary. Pete Higgins
- Fixed missing `Refinery::PagePart` positions in seeds. Mark Stuart
- Fixed issue with Rakefile template that gets generated into extensions. Uģis Ozols
- Fixed issue where new page parts could not be added to a page. Uģis Ozols
- Added missing initializer for the Authentication extension. Uģis Ozols
- See full list
2.0.1 [6 March 2012]
- Updated `plugin.url` code to support Rails 3.2.2. Philip Arndt
- Added guard-spork '0.5.2' dependency to refinerycms-testing. Joe Sak
- Added support for '.' in usernames. Philip Arndt
- Now includes application.js by default. Nick Romanowski
- See full list
2.0.0 [29 February 2012]
- Remove jquery_include_tags helper in favor of using jquery from jquery-rails gem. Uģis Ozols
- Finally removed `Page#[]` in favour of `Page#content_for`, so instead of `@page[:body]` it's `@page.content_for(:body)`. Philip Arndt
- Moved everything under Refinery namespace. wakeless
- Renamed `RefinerySetting` to `Refinery::Setting`. Philip Arndt
- Added `rails g refinery:form` generator for form extensions. Philip Arndt
- Moved `/shared/*` to `/refinery/*` instead, including `/shared/admin/*` to `/refinery/admin/*`, as it makes more sense. Philip Arndt
- `vendor/engines` is now `vendor/extensions`. Philip Arndt
- Extensions are now generated with testing support built in via a dummy refinery installation. Jamie Winsor
- Refinery is now mountable at a custom path. Uģis Ozols
- See full list if you dare.
- See explanation of changes.
1.0.9 [5 November 2011]
- `guard` testing strategy ported from edge for testing refinery from its own directory without a dummy app. Jamie Winsor & Joe Sak
- WYMEditor bug fixes Nic Haynes
- Bulgarian translations added. Miroslav Rachev
- Fixed --heroku command. Garrett Heinlen
- Refactored plugins code to add Ruby 1.9.3 support. Amanda Wagener
- See full list
1.0.8 [1 September 2011]
- `refinerycms-core` now depends on rails so that users of 1.0.x can be confident of the entire stack being present as before. Philip Arndt
- No longer requiring autotest as a dependency of `refinerycms-testing`. Philip Arndt
- Improved 'wrong rails version' error message on install with a more helpful guide on how to specify a rails version. Philip Arndt
- See full list
1.0.7 [31 August 2011]
- No change, just fixing corruption in the 1.0.6 gem caused by Syck. Philip Arndt
- See full list
1.0.6 [31 August 2011]
- Added support for Devise `~> 1.4.3`. Philip Arndt
- Removed dependency on Rails but added dependencies to its components, like activerecord, where they are used. Philip Arndt
- See full list
1.0.5 [31 August 2011]
- jQuery UI updated to `1.8.15` from `1.8.9`. Uģis Ozols
- Removed Duostack hosting option from the installer because the platform isn't online anymore. Philip Arndt
- Fixed non raw output into noscript section of the backend. Philip Arndt
- `will_paginate` updated to `~> 3.0.0` now that it has gone final. Uģis Ozols
- See full list
1.0.4 [11 August 2011]
- Added support for figuring out dimensions in resized images to `image_fu`. Philip Arndt and Joe Sak
- Fixed issues installing Refinery due to lack of permissions to the gem directories. Philip Arndt
- Added ability to specify a different database host in the `bin/refinerycms` installer. Philip Arndt
- Lock `will_paginate` to `3.0.pre2` in core gemspec. Kris Forbes and Uģis Ozols
- Patch required_label helper so it would pick up I18n model attribute translations. Uģis Ozols
- See full list
1.0.3 [23 June 2011]
- Fixes corruption in the 1.0.2 gem. Philip Arndt
- See full list
1.0.2 [23 June 2011]
- Ensure that `refinerycms-testing` is not enabled by default when installing an application. Philip Arndt
- See full list
1.0.1 [21 June 2011]
- Added `-t`/`--testing` option to `bin/refinerycms` which adds `refinerycms-testing` support by default when installing. Philip Arndt
- Set rails dependency to `~> 3.0.9`. Philip Arndt
- Re-enabled the magic `s3_backend` setting controlled by `ENV` variables. Philip Arndt
- `bin/refinerycms` installer now generates rails using `bundle exec` so that you can have multiple Rails versions installed and they won't clash. Philip Arndt
- Fixed problems with `rcov` and `simplecov` in Ruby 1.9.2. Joe Sak
- Make the catch-all pages route for marketable URLs be controlled by the configuration switch. Kyle Wilkinson
- See full list
1.0.0 [28 May 2011]
- New `::Refinery::Menu` API implemented which speeds up menu generation many times over. Philip Arndt
- Removed caching from menu because it's so much faster now. In future it will probably be added to `::Refinery::Menu` itself in a transparent manner. Philip Arndt
- Deprecated `Page#[]` in favour of `Page#content_for`, e.g. instead of `@page[:body]` use `@page.content_for(:body)`. Philip Arndt
- Noisily deprecated many other features that still function in 1.0.0 but won't be present in 1.1.0. Philip Arndt
- A hidden page can no longer mark the ancestor pages as selected in the menu. Philip Arndt
- Rcov added to `refinerycms-testing` gem. Rodrigo Dominguez
- See full list
0.9.9.22 [22 May 2011]
- Fixed issue introduced with `rake 0.9.0`. Philip Arndt
- Improved menu performance again including update to `awesome_nested_set 2.0`. Philip Arndt and Mark Haylock
- Supporting the new Google Analytics 'site speed' feature. David Jones
- Implemented `:translator` role which allows a particular user access only to translate pages. Philip Arndt
- Added support for `Dragonfly 0.9.0` which uses the 'fog' gem. Jesper Hvirring Henriksen
- Updated all `refinery/admin.js` functions to make use of 'initialised'. Mark Haylock
- Using SEO form from `seo_meta` inside pages' advanced options rather than having it duplicated in the Refinery CMS codebase too. Uģis Ozols
- See full list
0.9.9.21 [03 May 2011]
- Fixed issue with MySQL2 gem complaining about us being on Rails 3 by specifying `'~> 0.2.7'` in the Gemfile of a generated application. Philip Arndt
- `/registrations` is now `/users`. Philip Arndt
- Added Finnish translation. Veeti Paananen
- Allowed `data` and `data-` attributes in WYMeditor tags using HTML view. Philip Arndt
- See full list
0.9.9.20 [28 April 2011]
- Improved performance of the menu rendering. Philip Arndt
- Fixed UI to allow for how different languages display on the login screen. Marian André
- Vastly improved specs & spec coverage. Uģis Ozols
- Upgraded to `jQuery 1.5.2` and `Dragonfly 0.8.4`. Philip Arndt
- See full list
0.9.9.19 [22 April 2011]
- Removed `rdoc` dependency. Philip Arndt
- Migrate to stable Rails 3.0.7. Josef Šimánek
- Use `let()` in rspec specs. Uģis Ozols
- See full list
0.9.9.18 [16 April 2011]
- Fixed a backward incompatibility. Josef Šimánek
- Reduced calls to `SHOW TABLES` by updating `friendly_id_globalize3`. Philip Arndt
- Switched `/shared/_menu.html.erb` and `/shared/_menu_branch.html.erb` away from `render :partial` with `:collection`, speeding up menu 12~15%. Philip Arndt
- Fixed Refinery.root, Fixed generator templates, Added refinerycms-i18n generator to refinerycms generator if i18n is included. Mark Haylock
- Bumped Rails dependency to `~> 3.0.7.rc2`. Philip Arndt
- See full list
0.9.9.17 [15 April 2011]
- Mass assignment protection implemented. Andreas König
- Removed deprecated code to prepare for `1.0.0`. Uģis Ozols
- Added `Strip Non Ascii` preference to `has_friendly_id`. Marc Argent
- Bumped Rails dependency to `~> 3.0.7.rc1`. Philip Arndt
- Better support for models in modules for uncrudify. Josef Šimánek
- See full list
0.9.9.16 [7 April 2011]
- Improved resource picker. Will Marshall
- Improved robustness of `Page#expire_page_caching` for both `ActiveSupport::Cache::FileStore` and `ActiveSupport::Cache::MemoryStore`. Jeff Hall
- Optimised index sizes on MySQL. Ruslan Doroshenko
- Changed default cache store to `:memory_store`. Philip Arndt
- `rake db:migrate` and `rake db:rollback` now work consistently when migrations from other engines are in the mix. Vaughn Draughon
- Re-enabled cache when logged in; this avoids slowdown of the site when an admin is logged in. Mark Haylock
- See full list
0.9.9.15 [1 April 2011]
- Fixed asset caching of files in `public/stylesheets/`. Sergio Cambra
- All dependencies now have an absolute version dependency (e.g. '= 0.9.9.15' rather than '~> 0.9.9.15') to prevent Refinery auto-updating. Philip Arndt
- See full list
0.9.9.14 [31 March 2011]
- Added `refinery.before_inclusion` for running extra functionality just before Refinery attaches to Rails. Philip Arndt
- Renamed `refinery.on_attach` to `refinery.after_inclusion` to match `refinery.before_inclusion`. Philip Arndt
- Moved meta tag responsibility to `seo_meta` library. Philip Arndt
- Added HTML5 tag support to WYMeditor. Philip Arndt and Nick Hammond
- See full list
0.9.9.13 [28 March 2011]
- Forcing password reset when migrating from older versions of Devise (sigh). Philip Arndt
- Updated to `refinerycms-i18n 0.9.9.16` - please run `rails generate refinerycms_i18n`. Philip Arndt
- See full list
0.9.9.12 [27 March 2011]
- Removed `password_salt` field from users table and commented out `config.encryptor` in `config/initializers/devise.rb` to handle update to devise 1.2.0. Uģis Ozols
- See full list
0.9.9.11 [24 March 2011]
- Translated WYMeditor texts to Japanese. Hiro Asari
- Supporting `cucumber-rails 0.4.0`. Philip Arndt
- Added an option to link in the `page_title`, enabling easier breadcrumbs. Sergio Cambra
- Fixed support for `asset_file_path` in upcoming Rails 3.1. Philip Arndt
- Updated copyright notice to include the current year. David Jones
- Fixed site bar switch link. Philip Arndt
- Added support for translating Javascript strings. Philip Arndt
- Added `refinery.on_attach` for running extra functionality just after Refinery attaches to Rails. Functions similarly to `config.to_prepare`. Philip Arndt
- See full list
0.9.9.10 [17 March 2011]
- Excluded caching option for menus when logged in. Philip Arndt
- Fixed site bar translation logic. Philip Arndt
- Removed `config/settings.rb` file. Philip Arndt
- Added a default `features/support/paths.rb` file in the `Rails.root` for your paths. Philip Arndt
- See full list
0.9.9.9 [15 March 2011]
- Added Japanese translation. Hiro Asari
- Improved menu rendering performance. Philip Arndt
- Added caching to site menu and pages backend (DISABLED by default). Philip Arndt
- Added `Page#by_title` to filter page results by title using `Page::Translation`. Philip Arndt
- Added migration to remove already translated fields from the pages table. Philip Arndt
- See full list
0.9.9.8 [11 March 2011]
- Fixed several user interface bugs reported by Patrick Morrow. Philip Arndt
- Looser dependency on `moretea-awesome_nested_set` (now `~> 1.4`). Philip Arndt
- Corrected `ajax-loader.gif` path. Maurizio
- See full list
0.9.9.7 [10 March 2011]
- Added `:per_page` option to `crudify` for overriding the number of items to display per page with will_paginate. Josef Šimánek
- Deprecated `rake refinery:update` in favour of `rails generate refinerycms --update`. Philip Arndt
- Added `--skip-db` option to `bin/refinerycms` installer which doesn't automate any database creation/migration and skips the `rails generate refinerycms` generator. Philip Arndt
- Exchanged (help) links for the information.png 'refinery icon'. This will happen automatically if you used `refinery_help_tag`. Philip Arndt
- Added `xhr_paging` as an option in crudify which handles the server-side usage of the HTML5 History API. Philip Arndt
- Looser Bundler dependency (now `~> 1.0`). Terence Lee
- See full list
0.9.9.6 [7 March 2011]
- Fixed an issue that caused the installer to fail on some systems. Philip Arndt
- See full list
0.9.9.5 [7 March 2011]
- Added `<div class='inner'>` to `_content_page` for better control over CSS for each section. Please see 086abfcae2c83330346e28d1e40004cff8a27720 for what changed if this affects you. Stefan Mielke
- Menu performance improvements. David Reese
- Removed `--update` from `bin/refinerycms` because it's no longer relevant. Philip Arndt
- Added support for --ident in the installation task which uses ident authentication at the database level by commenting out the username and password credentials. Philip Arndt
- Changed the default `cache_store` to `:file_store` for better thread safety with passenger. Philip Arndt
- WYMeditor Internet Explorer improvements. Philip Arndt
- See full list
0.9.9.4 [24 February 2011]
- Added `doc/guides` for textile-based guides that power the guides at refinerycms.com/guides. Steven Heidel and Philip Arndt
- Allowed multiple resource pickers on one form. Phil Spitler
- Solved YAML parsing issues introduced by change to Psych. Aaron Patterson and Uģis Ozols
- Updated page to use a localized cache key if frontend translations are enabled. Bryan Mahoney
- Upgraded modernizr to version 1.7. Jon Roberts
- Fixed an issue with the 'add page parts' functionality inserting new parts in the wrong place. Philip Arndt
- See full list
0.9.9.3 [17 February 2011]
- Fixed faulty require statement that tried to load rack/cache before dragonfly. Philip Arndt
- See full list
0.9.9.2 [17 February 2011]
- Removed `activesupport` requirement from `bin/refinerycms`. Philip Arndt
- Fixed an issue in some browsers with a particular jQuery selector. Philip Arndt
- Modified some existing migrations to behave better when creating new applications. Philip Arndt
- Fixed `-u` and `-p` support for `bin/refinerycms`. Philip Arndt
- See full list
0.9.9.1 [15 February 2011]
- Fixed Firefox issue with WYMeditor. Amanda Wagener
- Gracefully exit `bin/refinerycms` on error. Alexandre Girard and Brian Stevens and Philip Arndt
- Added basic single table inheritance support to crudify. Ken Nordquist
- Removed most of the 0.9.8.9-specific `--update` logic in `bin/refinerycms`. Philip Arndt
- Added `refinerycms-testing` engine which reduces the main Gemfile complexity. Philip Arndt
- Split the project into 10 separately released gems that include their own dependencies. Philip Arndt
- New Vietnamese translation files added. Alex Nguyen and Stefan N and Mario Nguyen
- Improved JRuby support as well as the way that commands run in any ruby implementation. Hiro Asari
- See full list
0.9.9 [27 January 2011]
- Better, more semantic HTML5. Joe Sak
- Added `role` selection for `admin/users#edit`. Hez Ronningen
- Fixed WYMeditor bug regarding adding links, helped with persistent testing by Marko Hriberšek. Philip Arndt
- Better `RSpec` coverage. Joe Sak and Philip Arndt and Uģis Ozols and PeRo ICT Solutions
- Superusers now get access to all backend tabs by default. Philip Arndt
- Introduced LOLcat translation (yes, seriously) as an easter egg. Steven Heidel
- Fixed several missing translations. Johan Bruning
- Solved several i18n inconsistencies. Jonas Hartmann
- Made `UserPlugin` dependent on `User`, which solves a data redundancy problem. Maarten Hoogendoorn
- Fixed issue with finding where engines are located on the disk using `Plugin::pathname`. Lele
- Add `rescue_not_found` option to turn on/off 404 rendering. Ryan Bigg
- Full review of the French translations. Jérémie Horhant
- Now using `mail()` to send emails. J. Edward Dewyea
- Refactored backend HTML & CSS, reduced complexity and added a loading animation when you click Save on forms. Philip Arndt
- Improved the speed of the menu, especially related to scaling, through reusing collections rather than revisiting the database. Amanda Wagener
- Implemented an API for the `pages` form's tabs. David Jones
- Use the rails naming convention for translations that contain html markup. Escaping translations not marked as `html_safe` in the `refinery_help_tag` helper. Jérémie Horhant
- Full review of the Italian translations. Mirco Veltri
- Deprecated `/admin` in favour of `/refinery` and put in a message to display to the user when they use it. Philip Arndt
- Full review of the Russian translations as well as work with articles / genders in grammar. Semyon Perepelitsa
- Full review of routes and the Latvian translations. Uģis Ozols
- Implemented better support for `rails.js`, using standard `:method` and `:confirm` `link_to` options. Semyon Perepelitsa
- Locked jQuery to 1.4.2 and jQuery UI to 1.8.5, fixed errors with dialogues and tested. Philip Arndt and Phillip Miller and Sam Beam
- Added multiple file upload for images and resources using HTML5. Philip Arndt
- Deprecated `content_for :head` in favour of `content_for :meta`, `content_for :stylesheets` and `content_for :javascripts`. Philip Arndt
- Improved client-side responsiveness of backend and frontend. Philip Arndt
- No more RMagick dependency. Philip Arndt
- Added `rake refinery:override stylesheet=somefile` and `rake refinery:override javascript=somefile` commands to override stylesheets and javascripts. Oliver Ponder
- Restructured the project to remove `vendor/refinerycms` and put all engines in the application root. Kamil K. Lemański
- Force no resource caching on non-writable file systems (like Heroku). Philip Arndt
- Refinery can now attach itself to a Rails application simply by including the refinerycms gem in the `Gemfile`. Philip Arndt
- Added core support for `globalize3` so that pages can be translated into multiple languages. Philip Arndt and Maarten Hoogendoorn
- Refactored `group_by_date` into a helper method which is called in the view layer and not in the controller because it is entirely presentation. Philip Arndt
- Applied HTML5 history pagination to all core engines. Philip Arndt
- Converted translate calls to use `:scope`. Uģis Ozols
- Fixed issues where errors would only show up in English for some models and updated Russian translations. Semyon Perepelitsa
- Converted to devise for authentication, requiring password resets. Philip Arndt and Uģis Ozols
- Sped up WYMeditor load times. Philip Arndt
- Fixed several issues for Internet Explorer. Josef Šimánek
- Added installation option for Duostack hosting service. Philip Arndt and David E. Chen
- See full list
0.9.8.9 [21 December 2010]
- Fixed error in the inquiries engine seeds. Philip Arndt
- Separated each error message into its own `<li>`. Uģis Ozols
- Add `rescue_not_found` option to turn on/off 404 rendering. Ryan Bigg
- Add `:from` key to `UserMailer` for password reset. Earle Clubb
- See full list
0.9.8.8 [16 December 2010]
- Prevented ::Refinery::Setting from accessing its database table before it is created. Philip Arndt
- Added more options to `bin/refinerycms` like the ability to specify database username and password. Philip Arndt
- See full list
0.9.8.7 [15 December 2010]
- Fixed a problem with migration number clashes. Philip Arndt
- Fixed problems with `db:migrate` for a new app on Postgres. Jacob Buys
- Back-ported the changes made to the images dialogue which speed it up significantly. Philip Arndt
- Sort file names in the `refinery_engine` generator so attribute types don't get changed before `_form.html.erb` generation. Phil Spitler
- Added `approximate_ascii` setting, defaulted to true, for pages so that characters won't appear strangely in the address bar of some web browsers. Uģis Ozols
- See full list
0.9.8.6 [3 December 2010]
- Backported lots of functionality from 0.9.9 and later like:
- Fixed reordering for trees and non-trees Philip Arndt
- Better `RSpec` coverage. Joe Sak and Philip Arndt and Uģis Ozols and PeRo ICT Solutions
- Fixed issue with finding where engines are located on the disk using `Plugin::pathname`. Lele
- Improved the speed of the menu, especially related to scaling, through reusing collections rather than revisiting the database. Amanda Wagener
- No more RMagick dependency. Philip Arndt
- Added helper methods to expose some of the options in crud. David Jones
- See full list
0.9.8.5 21 September 2010
- Fixed an issue with the engine generator that was putting a comma in the wrong place, breaking the call to `crudify`. Maarten Hoogendoorn
- Made the delete messages consistent. Uģis Ozols
- `zh-CN` was overriding the en locale in the core locale file; fixed. Philip Arndt
- Changed verbiage from created to added, create to add as it describes it better for things like images. Philip Arndt
- `image_fu` no longer gives you the width and height of the image due to performance problems. Philip Arndt and David Jones
- Implemented a standardised API for the engine generator. The core now includes a standard engine install generator. Engines generate a readme file explaining how to build an engine as a gem. David Jones
- See full list
0.9.8.4 [17 September 2010]
- Recursive deletion of page parts. primerano
- Move around the default pages. Philip Arndt
- Extraction of windows check to `Refinery::WINDOWS`. Steven Heidel
- Updated the changelog for several previous releases. Steven Heidel
- Made the menu more flexible so that it can be used in many places in your layout without caching over the top of itself. Philip Arndt
- Added search feature to Refinery Settings. Matt McMahand
- Ensure that in crudify we use :per_page properly for will_paginate. Philip Arndt
- Reduce the number of routes that we respond to in the pages engine as they were unused. Philip Arndt
- Fixed a case where page links weren't generating properly when inside an engine, such as the news engine, which made use of params[:id]. Took a lot of perseverance on the part of Hez - thank you very much Hez! Hez Ronningen and Philip Arndt
- See full list
0.9.8.3 [14 September 2010]
- German translation improvements. Andre Lohan
- Fix bug with bin/refinerycms and windows commands. Philip Arndt
- DRY up crudify and also switch to ARel. Philip Arndt
- Several fixes to make things much easier on windows. Philip Arndt
- See full list
0.9.8.2 [13 September 2010]
- Update readme.md. David Jones
- Speed improvements to menu with nested_set. Maarten Hoogendoorn
- More speed improvements by optimising slugs. Philip Arndt
- Fix -h flag on bin/refinerycms to display the help. Steven Heidel
- See full list
0.9.8.1 [9 September 2010]
- Convert to awesome_nested_set. Maarten Hoogendoorn and Philip Arndt
- Allow passing -g to the bin task for extra gems. Tomás Senart
- Update documentation for engines, not plugins. David Jones
- Several more documentation fixes. Steven Heidel
- Better use of dragonfly resizing. Philip Arndt
- Partial Latvian translation. Uģis Ozols
- Review Portugese translation. Kivanio Barbosa
- Bugfix with wymeditor in the engine generator. Karmen Blake
- Split application_helper into smaller, more usable files. Philip Arndt
- Move features and specs to each engine directory. Philip Arndt
- Bugfixes to ensure that reordering works under awesome_nested_set. Maarten Hoogendoorn and Philip Arndt
- Update engines to not have a special :require in the Gemfile. Johan Bruning
- Make cache sweepers work. Philip Arndt
- See full list
0.9.8 [30 August 2010]
- Rails 3 support!
- See our blog post
- See full list
0.9.7.13 [23 August 2010]
- Russian language support (RU). Sun
- We <3 HTML5 (better supported HTML5 semantics) Joe Sak and Philip Arndt
- Fixed issue with Refinery's 404 page. Philip Arndt
- Fixed recent inquiries display on dashboard when HTML present. Steven Heidel
- Better dutch (NL) translations. Michael van Rooijen
- Fixed for IE and added fixes to WYMeditor from the core project. Philip Arndt
- Added pagination for search results to the plugin generator. Amanda Wagener
- See full list
0.9.7.12 [11 August 2010]
- Smoothed the sortable list in the admin UI. Joe Sak
- Binding link dialogue URL checker to paste action. Joe Sak
- Kill hidden overflow on dialogues for smaller browser windows. Joe Sak and Philip Arndt
- Refactored the parse_branch method to speed up reordering on the server. Joshua Davey
- Running refinerycms with -v or --version will now output the version number. Steven Heidel
- Made the core codebase not rely so heavily on @page[:body] by adding Page.default_parts and using .first on that instead. Philip Arndt
- See full list
0.9.7.11 [07 August 2010]
- Removed app/controllers/application.rb due to its serious deprecation. Fixed deprecations in how we use acts_as_indexed. Philip Arndt
- Added passing cucumber features for search for: Uģis Ozols
- Images
- Files
- Inquiries
- Pages
- Moved HTML5 enabling script to a partial so that IE always runs it first. Philip Arndt
- Fixed some invalid HTML. Bo Frederiksen
- Added Danish translation for WYMeditor. Bo Frederiksen
- Fixes for Tooltips Philip Arndt
- Tooltips were not showing in dialogues, they now are.
- Tooltips would not position properly above links, they now do.
- The Tooltips' nibs (the arrow) would not sit properly centered above the element if the tooltip had to move for the browser window size, they now do.
- Lots of fixes for translations. Uģis Ozols
- Fix XSS vulnerability on page meta information by escaping the relevant fields properly David Jones
- Ensure that the generator script grabs the first attribute that is a string, not just the first attribute, when choosing the field for Dashboard activity. Joe Sak
- Updated json-pure to 1.4.5, now using the actual gem. Philip Arndt
- See full list
0.9.7.10 [02 August 2010]
- Added options to site_bar partial to allow particular components to be disabled (CSS, JS, jQuery or cornering script) so that they don't interfere with these already being included in the theme. Philip Arndt
- Fixed the schema file as it was invalid somehow. Steven Heidel
- Made search more consistent and added it to Spam/Ham. Uģis Ozols
- Fixed a bug with adding new resources. Steven Heidel
- Fixed a range of issues with translation keys and grammar between different languages. Uģis Ozols
- See full list
0.9.7.9 [30 July 2010]
- Added a theme generator to create the basic file structure of a new theme. David Jones and Levi Cole
- Renamed script/generate refinery to script/generate refinery_plugin. David Jones
- Add deprecation notice to script/generate refinery. David Jones
- Updated documentation to reflect new generator changes. David Jones
- Added tests for both plugin and theme generators. David Jones and Levi Cole
- Refactored the refinerycms & refinery-upgrade-097-to-097 tasks to make better use of Pathname. Philip Arndt
- Added more cucumber features and tagged existing ones. Philip Arndt, James Fiderlick and Steven Heidel
- Removed mysterious page_translations table if you had it. Philip Arndt
- Added workaround for tests that involve dialogues. Uģis Ozols
- Added as default the ability for forms to know whether they are inside a modal / dialog. Philip Arndt
- See full list
0.9.7.8 [23 July 2010]
- Refactored Amazon S3 and gem installation to make it easier to install on Heroku. Steven Heidel
- Made project more testable. Renamed rake refinery:test_all to rake test:refinery Philip Arndt
- Documentation improved David Jones, Philip Arndt and Steven Heidel
- Installed spork for use with systems that support forking for performance improvements. Doesn't run on Windows. Philip Arndt and James Fiderlick
- Improvements and new translations for Norsk Bokmål localisation. Ken Paulsen
- Ensured that ::Refinery::Setting restrictions work properly using a before_save handler. Joe Sak
- Updated jquery-html5-placeholder-shim to latest version. Amanda Wagener
- See full list
0.9.7.7 [20 July 2010]
- Fixed an issue in the plugin generator that saw locales being created with singular_name not the interpreted version. Philip Arndt and Joe Sak
- Fixed an issue with non-MySQL databases. Lee Irving
- Refactored versioning and .gitignore file so that both are easier to follow and use. Steven Heidel
- Added rake refinery:test_all command to run all tests Refinery has. Steven Heidel
- Fixed deprecation warnings with translate rake tasks. Steven Heidel
- Bugfixes, some IE compatibility. Philip Arndt
- Fix syntax errors in existing resource dialog. David Jones
- Identified and fixed a positioning bug in dialogues Joe Sak and Philip Arndt
- Fixed issue that was causing Refinery to load in rake tasks twice if they lived under "#{Rails.root}/vendor/plugins". David Jones and Philip Arndt
- See full list
0.9.7.6 [15 July 2010]
- Bugfixes, fixed some failing tests. Philip Arndt
- More pt-BR translation keys translated. Kivanio Barbosa
- Locked gems using Gemfile.lock. David Jones
- Changed 'refinery' task to 'refinerycms' as that is our gem's name. Steven Heidel
- Fixed bug where settings were still considered restricted if NULL. Steven Heidel
- Ensures that bundler is available before creating an application from a gem. Philip Arndt
- Application generator (from gem) and application upgrade bin task. (from 0.9.6) is now Ruby 1.9.2 compatible. Philip Arndt
- bin/refinery-upgrade-from-096-to-097 will no longer allow you to run it if Gemfile is present and thus signifying an upgraded app. Philip Arndt
- Cleaned up syntax, changed CSS involving dialogues. Philip Arndt
- See full list
0.9.7.5 [08 July 2010]
- Wrote an upgrade task for migrating from 0.9.6.x releases of Refinery CMS. Just run refinery-update-096-to-097 inside your application's directory. Philip Arndt
- Improved code used to include gem rake tasks and script/generate tasks into the Refinery application to fix issue with these tasks not being found. Philip Arndt
- Fixed a broken migration that would mean pages were missing upon upgrading. Jesper Hvirring Henriksen
- More pt-BR translation keys translated. Kivanio Barbosa
- See full list
0.9.7.4 [07 July 2010]
- Fixed critical issue in the i18n routing pattern that was matching prefixes like /news/ as a locale incorrectly. Philip Arndt
- See full list
0.9.7.3 [07 July 2010]
- Falls back to default locale when a translation key can not be located in the current locale, only in production mode. Philip Arndt
- Fixed issue creating a Refinery site using bin/refinery where directory paths contained spaces. Philip Arndt
- Fixed issue when using script/generate refinery surrounding the migration incorrectly using the plugin's title. Philip Arndt
- Added verbose=true option when running rake refinery:update that prints out everything it's doing. Philip Arndt
- See full list
0.9.7.2 [06 July 2010]
- Bugfixes with users and roles. Philip Arndt and Amanda Wagener
- Fixed the rake translate:lost_in_translation LOCALE=en and rake translate:lost_in_translation_all tasks so that they accurately reflect the missing i18n translation keys. Philip Arndt
- Refactored routing of i18n to allow different default frontend and backend locales. Philip Arndt
- Added better grammar support for some i18n. Halan Pinheiro
- Improved output of rake refinery:update task and removed bin/refinery-update-core task. Steven Heidel
- Set config.ru to run in production RAILS_ENV by default. Philip Arndt
- See full list
0.9.7.1 [03 July 2010]
- Bugfixes in the gem installation method process. Philip Arndt
- Made installing from gem faster. Philip Arndt
- Provided example files for sqlite3, mysql and postgresql. Philip Arndt
- Created option for specifying a database adapter (sqlite3, mysql or postgresql) when creating from Gem. Philip Arndt
- Other bugfixes including UI consistency around signup. Philip Arndt
- See full list
0.9.7 [02 July 2010]
- Full backend internationalisation (i18n) support and frontend i18n routing. Maarten Hoogendoorn and Philip Arndt and many others
- Marketable URLs, such as "/contact". Joshua Davey and Joe Sak.
- Switched to bundler and rack. Alex Coles and Philip Arndt
- Added options to Refinery Settings :restricted, :scoping, :callback_proc_as_string. Steven Heidel and Philip Arndt
- Added caching abilities to frontend and to ::Refinery::Setting to drastically speed up the application under certain conditions. Philip Arndt
- Added spam filtering to contact form. David Jones
- Full Refinery UI redesign. Resolve Digital
- User Role support. Amanda Wagener and Philip Arndt
- See full list
- See blog post
0.9.6.34 [09 May 2010]
- Bugfixes.
0.9.6.33 [06 May 2010]
- Bugfixes.
0.9.6.32 [05 May 2010]
- Bugfixes.
0.9.6.31 [19 April 2010]
- Bugfixes.
0.9.6.30 [15 April 2010]
- Bugfixes.
0.9.6.29 [14 April 2010]
- Bugfixes.
0.9.6.28 [12 April 2010]
- Bugfixes.
0.9.6.27 [12 April 2010]
- Bugfixes.
0.9.6.26 [07 April 2010]
- Bugfixes.
0.9.6.25 [01 April 2010]
- Bugfixes.
0.9.6.24 [26 March 2010]
- Bugfixes.
0.9.6.23 [26 March 2010]
- Bugfixes.
0.9.6.22 [26 March 2010]
- Bugfixes.
0.9.6.21 [23 March 2010]
- Bugfixes.
0.9.6.19 [03 March 2010]
- Bugfixes.
0.9.6.18 [02 March 2010]
- Bugfixes.
0.9.6.17 [02 March 2010]
- Bugfixes.
0.9.6.16 [02 March 2010]
- Bugfixes.
0.9.6.15 [01 March 2010]
- Bugfixes.
0.9.6.14 [24 February 2010]
- Bugfixes.
0.9.6.13 [23 February 2010]
- Bugfixes.
0.9.6.12 [16 February 2010]
- Bugfixes.
0.9.6.11 [16 February 2010]
- Bugfixes.
0.9.6.10 [15 February 2010]
- Bugfixes.
0.9.6.9 [15 February 2010]
- Bugfixes.
0.9.6.8 [14 February 2010]
- Bugfixes.
0.9.6.7 [10 February 2010]
- Bugfixes.
0.9.6.6 [10 February 2010]
- Bugfixes.
0.9.6.5 [08 February 2010]
- Bugfixes.
0.9.6.4 [07 February 2010]
- Bugfixes.
0.9.6.3 [07 February 2010]
- Bugfixes.
0.9.6.2 [04 February 2010]
- Bugfixes.
0.9.6.1 [04 February 2010]
- Bugfixes.
0.9.6 [04 February 2010]
- Minor release.
0.9.5.31 [27 January 2010]
- Bugfixes.
0.9.5.30 [24 January 2010]
- Bugfixes.
0.9.5.29 [23 December 2009]
- Bugfixes.
0.9.5.28 [17 December 2009]
- Bugfixes.
0.9.5.27 [16 December 2009]
- Bugfixes.
0.9.5.26 [13 December 2009]
- Bugfixes.
0.9.5.25 [09 December 2009]
- Bugfixes.
0.9.5.24 [08 December 2009]
- Bugfixes.
0.9.5.23 [07 December 2009]
- Bugfixes.
0.9.5.22 [07 December 2009]
- Bugfixes.
0.9.5.21 [06 December 2009]
- Bugfixes.
0.9.5.20 [03 December 2009]
- Bugfixes.
0.9.5.19 [30 November 2009]
- Bugfixes.
0.9.5.18 [29 November 2009]
- Bugfixes.
0.9.5.16 [26 November 2009]
- Bugfixes.
0.9.5.15 [22 November 2009]
- Bugfixes.
0.9.5.14 [19 November 2009]
- Bugfixes.
0.9.5.13 [18 November 2009]
- Bugfixes.
0.9.5.12 [18 November 2009]
- Bugfixes.
0.9.5.11 [18 November 2009]
- Bugfixes.
0.9.5.10 [17 November 2009]
- Bugfixes.
0.9.5.9 [16 November 2009]
- Bugfixes.
0.9.5.8 [15 November 2009]
- Bugfixes.
0.9.5.7 [09 November 2009]
- Bugfixes.
0.9.5.6 [09 November 2009]
- Bugfixes.
0.9.5.5 [08 November 2009]
- Bugfixes.
0.9.5.4 [04 November 2009]
- Bugfixes.
0.9.5.3 [04 November 2009]
- Bugfixes.
0.9.5.2 [04 November 2009]
- Bugfixes.
0.9.5.1 [03 November 2009]
- Bugfixes.
0.9.5 [03 November 2009]
- Minor release.
0.9.4.4 [29 October 2009]
- Bugfixes.
0.9.4.3 [19 October 2009]
- Bugfixes.
0.9.4.2 [19 October 2009]
- Bugfixes.
0.9.4 [15 October 2009]
- Minor release.
0.9.3 [11 October 2009]
- Optimise loading of WYM Editors.
- Supported more plugins' menu matches.
0.9.2.2 [08 October 2009]
- Bugfixes.
0.9.2.1 [08 October 2009]
- Fix bug with using instance_methods vs using methods to detect whether friendly_id is present.
0.9.2 [08 October 2009]
- Update rails gem requirement to 2.3.4.
0.9.1.2 [07 October 2009]
- Updated JS libraries and added lots of convenience methods.
0.9.1.1 [05 October 2009]
- HTML & CSS changes.
0.9.1 [04 October 2009]
- Bugfixes.
- Renamed project from Refinery to refinerycms and released as a gem.
0.9 [29 May 2009]
- Initial public release.
screenCountChanged signal does not trigger second time when three monitors are used
Hi
I created a minimal application in Qt 5.9.1 (tried also Qt 5.11.1) on Windows 10. It does only one thing - it connects the screenCountChanged signal to a slot.
Then I monitored the number of screens detected when the slot was called.
When I use only two monitors, everything works fine when I disconnect and reconnect one of them. But when I use three, the slot is called only once - the first time. It does not matter whether I first connect or disconnect the third monitor; the first time it is triggered OK. When I connect or disconnect it again, the slot is not called anymore.
Is this a Qt bug, or am I not using it correctly?
Edit:
I pasted the code below. I put a breakpoint in screenCountChanged_slot() just to monitor when the slot is called, and ran the code in the debugger:
dummyWidget.h
#include <QtWidgets/QMainWindow>
#include "ui_dummyWidget.h"

class dummyWidget : public QMainWindow
{
    Q_OBJECT

public:
    dummyWidget(QWidget *parent = Q_NULLPTR);

private:
    Ui::dummyWidgetClass ui;

public slots:
    void screenCountChanged_slot(int newScreenCount);
};
dummyWidget.cpp
#include "dummyWidget.h"
#include "QApplication"
#include "QDesktopWidget"

dummyWidget::dummyWidget(QWidget *parent) : QMainWindow(parent)
{
    ui.setupUi(this);
    connect(QApplication::desktop(), SIGNAL(screenCountChanged(int)),
            this, SLOT(screenCountChanged_slot(int)));
}

void dummyWidget::screenCountChanged_slot(int newScreenCount)
{
    static int i = 0;
    i++;
}
- SGaist Lifetime Qt Champion last edited by
Hi and welcome to devnet,
Your zip is not accessible.
In any case, you should consider updating to the latest version of the 5.9 series if not 5.11 to check whether the behaviour has changed in between.
I have tried also with Qt 5.11.1 and the behaviour hasn't changed. The bug still persists.
- SGaist Lifetime Qt Champion last edited by
In that case, you should check the bug report system to see if there's anything related. If not, then please consider creating a new report providing your example as well as your complete system specification.
By the way, what graphics card are you using ?
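A possible workaround, sketched under the assumption that the QScreen-based signals (added in Qt 5.4) are usable here: QGuiApplication exposes per-screen screenAdded/screenRemoved signals, which may behave better for monitor hot-plugging than QDesktopWidget::screenCountChanged.

#include <QGuiApplication>
#include <QScreen>
#include <QDebug>

// Sketch only: watch screens via QGuiApplication instead of
// QDesktopWidget. Call once after the application is constructed.
void watchScreens()
{
    QObject::connect(qApp, &QGuiApplication::screenAdded,
                     [](QScreen *screen) {
        qDebug() << "added:" << screen->name()
                 << "count:" << QGuiApplication::screens().count();
    });
    QObject::connect(qApp, &QGuiApplication::screenRemoved,
                     [](QScreen *screen) {
        qDebug() << "removed:" << screen->name()
                 << "count:" << QGuiApplication::screens().count();
    });
}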
Hi guys, I think I have a very simple problem. Basically, I want a collision script so that when my player touches the spikes, he dies. I have made a script but it just won't work; I am a beginner, by the way. Please, it would be nice if you could help.
Thank You
using UnityEngine;
using System.Collections;
public class PlayerDie : MonoBehaviour {
// Use this for initialization
void Start () {
}
// Update is called once per frame
void Update () {
}
void OnCollisionEnter(Collision collision){
if (collision.gameObject.name == "Spikes") {
Destroy(collision.gameObject);
}
}
}
Debug to check whether the collision actually happens, like:
void OnCollisionEnter(Collision collision)
{
Debug.Log(collision.gameObject.name);
if (collision.gameObject.name == "Spikes") {
Destroy(collision.gameObject);
}
}
Answer by clunk47 · Feb 17, 2014 at 06:54 PM
Looks like you're using Rigidbody2D. For some reason w/ the 2D stuff, I can only get collisions to work correctly if both objects included have Collider2D AND Rigidbody2D. You also need to use OnCollisionEnter2D, not OnCollisionEnter.
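Pulling that together, a minimal sketch of the 2D version (the "Spikes" name follows the question; the respawn point is an assumption):

```csharp
using UnityEngine;

public class PlayerDie : MonoBehaviour
{
    // Assumed respawn position; set it in the Inspector.
    public Vector3 respawnPoint = Vector3.zero;

    // 2D physics uses OnCollisionEnter2D. Both objects need a
    // Collider2D (not set as a trigger), and at least the moving one
    // needs a Rigidbody2D.
    void OnCollisionEnter2D(Collision2D collision)
    {
        if (collision.gameObject.name == "Spikes")
        {
            // "Kill" the player by sending it back to the respawn point.
            transform.position = respawnPoint;
        }
    }
}
```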
YES!! It worked, thank you. I just need to respawn the player now.
Thanks for the heads up; I have not done anything with Unity's 2D framework, but it's good to know beforehand. Gusic, mark this question as answered.
Answer by poncho · Feb 17, 2014 at 05:35 PM
I am not sure from this code whether the player or the spikes have the PlayerDie script. Since the collision check expects "Spikes", I will assume that the player has the script. Then collision.gameObject will be the spikes, meaning that if the player collides with the spikes, the object to be destroyed would be the spikes. If you want the player to be destroyed instead, use Destroy(gameObject); in case the script is attached to the player.
I changed it to gameObject but the player still won't be destroyed. I have attached the script to the player, so that when the player hits the spikes he dies.
Make sure both gameObjects have colliders and are not triggers.
THANK YOU! I had 'Is Trigger' enabled on my obstacle and you finally fixed it! :D
It still doesn't work; the player still doesn't get destroyed.
Make sure the thing with the script has a rigidbody attached. Also see this:
My player does have a rigidbody...
Mmm, static types
coffee-to-typescript transpiles your CoffeeScript code in to TypeScript. Its goal is to help you move your code base to TypeScript, so the code it generates is closer to idiomatic than 100% semantically equivalent. The translation is not perfect, and you will need to go through your project by hand to fix errors tsc or your unit tests flag.
sudo npm install -g coffee-script-to-typescript
coffee-to-typescript -cma your/files/*.coffee
-d silences warnings
-r [PATH] adds `/// <reference path="[PATH]" />`
-ma preserves comments from the CoffeeScript to the TypeScript
note
-ma can cause crashes in the transpiler
TypeScript is much more strict than CoffeeScript, and the TypeScript compiler will issue many, many type errors for you to fix by hand.
As of TypeScript 0.9.1, fields and bound methods are set after an object's super constructor runs. This will break code, particularly code written against Backbone.js.
The transpiler generates many unnecessary return statements. These will typically be something like returning the results of a function that returns void, e.g. return console.log(...);. In CoffeeScript, the last expression in a function is automatically returned, so in the generated code, a return is added at every last expression. These, even where unnecessary or wrong-seeming, are 'correct.' One way to prevent this is to add an empty return statement to the end of a function.
The transpiler may add unnecessary instance variables.
For (comprehensions and loops) have been converted to .filter(), .forEach(), and .map() where appropriate and straightforward. However, in CoffeeScript, you can modify an object being looped over, while you may not be able to with .filter(), .forEach(), and .map(). Furthermore, the objects being looped over may be non-arrays that pretend to be arrays (like JQuery), and thus not have .filter/.map/.forEach defined.
TypeScript does not allow you to export a subclass of an unexported class, while CoffeeScript does. In files where a single class is exported, this may require manual fixing to export both classes. However, you will then need to import the classes as import class = require("class").class (assuming the classname is the same as the modulename/filename) in all files that use it. Luckily, tsc should be able to tell you when and where you're importing wrong.
The transpiler annotates every default parameter as type any. This is specifically to stop tsc from thinking that an empty default options hash passed to a function (e.g. fn(args, o = {}){ ... o.something ... }) should be of empty type, and thus complain on o.something. As is appropriate, you should define interfaces for the os as above, and generally annotate function parameters with appropriate types.
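For instance, a hedged sketch of replacing the transpiler's any annotation with an interface (the names here are invented for illustration):

```typescript
interface SaveOptions {
  overwrite?: boolean;
  encoding?: string;
}

// With the options hash typed, tsc checks property access instead of
// silently allowing anything, as `any` would.
function resolveOptions(o: SaveOptions = {}): { encoding: string; overwrite: boolean } {
  return {
    encoding: o.encoding !== undefined ? o.encoding : "utf-8",
    overwrite: o.overwrite === true
  };
}

console.log(resolveOptions({ overwrite: true }));
// → { encoding: 'utf-8', overwrite: true }
```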
[a...b] is compiled to _.range(a, b) because TypeScript doesn't have range literals. Unfortunately, _.range only goes from low to high (_.range(9, 0) -> []), while CoffeeScript will count down ([9...0] -> [9, 8, 7, 6, 5, 4, 3, 2, 1]). This may cause regressions, and unit testing should be used to fix this.
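One hedged way to guard against that regression is a small helper with CoffeeScript's countdown semantics, used in place of _.range (the helper name is invented):

```javascript
// Exclusive range that counts down as well as up, matching
// CoffeeScript's [a...b] semantics.
function exclusiveRange(a, b) {
  const step = a <= b ? 1 : -1;
  const out = [];
  for (let i = a; i !== b; i += step) {
    out.push(i);
  }
  return out;
}

console.log(exclusiveRange(0, 3)); // → [0, 1, 2]
console.log(exclusiveRange(9, 0)); // → [9, 8, 7, 6, 5, 4, 3, 2, 1]
```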
The transpiler adds a dependency on underscore for some utilities.
Funky function signatures may not compile right away. Rather than have them compile in to semantically equivalent JavaScript (i.e. what the coffee compiler usually does), they are often translated to approximate the TypeScript. For example, ... varargs arguments are compiled to the same in TypeScript, even though tsc rejects them in all but the last place. Similarly, default parameters are translated to TypeScript default parameters, even though CoffeeScript lets you put them in any order, while TypeScript restricts you to putting optional parameters after all other parameters. Lastly, there may be some expressions used as default parameter values that are not allowed in TypeScript.
This is done intentionally to force the code to move to idiomatic TypeScript, rather than just compile.
If you try to import handlebars files, the TypeScript compiler will complain that you are importing a file that doesn't exist. To fix this, copy the handlebars.d.ts file in this project next to every .hbs file. For example, for views/main.hbs, you could cp coffee-script-to-typescript/handlebars.d.ts views/main.d.ts.
Processing Things in C# with the Process Class
How do you run processes under Windows?
To many, this might seem like a bit of a strange question, because you're used to starting Explorer, finding the application or process you want to run, and then clicking it to start it. You might also be used to using the command line and typing in the name of something to run, or you might never venture outside the Start menu. Whichever way you do it, you're starting a process, and that process will run and perform some desired function.
Under Linux and most Linux-like operating systems, there's a strong philosophy of using many small processes or tools to perform everything needed for one larger process. For example, if you were searching for a file on Windows, you might start a copy of Explorer, and type the name of a file in the Windows Search field. On a Linux-based machine, you would be more likely to run a process that gets a list of files. Then, you'd pass that list of files to a process that filters out only the names you're interested in, and then that filtering process would pass the final list to a display process in order to show you the results.
This way of doing things leads to a lot of re-use, not just of code but of individual processes and small-scale tools. If you already have a tool that can process zip files, why do you need another? In many cases, desktop applications are nothing more than graphical wrappers around command line tools that perform the actual underlying task.
As you might imagine, this also means that, in general, you don't use as much disk space because of the re-usable nature of things.
That's All Well and Good but Why Are You Going on about Linux in a .NET Article?
Well, many people may not realize this, but you can do the same thing with .NET.
Take your copy of Visual Studio, for example. What do you see? An editor, build tool, and language compiler all rolled into one, right?
Well, it might come as a surprise that the actual compiler is installed when you install the .NET framework & Runtime; it's called 'csc.exe'. Likewise, the build tool is also installed with the .NET framework and that's called 'msbuild.exe'. The only part of the three parts mentioned that's actually built into Visual Studio is the Integrated Editing Environment; for the other two functions, it calls out to csc.exe and msbuild.exe as needed, and then acts on the output those two programs produce.
It's very easy to do this kind of thing in your own applications by using the .NET process class. The process class lives in the 'System.Diagnostics' namespace and, as well as being used to start and run other processes, it also can be used to collect information about processes that are currently already running in the system.
The following C# code shows how you might use the process class to start running a copy of Notepad.
Process myProcess = new Process();
myProcess.StartInfo.UseShellExecute = false;
myProcess.StartInfo.FileName = "c:\\windows\\notepad.exe";
myProcess.Start();
If you add this into a console mode, WinForms, or WPF application, when the code executes you should find that "Notepad.exe" springs to life on your PC. Running Notepad, however, is perhaps not the best example, so let's have a look at something else we might want to do.
If you notice in the previous code sample, Line 2 sets a property called 'UseShellExecute' to false. If you set this to true and set the file name to a normally non-runnable file, for example:
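A minimal sketch of such a change, with a placeholder URL:

```csharp
// Sketch: with UseShellExecute enabled, a non-EXE "file name" such as
// a URL is handed to its default handler - here, the default browser.
myProcess.StartInfo.UseShellExecute = true;
myProcess.StartInfo.FileName = "http://www.codeguru.com/";
myProcess.Start();
```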
And then run the code again, you should find that this time, your default browser starts up, and loads the named page.
When you choose to use "Shell Execute", you're asking the Windows Operating System to activate the default handler for a given file. If that file is an EXE, the process is run just as normal. If the file is a non-EXE file, Windows looks for the default program (which, in the previous example, was a web browser) to run it.
You can use this, for example, to launch an application's web page, or load up a PDF manual into a default PDF viewer. More than that, you can use it in the same manner as Linux programs might, by running command line programs behind the scenes, capturing the program's output, and then acting on it in some way. Who knows? You might even write the next Visual Studio.
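As a hedged sketch of that behind-the-scenes pattern (the command run here, cmd.exe /c dir, is just an example):

```csharp
using System;
using System.Diagnostics;

class CaptureOutput
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "cmd.exe",
            Arguments = "/c dir",
            UseShellExecute = false,          // required for redirection
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };

        using (Process p = Process.Start(psi))
        {
            // Read everything the child wrote to stdout, then act on it.
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            Console.WriteLine(output.Length + " characters captured");
        }
    }
}
```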
Passionate about .NET and want to share a tip/trick? Or, is there something you'd like to see covered that you want to know more about? Come and find me on Twitter as @shawty_ds or leave me a comment in the box below and, if I can, I'll do a future post on the subject.
C library function - rename()
Description
The C library function int rename(const char *old_filename, const char *new_filename) causes the filename referred to by old_filename to be changed to new_filename.
Declaration
Following is the declaration for rename() function.
int rename(const char *old_filename, const char *new_filename)
Parameters
old_filename − This is the C string containing the name of the file to be renamed and/or moved.
new_filename − This is the C string containing the new name for the file.
Return Value
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
Example
The following example shows the usage of rename() function.
#include <stdio.h> int main () { int ret; char oldname[] = "file.txt"; char newname[] = "newfile.txt"; ret = rename(oldname, newname); if(ret == 0) { printf("File renamed successfully"); } else { printf("Error: unable to rename the file"); } return(0); }
Let us assume we have a text file file.txt, having some content. So, we are going to rename this file, using the above program. Let us compile and run the above program to produce the following message and the file will be renamed to newfile.txt file.
File renamed successfully
Backport #6307
REXML parser does not parse valid XML (The 'xml' prefix must not be bound to any other namespace)
[ruby-core:44397]
Status:
Closed
Priority:
Normal
Assignee:
Description
Attached is an example file which is not parsed by REXML parser, generating an error:
The 'xml' prefix must not be bound to any other namespace ()
The problem is in these following lines:
attrs.each { |a,b,c,d,e|
  if b == "xmlns"
    if c == "xml"
      if d != ""
        msg = "The 'xml' prefix must not be bound to any other namespace "+
          "()"
        raise REXML::ParseException.new( msg, @source, self )
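For reference, a hedged sketch of the kind of input involved: a document that binds the reserved xml prefix to its single permitted URI. This is valid XML, yet the check above rejected it in affected versions (a fixed parser accepts it):

```ruby
require "rexml/document"

source = %(<root xmlns:xml="http://www.w3.org/XML/1998/namespace" xml:lang="en"/>)

doc = REXML::Document.new(source)     # raised ParseException before the fix
puts doc.root.attributes["xml:lang"]  # => "en"
```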
There is a description of this problem also in:
Files
Associated revisions
History
Updated by mame (Yusuke Endoh) over 7 years ago
- Tracker changed from Bug to Backport
- Project changed from Ruby master to Backport193
- Status changed from Open to Assigned
- Assignee set to naruse (Yui NARUSE)
This is already fixed at r34419 in trunk.
Moving to backport tracker.
Thanks,
--
Yusuke Endoh mame@tsg.ne.jp
Updated by naruse (Yui NARUSE) over 7 years ago
- Status changed from Assigned to Closed
- % Done changed from 0 to 100
This issue was solved with changeset r35365.
Aleksey, thank you for reporting this issue.
Your contribution to Ruby is greatly appreciated.
May Ruby be with you.
merge revision(s) 34419: [Backport #6307]
* lib/rexml/parsers/baseparser.rb, test/rexml/test_namespace.rb: fix the default xml namespace URI validation. [ruby-dev:45169] [Bug #5956] Reported by Miho Hiramatsu. Thanks!!!
05 March 2008 23:59 [Source: ICIS news]
HOUSTON (ICIS news)--Two US acrylonitrile-butadiene-styrene (ABS) suppliers have cited rising feedstock costs in proposing 5 cents/lb ($110/tonne) price increases for injection-grade resins, market sources said on Wednesday.
A BASF representative said the company proposed a 5 cent/lb (€73/tonne) increase to take effect 13 March or as contracts permit. The company needs the increase to offset price hikes in feedstocks acrylonitrile (ACN) and butadiene (BD), the representative said.
Two market sources said SABIC nominated a similar price increase for April for injection-grade material to offset raw materials costs. SABIC representatives declined to confirm or deny price-change proposals.
“I don’t think the buyers will like it, but I think they’ll get” at least part of the increase, one buyer said. “That’s because of higher feedstock costs and Dow’s leaving the market.”
Dow Chemical’s exit from the non-automotive ABS market on 1 February shortened supply, buyers said, leaving less room for them to manoeuvre around price increases.
Medium-grade injection ABS bulk prices are currently 118-124 cents/lb ($2,601-2,734/tonne), and high-grade injection bulk prices 122-129 cents/lb ($2,690-2,844), according to global chemical market intelligence service ICIS pricing.
($1.00=€0
http://www.icis.com/Articles/2008/03/05/9106187/us-abs-producers-seek-5-centlb-price-hikes.html
Hello,
I’m facing this strange problem of controller_manager.
Note: I did follow the whole process in ROS_Control and set up the panda arm in Gazebo. This setup had been working fine for more than 3 months on the PC in my lab; suddenly, last week, I got the error stated in the title of the question. Also note that the same setup is still working on my laptop.
I have previously installed and recently updated below packages:
1) gazebo-ros-pkgs and gazebo-ros-control
2) ros-kinetic-ros-control and ros-kinetic-ros-controllers
3) ros-kinetic-gazebo-ros-control and ros-kinetic-diff-drive-controller as suggested here
I have also checked many links here on ROS Answers about the same question and tried their suggestions, but nothing worked. I have also checked for namespacing issues, and I have namespaced everything correctly.
If you would like to see the URDF and ROS control files, they can be found below:
1) Here is the main urdf file and here is the urdf file which includes transmission and gazebo ros control plugin
2) Here is the launch file for controller
Can somebody help me in this?
Thanks, Vishal
https://answers.ros.org/questions/317919/revisions/
This tutorial classifies movie reviews as positive or negative using the text of the review. It uses tf.keras, a high-level API to build and train models in TensorFlow, and TensorFlow Hub, a library and platform for transfer learning. For a more advanced text classification tutorial using tf.keras, see the MLCC Text Classification Guide.
More models
Here you can find more expressive or performant models that you could use to generate the text embedding.
Setup
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Version: 2.9.0-rc1 Eager mode: True Hub version: 0.12.0 GPU is available
Download the IMDB dataset
The IMDB dataset is available on TensorFlow datasets. The following code downloads the IMDB dataset to your machine (or the colab runtime):
train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"], batch_size=-1, as_supervised=True)
train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)
Explore the data
Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
print("Training entries: {}, test entries: {}".format(len(train_examples), len(test_examples)))
Training entries: 25000, test entries: 25000
Let's print the first 10 examples.
train_examples[:10]
array(.", b'I have been known to fall asleep during films, but this is usually due to a combination of things including, really tired, being warm and comfortable on the sette and having just eaten a lot. However on this occasion I fell asleep because the film was rubbish. The plot development was constant. Constantly slow and boring. Things seemed to happen, but with no explanation of what was causing them or why. I admit, I may have missed part of the film, but i watched the majority of it and everything just seemed to happen of its own accord without any real concern for anything else. I cant recommend this film at all.', b'Mann photographs the Alberta Rocky Mountains in a superb fashion, and Jimmy Stewart and Walter Brennan give enjoyable performances as they always seem to do. <br /><br />But come on Hollywood - a Mountie telling the people of Dawson City, Yukon to elect themselves a marshal (yes a marshal!) and to enforce the law themselves, then gunfighters battling it out on the streets for control of the town? <br /><br />Nothing even remotely resembling that happened on the Canadian side of the border during the Klondike gold rush. Mr. Mann and company appear to have mistaken Dawson City for Deadwood, the Canadian North for the American Wild West.<br /><br />Canadian viewers be prepared for a Reefer Madness type of enjoyable howl with this ludicrous plot, or, to shake your head in disgust.', b'This is the kind of film for a snowy Sunday afternoon when the rest of the world can go ahead with its own business as you descend into a big arm-chair and mellow for a couple of hours. Wonderful performances from Cher and Nicolas Cage (as always) gently row the plot along. There are no rapids to cross, no dangerous waters, just a warm and witty paddle through New York life at its best. A family film in every sense and one that deserves the praise it received.', b'As others have mentioned, all the women that go nude in this film are mostly absolutely gorgeous. 
The plot very ably shows the hypocrisy of the female libido. When men are around they want to be pursued, but when no "men" are around, they become the pursuers of a 14 year old boy. And the boy becomes a man really fast (we should all be so lucky at this age!). He then gets up the courage to pursue his true love.',)
Let's also print the first 10 labels.
train_labels[:10]
array([0, 0, 0, 1, 1, 1, 0, 0, 0, 0])
Build the model
The neural network is created by stacking layers—this requires three main architectural decisions:
- How to represent the text?
- How many layers to use in the model?
- How many hidden units to use for each layer?
In this example, the input data consists of sentences. The labels to predict are either 0 or 1.
One way to represent the text is to convert sentences into embedding vectors. We can use a pre-trained text embedding as the first layer, which will have two advantages:
- we don't have to worry about text preprocessing,
- we can benefit from transfer learning.
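As a toy illustration of the idea (not the actual NNLM model from TF Hub): a text embedding maps each token to a learned vector and combines the token vectors into one fixed-size vector per sentence. The vocabulary and vector values below are made up for the sketch:

```python
# Made-up 2-dimensional token vectors; real models learn these from data.
vocab = {
    "great":  [0.9, 0.1],
    "movie":  [0.2, 0.3],
    "boring": [-0.8, 0.0],
}

def embed(sentence, dim=2):
    """Average the vectors of known tokens into a fixed-size sentence embedding."""
    vectors = [vocab[w] for w in sentence.lower().split() if w in vocab]
    if not vectors:
        return [0.0] * dim
    return [sum(col) / len(vectors) for col in zip(*vectors)]

print(embed("great movie"))  # approximately [0.55, 0.2]
```

However long the sentence, the output has the same dimensionality, which is what lets the next Dense layer have a fixed input shape.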
For this example we will use a model from TensorFlow Hub called google/nnlm-en-dim50/2.
There are two other models to test for the sake of this tutorial:
- google/nnlm-en-dim50-with-normalization/2 - same as google/nnlm-en-dim50/2, but with additional text normalization to remove punctuation. This can help to get better coverage of in-vocabulary embeddings for tokens on your input text.
- google/nnlm-en-dim128-with-normalization/2 - A larger model with an embedding dimension of 128 instead of the smaller 50.
Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that the output shape of the produced embeddings is as expected:
(num_examples, embedding_dimension).
model = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(model, input_shape=[], dtype=tf.string, trainable=True)
hub_layer(train_examples[:3])
<tf.Tensor: shape=(3, 50), dtype=float32, numpy= array([[ 0.5423194 , -0.01190171, 0.06337537, 0.0686297 , -0.16776839, -0.10581177, 0.168653 , -0.04998823, -0.31148052, 0.07910344, 0.15442258, 0.01488661, 0.03930155, 0.19772716, -0.12215477, -0.04120982, -0.27041087, -0.21922147, 0.26517656, -0.80739075, 0.25833526, -0.31004202, 0.2868321 , 0.19433866, -0.29036498, 0.0386285 , -0.78444123, -0.04793238, 0.41102988, -0.36388886, -0.58034706, 0.30269453, 0.36308962, -0.15227163, -0.4439151 , 0.19462997, 0.19528405, 0.05666233, 0.2890704 , -0.28468323, -0.00531206, 0.0571938 , -0.3201319 , -0.04418665, -0.08550781, -0.55847436, -0.2333639 , -0.20782956, -0.03543065, -0.17533456], [ 0.56338924, -0.12339553, -0.10862677, 0.7753425 , -0.07667087, -0.15752274, 0.01872334, -0.08169781, -0.3521876 , 0.46373403, -0.08492758, 0.07166861, -0.00670818, 0.12686071, -0.19326551, -0.5262643 , -0.32958236, 0.14394784, 0.09043556, -0.54175544, 0.02468163, -0.15456744, 0.68333143, 0.09068333, -0.45327246, 0.23180094, -0.8615696 , 0.3448039 , 0.12838459, -0.58759046, -0.40712303, 0.23061076, 0.48426905, -0.2712814 , -0.5380918 , 0.47016335, 0.2257274 , -0.00830665, 0.28462422, -0.30498496, 0.04400366, 0.25025868, 0.14867125, 0.4071703 , -0.15422425, -0.06878027, -0.40825695, -0.31492147, 0.09283663, -0.20183429], [ 0.7456156 , 0.21256858, 0.1440033 , 0.52338624, 0.11032254, 0.00902788, -0.36678016, -0.08938274, -0.24165548, 0.33384597, -0.111946 , -0.01460045, -0.00716449, 0.19562715, 0.00685217, -0.24886714, -0.42796353, 0.1862 , -0.05241097, -0.664625 , 0.13449019, -0.22205493, 0.08633009, 0.43685383, 0.2972681 , 0.36140728, -0.71968895, 0.05291242, -0.1431612 , -0.15733941, -0.15056324, -0.05988007, -0.08178931, -0.15569413, -0.09303784, -0.18971168, 0.0762079 , -0.02541647, -0.27134502, -0.3392682 , -0.10296471, -0.27275252, -0.34078008, 0.20083308, -0.26644838, 0.00655449, -0.05141485, -0.04261916, -0.4541363 , 0.20023566]], dtype=float32)>
Let's now build the full model:
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))

model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= keras_layer (KerasLayer) (None, 50) 48190600 dense (Dense) (None, 16) 816 dense_1 (Dense) (None, 1) 17 ================================================================= Total params: 48,191,433 Trainable params: 48,191,433 Non-trainable params: 0 _________________________________________________________________
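The parameter counts in this summary can be sanity-checked by hand: a Dense layer has inputs × units weights plus one bias per unit (the hub layer's 48,190,600 parameters come from the pre-trained embedding table itself):

```python
def dense_params(n_inputs, n_units):
    """Weights (n_inputs * n_units) plus one bias per unit."""
    return n_inputs * n_units + n_units

print(dense_params(50, 16))  # 816 -> matches dense (Dense)
print(dense_params(16, 1))   # 17  -> matches dense_1 (Dense)
```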
The layers are stacked sequentially to build the classifier:
- The first layer is a TensorFlow Hub layer. This layer uses a pre-trained Saved Model to map a sentence into its embedding vector. The model that we are using (google/nnlm-en-dim50/2) splits the sentence into tokens, embeds each token and then combines the embedding. The resulting dimensions are:
(num_examples, embedding_dimension).
- This fixed-length output vector is piped through a fully-connected (
Dense) layer with 16 hidden units.
- The last layer is densely connected with a single output node. This outputs logits: the log-odds of the true class, according to the model.
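The logits mentioned in the last point can be turned into probabilities with a sigmoid; a minimal sketch in pure Python (not part of the tutorial code):

```python
import math

def sigmoid(logit):
    """Map a logit (log-odds) to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-logit))

print(sigmoid(0.0))  # 0.5: the model is undecided
print(sigmoid(2.0))  # ~0.88: a confidently positive review
```

This is also why the loss below is constructed with from_logits=True: the model itself never applies the sigmoid.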
Hidden units
The above model has two intermediate or "hidden" layers, between the input and output.

Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits, we'll use the binary crossentropy loss function.

model.compile(optimizer='adam',
              loss=tf.losses.BinaryCrossentropy(from_logits=True),
              metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')])

Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a validation set by setting apart 10,000 examples from the original training data.

x_val = train_examples[:10000]
partial_x_train = train_examples[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

Train the model
Train the model in mini-batches of 512 samples for 40 epochs, while monitoring the model's loss and accuracy on the 10,000 samples from the validation set.

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
Epoch 1/40 30/30 [==============================] - 2s 37ms/step - loss: 0.6525 - accuracy: 0.6263 - val_loss: 0.5977 - val_accuracy: 0.7214 Epoch 2/40 30/30 [==============================] - 1s 31ms/step - loss: 0.5255 - accuracy: 0.7871 - val_loss: 0.4824 - val_accuracy: 0.8019 Epoch 3/40 30/30 [==============================] - 1s 31ms/step - loss: 0.3836 - accuracy: 0.8604 - val_loss: 0.3866 - val_accuracy: 0.8415 Epoch 4/40 30/30 [==============================] - 1s 31ms/step - loss: 0.2744 - accuracy: 0.9041 - val_loss: 0.3383 - val_accuracy: 0.8562 Epoch 5/40 30/30 [==============================] - 1s 31ms/step - loss: 0.1999 - accuracy: 0.9364 - val_loss: 0.3172 - val_accuracy: 0.8653 Epoch 6/40 30/30 [==============================] - 1s 31ms/step - loss: 0.1451 - accuracy: 0.9598 - val_loss: 0.3082 - val_accuracy: 0.8713 Epoch 7/40 30/30 [==============================] - 1s 31ms/step - loss: 0.1051 - accuracy: 0.9754 - val_loss: 0.3128 - val_accuracy: 0.8715 Epoch 8/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0766 - accuracy: 0.9853 - val_loss: 0.3239 - val_accuracy: 0.8694 Epoch 9/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0555 - accuracy: 0.9925 - val_loss: 0.3280 - val_accuracy: 0.8709 Epoch 10/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0393 - accuracy: 0.9959 - val_loss: 0.3348 - val_accuracy: 0.8725 Epoch 11/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0285 - accuracy: 0.9982 - val_loss: 0.3465 - val_accuracy: 0.8728 Epoch 12/40 30/30 [==============================] - 1s 30ms/step - loss: 0.0214 - accuracy: 0.9990 - val_loss: 0.3585 - val_accuracy: 0.8716 Epoch 13/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0163 - accuracy: 0.9996 - val_loss: 0.3697 - val_accuracy: 0.8691 Epoch 14/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0127 - accuracy: 0.9997 - val_loss: 0.3808 - val_accuracy: 0.8690 Epoch 15/40 30/30 
[==============================] - 1s 31ms/step - loss: 0.0102 - accuracy: 0.9998 - val_loss: 0.3917 - val_accuracy: 0.8681 Epoch 16/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0083 - accuracy: 0.9999 - val_loss: 0.4018 - val_accuracy: 0.8679 Epoch 17/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0068 - accuracy: 0.9999 - val_loss: 0.4107 - val_accuracy: 0.8677 Epoch 18/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0056 - accuracy: 0.9999 - val_loss: 0.4195 - val_accuracy: 0.8675 Epoch 19/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0046 - accuracy: 1.0000 - val_loss: 0.4365 - val_accuracy: 0.8664 Epoch 20/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0036 - accuracy: 1.0000 - val_loss: 0.4510 - val_accuracy: 0.8663 Epoch 21/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0028 - accuracy: 1.0000 - val_loss: 0.4658 - val_accuracy: 0.8659 Epoch 22/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0022 - accuracy: 1.0000 - val_loss: 0.4789 - val_accuracy: 0.8662 Epoch 23/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 0.4910 - val_accuracy: 0.8664 Epoch 24/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 0.5021 - val_accuracy: 0.8659 Epoch 25/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 0.5128 - val_accuracy: 0.8654 Epoch 26/40 30/30 [==============================] - 1s 31ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 0.5224 - val_accuracy: 0.8651 Epoch 27/40 30/30 [==============================] - 1s 31ms/step - loss: 9.5731e-04 - accuracy: 1.0000 - val_loss: 0.5316 - val_accuracy: 0.8648 Epoch 28/40 30/30 [==============================] - 1s 31ms/step - loss: 8.4013e-04 - accuracy: 1.0000 - val_loss: 0.5401 - val_accuracy: 0.8651 Epoch 29/40 30/30 
[==============================] - 1s 30ms/step - loss: 7.4219e-04 - accuracy: 1.0000 - val_loss: 0.5482 - val_accuracy: 0.8647 Epoch 30/40 30/30 [==============================] - 1s 30ms/step - loss: 6.5905e-04 - accuracy: 1.0000 - val_loss: 0.5559 - val_accuracy: 0.8646 Epoch 31/40 30/30 [==============================] - 1s 30ms/step - loss: 5.9167e-04 - accuracy: 1.0000 - val_loss: 0.5632 - val_accuracy: 0.8646 Epoch 32/40 30/30 [==============================] - 1s 32ms/step - loss: 5.3290e-04 - accuracy: 1.0000 - val_loss: 0.5702 - val_accuracy: 0.8639 Epoch 33/40 30/30 [==============================] - 1s 31ms/step - loss: 4.8286e-04 - accuracy: 1.0000 - val_loss: 0.5768 - val_accuracy: 0.8639 Epoch 34/40 30/30 [==============================] - 1s 31ms/step - loss: 4.4023e-04 - accuracy: 1.0000 - val_loss: 0.5831 - val_accuracy: 0.8637 Epoch 35/40 30/30 [==============================] - 1s 31ms/step - loss: 4.0267e-04 - accuracy: 1.0000 - val_loss: 0.5892 - val_accuracy: 0.8636 Epoch 36/40 30/30 [==============================] - 1s 31ms/step - loss: 3.6952e-04 - accuracy: 1.0000 - val_loss: 0.5951 - val_accuracy: 0.8636 Epoch 37/40 30/30 [==============================] - 1s 31ms/step - loss: 3.4044e-04 - accuracy: 1.0000 - val_loss: 0.6008 - val_accuracy: 0.8637 Epoch 38/40 30/30 [==============================] - 1s 31ms/step - loss: 3.1465e-04 - accuracy: 1.0000 - val_loss: 0.6061 - val_accuracy: 0.8639 Epoch 39/40 30/30 [==============================] - 1s 31ms/step - loss: 2.9197e-04 - accuracy: 1.0000 - val_loss: 0.6116 - val_accuracy: 0.8638 Epoch 40/40 30/30 [==============================] - 1s 30ms/step - loss: 2.7158e-04 - accuracy: 1.0000 - val_loss: 0.6167 - val_accuracy: 0.8636
Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number which represents our error; lower values are better) and accuracy.
results = model.evaluate(test_examples, test_labels)
print(results)
782/782 [==============================] - 2s 3ms/step - loss: 0.6937 - accuracy: 0.8458 [0.693655788898468, 0.8457599878311157]
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
Create a graph of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training:

history_dict = history.history

acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)

plt.clf()  # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

Notice that training accuracy keeps improving while validation accuracy levels off after about twenty epochs. In this case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you'll see how to do this automatically with a callback.
https://tensorflow.google.cn/hub/tutorials/tf2_text_classification
Building metadata models using Framework Manager can be a simple or complex task. It all depends on the underlying database structure and how relevant it is to the reporting that is required. Regardless of the data model complexity, there are a number of best practices that should be followed when building models using Framework Manager:
1. Never allow Cognos to create joins automatically when importing data sources. Cognos rarely chooses correctly and this inevitably creates future rework. Creating joins manually gives full control and avoids cardinality and join issues.
2. Always separate your model into the following three namespaces. Separating into three layers may seem difficult initially, but this organization makes the model easier to maintain. If the underlying data structures change, you will only need to reflect these changes in a single location.
   1. Database Layer Namespace: This namespace simply contains all the query subjects brought in from your data source. No joins or renaming of query items occur in this layer. By leaving this layer untouched, future model changes will only need to be made in this layer; all other layers can remain untouched.
   2. Logical Layer Namespace: Relationships between query subjects should be created in this layer. If query subjects need to be merged, the merge should be created here. Any model filters should be applied, and query subjects and query items should be translated to business names in this layer. All query subjects in the Logical layer should reference back to the database layer.
   3. Presentation Layer Namespace: This layer contains shortcuts to the logical namespace and organizes the data for the business user.
3. Assign query usage in the database layer. Understanding the query usage property and setting it in this layer will ensure that it is always set correctly throughout the rest of the model. Each query subject that is created from the base layer will inherit these properties.
4. Never expose query subjects to the user that could result in a cross product query. As an additional precaution, use the governor setting to deny cross product queries.
5. Use the governor settings to limit the query execution time. This will prevent users from developing run-away queries that can impact database performance.
6. Maintain a history of your models using a source control system. Framework Manager has direct connections to Visual SourceSafe and CVS, so be sure to take advantage of this feature. CVS is an open source versioning system, so cost should not be an excuse!
7. When modeling your data, avoid creating situations where multiple join paths or loop joins can occur. To prevent these situations, try aliasing the query subject multiple times to force a single path through the data.
8. If possible, try to model in a star/snowflake schema. In a perfect world, this is done for you in the data model and the underlying ETL processes, but in reality we rarely get this lucky. It is possible to create a virtual star schema by merging query subjects to de-normalize data and creating new query subjects that contain only measures. The drawback of this method is that there can be significant performance impacts. Understanding the queries that are being created against the data is critical to making a good design decision here.
9. Set security at the highest level allowable, starting with the package, then the object, and if needed the data. Applying security at a high level and then drilling down to more detailed security levels will save maintenance and troubleshooting time. Keep security as simple as possible while still meeting the security requirements and you will thank yourself at the end of the day.
10. Use data level security settings in Framework Manager only when absolutely necessary. Data security is a powerful feature, but maintenance and troubleshooting with row level security become complex quickly. If you must use it, make sure to have it well documented for future developers and administrators. If you do choose to use data level security, be aware that it supersedes all other security settings in Cognos Connection.
11. Check cardinality for accuracy and test cardinality to make sure it is behaving as expected. Avoid many to many relationships in your model. If you cannot avoid them, organize data so that the user is less likely to create a report with a many to many relationship.
12. Name query subjects and query items using business terms. As simple as this seems, many organizations leave the original table and column names, creating confusion for the end user.
13. Only publish data that is in the presentation layer, and only expose query subjects that are useful for the end user.
14. Document the model. This is another simple task that is often overlooked, or more likely skipped to save time. Even if the model developer will be maintaining the model, six months from deployment the developer will not remember all the details of the model. Proper documentation expedites making future changes and fixing bugs. Document each layer, the joins associated, and any calculations or merges that were done.
By following these best practices, you will build a model with documentation that allows for better maintenance and support. You will spend more time upfront building the model, but when changes inevitably occur, it will be well worth the effort. If you have other best practices you would like to share, please send them to me at bharden@captechventures.com and I will publish them here in a follow up post.
https://ru.scribd.com/document/200450193/Cognos-Framework-Manager-Best-Practices
Python wrapper for TwinCAT ADS library
Project description
pyads - Python package
This is a Python wrapper for TwinCAT's ADS library. It provides Python functions for communicating with TwinCAT devices. pyads uses the C API provided by TcAdsDll.dll on Windows and adslib.so on Linux. The documentation for the ADS API is available on infosys.beckhoff.com.
Documentation:
Installation
From PyPi:
pip install pyads
From conda-forge:
conda install pyads
From source:
git clone --recursive
cd pyads
python setup.py install
Features
- connect to a remote TwinCAT device like a plc or a PC with TwinCAT
- create routes on Linux devices and on remote plcs
- supports TwinCAT 2 and TwinCAT 3
- read and write values by name or address
- read and write DUTs (structures) from the plc
- notification callbacks
Basic usage
import pyads

# connect to plc and open connection
plc = pyads.Connection('127.0.0.1.1.1', pyads.PORT_TC3PLC1)
plc.open()

# read int value by name
i = plc.read_by_name("GVL.int_val")

# write int value by name
plc.write_by_name("GVL.int_val", i)

# close connection
plc.close()
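The first argument to Connection above is an AMS NetId, which has six numeric parts rather than the four of an IP address. A small helper — not part of pyads, just an illustration of the format — can validate one:

```python
def is_valid_ams_netid(netid):
    """An AMS NetId is six dot-separated integers, each in the range 0-255."""
    parts = netid.split(".")
    return len(parts) == 6 and all(p.isdigit() and 0 <= int(p) <= 255 for p in parts)

print(is_valid_ams_netid("127.0.0.1.1.1"))  # True
print(is_valid_ams_netid("127.0.0.1"))      # False: a plain IP, not a NetId
```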
Contributing guidelines
Contributions are very much welcome. pyads is under active development. However it is a side-project of mine so please have some patience when creating issues or PRs. Here are some main guidelines which I ask you to follow along:
- Create PRs based on the master branch.
- Add an entry to the Changelog.
- Keep PRs small (if possible), this makes reviews easier and your PR can be merged faster.
- Address only one issue per PR. If you want to make additional fixes, e.g. to import statements, style or documentation, which are not directly related to your issue, please create an additional PR that addresses these small fixes.
https://pypi.org/project/pyads/3.3.9/
Opened 2 years ago
Last modified 2 months ago
#29138 assigned Bug
Add ModelAdmin.autocomplete_fields support for ForeignKeys that use to_field
Description
Hi
I have encountered an issue where I have specified a 'to_field' in my foreign key and the autocomplete widget tries to enter the primary key of the related model instead. This means I cannot submit the form and I get the "Select a valid choice. That choice is not one of the available choices." error.
In the AutocompleteJsonView:
def get(self, request, *args, **kwargs):
    """
    Return a JsonResponse with search results of the form:
    {
        results: [{id: "123" text: "foo"}],
        pagination: {more: true}
    }
    """
    ....
    ....
    return JsonResponse({
        'results': [
            {'id': str(obj.pk), 'text': str(obj)}
            for obj in context['object_list']
        ],
        'pagination': {'more': context['page_obj'].has_next()},
    })
Is there a way to replace the id manually when saving the form?
Thanks.
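One workaround discussed below is to make the view serialize the to_field value instead of obj.pk. The serialization step can be sketched framework-free; serialize_results and the Item stand-in class are made-up names for illustration, and the field name passed in would be whatever your ForeignKey's to_field names:

```python
def serialize_results(object_list, to_field_name="pk"):
    """Build the Select2-style 'results' payload keyed by an arbitrary field.

    Mirrors the JsonResponse shape above, but uses to_field_name
    instead of hard-coding obj.pk.
    """
    return [
        {"id": str(getattr(obj, to_field_name)), "text": str(obj)}
        for obj in object_list
    ]

# Minimal stand-in object to show the shape (not a Django model):
class Item:
    def __init__(self, pk, code, name):
        self.pk, self.code, self.name = pk, code, name

    def __str__(self):
        return self.name

print(serialize_results([Item(1, "A1", "Alpha")], to_field_name="code"))
```

As noted later in the thread, any real implementation must validate the requested field (Django's to_field_allowed check) so the endpoint cannot be used to leak arbitrary model fields.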
Change History (24)
comment:1 Changed 2 years ago by
comment:2 follow-up: 3 Changed 2 years ago by
Hi Jonathan,
very good catch. This is certainly a bug. Custom to_fields should be supported.
Do you want to fix this yourself or should I?
Best
-Joe
comment:3 Changed 2 years ago by
Replying to Johannes Hoppe:
Hi Jonathan,
very good catch. This is certainly a bug. Custom to_fields should be supported.
Do you want to fix this yourself or should I?
Best
-Joe
Hi Joe,
I can give it a shot. I haven't contributed to Django before so I would appreciate some pointers to get me looking in the right direction.
What would need to be updated?
Regards,
Jonathan
comment:4 Changed 2 years ago by
Hey Jonathan,
well that sounds like the perfect opportunity to get into contributing to Django.
Of course there is a contributing guide, but it is long. I would recommend reading at least the section for new contributors.
Anyhow, this is how I would go about this:
Write your test first. You should write a test, that exploits the bug you found and fails. You will need to add one at the end anyways, if you start with it actually fixing the issue will become a lot easier. You might want to add the test here
tests.admin_views.test_autocomplete_view.AutocompleteJsonViewTests.
Once that is out of the way there is this line in
django.contrib.admin.views.autocomplete.AutocompleteJsonView#get. It returns a
JsonResponse that includes the
obj.pk at some point. What it should return is the value of the
to_field. This is where things might become tricky. Since there is only one view per model and probably multiple other models having foreign keys to that one, you will need to get the
to_field from the relation or pass it as a GET parameter. In the latter case you should definitely validate the field to avoid leakage as Tim said earlier.
Anyhow, just get started and let me know if you hit an obstacle and need help :)
Best
-Joe
comment:5 Changed 2 years ago by
I wish to take this up.
comment:6 Changed 2 years ago by
comment:7 Changed 2 years ago by
Cool, let me know when you have patch. I am happy to review it :)
comment:8 Changed 23 months ago by
Is there any progress?
comment:9 follow-up: 10 Changed 22 months ago by
I've put together a bug fix for this.
Note I've been able to update all unit tests and create some new ones, however I haven't been able to leverage
to_field_allowed to prevent data leaks. I've tried to implement it (see here), however I can't get it to play nicely with the unit tests. When uncommented, the
id_field isn't properly being considered as to_field_allowed. I'm not familiar with this function, so could use some help troubleshooting.
comment:10 follow-up: 12 Changed 22 months ago by
Replying to Constantino Schillebeeckx:
I've put together a bug fix for this.
Note I've been able to update all unit tests and create some new ones, however I haven't been able to leverage
to_field_allowed to prevent data leaks. I've tried to implement it (see here), however I can't get it to play nicely with the unit tests. When uncommented, the
id_field isn't properly being considered as to_field_allowed. I'm not familiar with this function, so could use some help troubleshooting.
Best
-Joe
comment:11 Changed 22 months ago by
comment:12 Changed 22 months ago by
Replying to Johannes Hoppe:.
All set:
comment:13 Changed 22 months ago by
comment:14 Changed 17 months ago by
This has been inactive and I stumbled upon a solution while fixing another issue, I am assigning this to me.
comment:15 Changed 17 months ago by
comment:16 Changed 16 months ago by
comment:17 Changed 9 months ago by
comment:18 Changed 9 months ago by
Hi schwtyl,
thanks for your commitment. I must admit, I did push this issue aside a bit in favor of some other patches I have been working on.
Sadly, you did base your work on GH@10494 not on GH@11026, therefore my critique stays the same. It will only solve this issue but make it harder to solve #29138 which is equally important.
Might I also suggest, next time, commenting on the ticket first before fixing someone else's patch? It might save you some trouble.
Best
-Joe
comment:19 Changed 9 months ago by
Joe,
Constantino developed this patch for the project that I succeeded him on, so I'm hardly a random bystander. Everytime our project builds for the past year, it pulls it from this GitHub commit, and I'd love to see that eventually go back to using an official Django release again. I'm not sure how I'd know about PR 11026, since you didn't reference it here at all. This issue *is* #29138 - perhaps you mean #29010? The patch fixes this issue - can you provide feedback as to what can be done for a fix to be accepted? What specifically *needs improvement*?
Thanks,
Tyler
comment:20 Changed 9 months ago by
Hey Tyler,
thanks for your swift response. I see — I wasn't aware of your background, and I didn't mean to offend you. My comment really was only trying to save you some effort. I know first hand that the Django development process can be lengthy and demotivating.
Anyhow, I see where you are coming from, and yes, you should absolutely use a mainstream Django version, to avoid security vulnerabilities.
Might I suggest to use django-select2 until then? I developed and maintain that package, it was the blue print for Django's autocomplete field implementation, which I implemented too. It does work slightly different, but it will at least allow you to jump to a stable Django release.
Regarding the patch: as the initial developer of this feature I have an elevated interest in solving all bugs, not just this one. That aside – IMHO – the best solution actually happens to fix both problems.
I will do a review of your PR, to give you a better feeling for the work that would need to be done on it to fix only this issue. I'll let you decide if you still think it's worth the effort, given that all changes would need to be reverted with the fix for #29138.
Best
-Joe
comment:21 Changed 9 months ago by
Scratch everything I said. I had another look at #29138. It will require a lot of work and probably some mailing list discussion. Therefore, I do believe it's worth the effort to fix them separately even if it involves reverting some parts later.
I already did a first review of your patch. I'll give it another go tomorrow. I believe we can push this over the finish line.
comment:22 Changed 9 months ago by
Thanks Joe!
I appreciate the background you were able to fill in, and I wasn't aware that you were so involved in the implementation of Django Admin's autocomplete feature. I am new to Django development, though I am interested in helping take this forward if I can be of help, even if it takes a while. I don't have any emotional attachment to this current patch, other than having used it in production for this past year (yikes!) - so if you think a different direction is preferable, I won't be offended. I'll take a look at your feedback shortly.
Regardless, you make a good point - I am working to transition our code back to a stable Django release (I inherited this setup when I started earlier this year).
-Tyler
comment:23 Changed 9 months ago by
We have been using this in Django 2.1.5. As the to_field contains a code, it is a valid integer, and no error is produced. This issue should be marked as DATA LOSS and be urgently addressed.
I think the correct solution is to modify AutocompleteJsonView to return the to_field value instead of the primary key. To prevent improper data leakage, ModelAdmin.to_field_allowed() should be called.
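To make the suggested fix concrete, here is a dependency-free Python sketch of the idea (Country and autocomplete_results are illustrative stand-ins, not Django's real classes): the autocomplete endpoint should serialize the to_field value rather than the primary key whenever the ForeignKey declares a to_field.

```python
# Dependency-free sketch of the proposed fix: the autocomplete endpoint
# serializes the to_field value (here `code`) instead of the pk.
# Class and function names are illustrative, not Django's real API.
class Country:
    def __init__(self, pk, code, name):
        self.pk = pk
        self.code = code   # the column the ForeignKey's to_field points at
        self.name = name

def autocomplete_results(objects, to_field=None):
    # Returning obj.pk when a to_field is in play is the data-loss bug:
    # the form later treats the pk as a to_field value and silently
    # binds the wrong row.
    field = to_field or "pk"
    return [{"id": str(getattr(obj, field)), "text": obj.name}
            for obj in objects]

countries = [Country(1, "DE", "Germany"), Country(2, "FR", "France")]
print(autocomplete_results(countries, to_field="code"))
```

In the real fix, the admin would also have to run ModelAdmin.to_field_allowed() on the requested field before honouring it, so arbitrary columns can't be leaked through the endpoint.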
https://code.djangoproject.com/ticket/29138
3d plot with restricted range
Can someone please give me a helping hand with plot3d?
I want to plot the GammaCondor-function:
def condor(x, y):
    e = gamma(y+1)
    a = gamma(1/2*y-1/2*x+1/2)
    b = gamma(-1/2*x+1/2*y+1)
    c = gamma(1/2*x+1/2*y+1/2)
    d = gamma(1/2*x+1/2*y+1)
    alpha = cos(pi*(y-x))
    beta = cos(pi*(y+x))
    return log(e)+log(a)*((alpha-1)/2)+log(b)*((-alpha-1)/2) \
        +log(c)*((beta-1)/2)+log(d)*((-beta-1)/2)
With Maple this is easy (note the restriction for x):
plot3d(condor(x, y), x = -y..y, y = 0..8, orientation = [-145, -105], axes = BOXED, grid = [64,64]);
What this looks like can be seen here:...
Or as a dynamic pdf with an Adobe Reader (v10 or later):
var('x, y')
g = plot3d(condor(x, y), (x, -y, y), (y, 0, 8))
show(g)
gives a TypeError.
var('x, y')
g = plot3d(condor(x, y), (x, -8, 8), (y, 0, 8))
show(g)
shows an empty frame. Plotting was tried on cloud.sagemath.
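For what it's worth, the TypeError presumably comes from plot3d not accepting a range like (x, -y, y) whose endpoints depend on another variable. One workaround is to mask the function outside the triangular domain so a rectangular range can be used. Below is a plain-Python sketch of the masking idea (math.lgamma stands in for Sage's log(gamma(...)); in Sage you would pass condor_masked to plot3d over (x, -8, 8), (y, 0, 8)):

```python
import math

def condor(x, y):
    # same formula as above, with lgamma used directly for log(gamma(...))
    e = math.lgamma(y + 1)
    a = math.lgamma(0.5*y - 0.5*x + 0.5)
    b = math.lgamma(-0.5*x + 0.5*y + 1)
    c = math.lgamma(0.5*x + 0.5*y + 0.5)
    d = math.lgamma(0.5*x + 0.5*y + 1)
    alpha = math.cos(math.pi * (y - x))
    beta = math.cos(math.pi * (y + x))
    return (e + a*(alpha - 1)/2 + b*(-alpha - 1)/2
              + c*(beta - 1)/2 + d*(-beta - 1)/2)

def condor_masked(x, y):
    # NaN outside the triangular domain -y < x < y; most plotters simply
    # leave NaN samples blank, emulating Maple's x = -y..y restriction
    if not (-y < x < y):
        return float('nan')
    return condor(x, y)

print(condor_masked(0.0, 4.0))   # finite inside the domain
print(condor_masked(5.0, 2.0))   # nan outside it
```

Whether Sage's plot3d tolerates NaN return values from a Python callable is an assumption here; if it doesn't, clamping to a large negative constant instead of NaN is a cruder fallback.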
https://ask.sagemath.org/question/10579/3d-plot-with-restricted-range/
|
Python Programming, news on the Voidspace Python Projects and all things techie.
A Little Bit of Python Episode 4: A Pre-PyCon Special
A Little Bit of Python is an occasional podcast on Python related topics with myself, Brett Cannon, Jesse Noller, Steve Holden and Andrew Kuchling.
The website is in progress and apparently nearly ready, thanks to Jesse and various other people who we will thank as soon as it is done. In the meantime, episode 4 is out. PyCon 2010 is only ten days away and it is the highlight of the year for many of us in the Python community. This episode is a pre-PyCon special where we discuss some of the things that will be happening at the conference and how to get the best out of it.
General links for the podcast feeds and a webpage with an embedded flash player:
- A Little Bit of Python mp3 rss feed
- A Little Bit of Python m4a rss feed
- Podcast homepage (currently redirecting to a temporary home)
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2010-02-08 00:13:48
Categories: Python, Fun Tags: podcast, pycon, bitofpython, conference
ConfigObj 4.7.1 (and how to test warnings)
I hate doing releases. I haven't managed to automate the whole process (I should probably work on that), although setup.py sdist upload certainly helps. Anyway, the short version of the story (and the real reason I hate releases) is that there was a bug in ConfigObj 4.7.0. 4.7.1 is a brown paper bag release to fix the bug.
The bug was an error in the way I had setup the deprecation warning for the obsolete options dictionary in the ConfigObj constructor. The bug only affects you if you were still using the options dictionary to configure ConfigObj instances.
If you've never heard of ConfigObj now is an ideal time to try it out.
The reason the bug happened is that I didn't even try the deprecation warning to make sure it worked, let alone add a test for it. It turns out that adding a test for warnings is easy using the catch_warnings context manager, new in Python 2.6.
from warnings import catch_warnings

with catch_warnings(record=True) as log:
    ConfigObj(options={})

# unpack the only member of log
warning, = log
self.assertEqual(warning.category, DeprecationWarning)
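As a self-contained variation of the snippet above (legacy_api is a stand-in for a constructor with a deprecated options argument, so ConfigObj itself isn't needed to run it):

```python
import warnings

def legacy_api(options=None):
    # stand-in for ConfigObj's deprecated `options` dictionary argument
    if options is not None:
        warnings.warn("the 'options' argument is deprecated",
                      DeprecationWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as log:
    # DeprecationWarning is often hidden by default, so force it through
    warnings.simplefilter("always")
    legacy_api(options={})

warning, = log   # exactly one warning should have been recorded
assert warning.category is DeprecationWarning
print("deprecation warning captured")
```

The simplefilter("always") line matters: without it, the default warning filters may swallow the DeprecationWarning and leave the recorded log empty.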
Posted by Fuzzyman on 2010-02-07 23:52:41
Categories: Python, Projects Tags: release, configobj, configuration, ini
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
http://www.voidspace.org.uk/python/weblog/arch_d7_2010_02_06.shtml
|
Py2exe
Suppose you don’t know if your Windows users have Python installed — or you don’t know which version of Python they’ve got installed. You can still distribute a Python program to them using py2exe.
Here’s how:
- Make sure that you yourself have Python installed, and that you know which version you have installed (python -V will tell you).
- Visit to get an overview of what Py2exe does.
- Head over to SourceForge and download the version of Py2exe which matches your version of Python
- Install Py2exe
- Look though the code samples to find something which matches what you need to do
- And do it!
Using Py2exe
Suppose you want to distribute my solution to the 8 Queens puzzle.
You need to create a file called setup.py which looks like:
from distutils.core import setup
import py2exe

setup(
    version = "1.0",
    name = "N Queens puzzle solver",
    console = ["queens.py"]
)
Then run:
python setup.py py2exe
The executable and DLLs appear in the dist subdirectory of your current working directory.
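The article links to the author's actual 8 Queens solution; as a hypothetical stand-in, a minimal queens.py that py2exe could package might look like this:

```python
# Hypothetical stand-in for the queens.py referenced above; the real
# solution is linked from the article. A console script for py2exe to bundle.
def queens(n, row=0, cols=(), diag1=(), diag2=()):
    """Yield all placements of n non-attacking queens, one column per row."""
    if row == n:
        yield cols
        return
    for col in range(n):
        # prune any column or diagonal already occupied by an earlier row
        if col in cols or row - col in diag1 or row + col in diag2:
            continue
        yield from queens(n, row + 1, cols + (col,),
                          diag1 + (row - col,), diag2 + (row + col,))

if __name__ == "__main__":
    print(sum(1 for _ in queens(8)))  # 92 solutions on the standard board
```

Running python setup.py py2exe against this should leave a queens.exe in dist that prints the solution count.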
http://wordaligned.org/articles/py2exe
|
The following is a list of classes, methods, and enumerated types available in the Google Picker API. All of these elements belong to the namespace
google.picker.*. There are two types of classes and enumerated types: those which are used to build and configure the Google Picker, and those which are found in the data returned by the Google Picker once the user has selected items. See the Google Picker example for typical use.
ResourceId
ResourceId is a static class used to generate resource IDs suitable for the Google Documents List API.
View
View is the abstract base class for the various View classes, such as DocsView.
ViewGroup
ViewGroup is a visual grouping of views. The root item of the ViewGroup itself must be a View.
ViewId
ViewId is an enumerated type, used for constructing View and ViewGroup objects.
Callback Types
The following enumerated types are found in callback data returned by the Google Picker API.
Action
Action is an enumerated type representing the action taken by the user to dismiss the dialog. This value is in the Response.ACTION field in the callback data.
Document
Document is an enumerated type used to convey information about a specific selected item. Only fields which are relevant to the selected item are returned. This value is in the Response.DOCUMENTS field in the callback data.
Response
Response is an enumerated type used to convey information about the user's selected items.
ServiceId
ServiceId is an enumerated type used to describe the service the item was selected from. This value is in the Document.SERVICE_ID field of the selected Document.
Thumbnail
Thumbnail is an enumerated type used to convey information about a selected photo or video. This value can be found in the Document.THUMBNAILS field of a selected Document.
Type
Type is an enumerated type used to categorize the selected item. This value can be found in the Document.TYPE field of a selected Document.
https://developers.google.com/picker/docs/reference?authuser=1
|
Answered by:
The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.
- I just installed Visual Studio 2005 today and I'm already getting an error. This code worked fine on 2003. I am running Windows XP 64 so I definitely have the latest MDAC. Anyone know why I would get this error?
Dim strConnect As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & GetWorkingPath() & "db.mdb;Persist Security Info=False;"
conn = New OleDbConnection(strConnect)
conn.Open()
Question
Answers
There is not a 64 bit version of Jet; that is why you get that error. To force your app to use the 32 bit version, change the target CPU to x86 in the advanced compiler options.
All replies
- I'm getting the same problem, but when I do the x86 option it doesn't help, either in C# or VB. I've run CompChecker and got the following, everything is cool:
<cc>
<releases>
<release name="MDAC 2.1 SP2"/>
<release name="MDAC 2.5"/>
<release name="MDAC 2.5 SP1"/>
<release name="MDAC 2.5 SP2"/>
<release name="MDAC 2.5 SP3" />
<release name="MDAC 2.6 RTM" />
<release name="MDAC 2.6 SP1" />
<release name="MDAC 2.6 SP2" />
<release name="MDAC 2.6 SP2 Refresh" />
<release name="MDAC 2.7 RTM" />
<release name="MDAC 2.7 Refresh" />
<release name="MDAC 2.7 SP1" />
<release name="MDAC 2.7 SP1 Refresh" />
<release name="MDAC 2.8 RTM" />
<release name="MDAC 2.7 SP1 on Windows XP SP1" />
<release name="MDAC 2.8 SP1 on Windows XP SP2" />
<release name="MDAC 2.8 SP2 on Windows Server 2003 SP1" />
</releases>
</cc>
I'm out of ideas. Every search I've done for this topic has not panned out. Help please!
Thanks,
Dave Scofield
I found this information after a lengthy search:
I used regsvr32 with the five DLLs under the Jet 4.0 OLE DB provider and it worked for me.
Larry WWW
These files were not on my machine so I copied them from a W2K3 box.
Some would not register:
msjint40
msjter40
mswstr10
Error: "The module ... was loaded but the entry-point DLLRegisterServer was not found."
If I change to x86 I get :
Could not load file or assembly 'ClubsWS' or one of its dependencies. An attempt was made to load a program with an incorrect format.
Changing back to Any CPU, the service page displays but my method fails when I try to access the mdb.
You can't install Jet this way. Use the download instead:
The only way it will run under 64-bit Vista is from within a 32-bit application process. So essentially your application would have to be a 32-bit application which runs under the WOW64 subsystem. You cannot develop a 64-bit application that uses Jet.
If you want, you can try installing it through Windows Update. I'm not running any of the 64-bit operating systems so I can't say for certain whether installation will work and I don't know for certain where the legacy components are installed in the 64-bit operating systems.
Project Properties...Compile tab...Advanced Compile Options button...Target CPU dropdown.
As was previously mentioned, there is currently no 64-bit database engine for an Access database. As a result, you're limited to a 32-bit application in this instance.
- Proposed as answer by Paras Parmar Friday, January 28, 2011 4:00 PM
- Proposed as answer by Vito DeCarlo Wednesday, December 31, 2008 4:00 PM
- I write my web page in ASP.NET with MS Visual Studio 2005 (Standard). When I try to launch my page on the Development Server all is fine. But when I try to access it through IIS 7 I get the error in discussion. I am running Vista 64. How do I fix that?
Thank you!
>Project Properties...Compile tab...Advanced Compile Options button...Target CPU dropdown.
I use MSVS 2005 Standard and don't see a Compile tab in Project Properties for a Web site. Such an option is in the project options for a usual CLR application. Sorry if the question is stupid but it is my first asp.net site in MSVS.
- I want to make sure I 100% understand the situation:
If we use an Access database and try to open it using the Microsoft Jet 4.0 connector, it will work on a 64bit machine ONLY if we make the entire application work in 32-bit mode.
What other choices do we have for reading an Access database? We use an Access DB to store system settings that can't be changed by the user, and that is the easiest way we have found to do it (our access DB is now about 6 meg in size).
Since the OS's are "all" going to 64bit, does that mean Microsoft is not updating Access to a 64bit version? It doesn't make sense for me, as an application developer, to limit my program to only 32bit when I am using the .NET framework (version 2.0) and SQL Server, both of which can be (are) 64bit applications.
Help me understand this please.
Correct.
There is no other option and currently no plans for a 64-bit library. At this point I do not know what will become of Microsoft Access when Office moves to 64-bit. From my perspective it would appear that Microsoft is encouraging developers to use SQL Server technology. SQL Server Compact Edition would probably be a suitable small footprint, embedded solution for your implementation.
I'm not aware of a package for Vista. The latest version is included with Vista so unless the components are updated I wouldn't anticipate a version specifically for Vista.
You could try installing the latest version for Windows XP on 64-bit Vista. I don't have a 64-bit Vista configuration so I don't know whether the Jet OLEDB provider is installed for 32-bit compatibility.
I am using the full version. Actually when I use a Console App, the option for target platform does appear, but in the 2nd tab which is the "Build" tab . For Web apps, the only tabs that are presented in VS.Net are References, Build, Accessibility, Start Options, MSBuild Options
thanx!!!
Below is the documentation for Web Site projects:
How to: Configure Projects to Target Platforms
Try creating a Deployment Project:
How to: Configure Projects to Target Platforms
Hi,
I just had the same error and figured out that my connection string and path of the file had syntax errors. Correct one is:
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:/xxxx/xxx.mdb"
now it works.
thanks
- Proposed as answer by Air-Thanida Tuesday, February 21, 2012 4:13 AM
Hi Paul,
I have a few doubts; one of them is already a repeated question. Could you please clarify them? I have not even seen the Vista OS yet, so sorry for the basic questions.
1) Is Windows Vista OS available in 2 different versions (32 bit OS & 64 bit OS)? Or are both the same version?
2) As you said in one of the previous posts that Microsoft Jet 4.0 is available only for 32 bit Vista, is it backward compatible (does it also support the Win 2K, 2K2, 2K3, XP versions)?
3) Do the data connectivity components which are available for Office 2K7 support backward compatibility (supporting from Win 2K to the latest one)?
4) Currently I have a stabilised application which has OLEDB support for Win 2K, 2K2, XP. Now I would like my application to also support Vista (which contains Office 2007).
regards,
Mahesh
Regarding the Advanced Compiler settings, you may want to post your question to the Visual Basic IDE forum.
I am having the same issue. I have a web site application that allows a user to upload an Excel file that is processed like a database using the Jet OLEDB, and it works fine in a 32-bit environment but not in a 64-bit environment. The web application consists of 3 projects - the website and two class libraries. (Everything is written in C#) The "CPU target" is listed on the class library projects but NOT on the website project. So how do I make my website run in 32-bit mode? (Windows XP Pro 64) OR is there another way that I can process an Excel spreadsheet like a database in my web application? (It is not an option to run VSTO on the client machine to connect to the database.)
- I'm having the same problem (that is, "The 'Microsoft.Jet.OLEDB.4.0' provider is not registered...etc").
Thing is, I'm not using VB2005, I'm coding in C#, in Notepad, using an IIS server, on XP x64. Hence, I can't just change the VB options, or use a compile flag (or can I? I'm not incredibly familiar with IIS).
Is there any way to get round this?
Thanks!
- I just found this and thought I'd share it. I'm using IIS7 on a Vista x64 box. On IIS7, there's an option to run your app in 32-bit mode. In the Advanced Options under the Application Pools where your app is, there's an option to Enable 32-bit Applications.
- Proposed as answer by WizardintheWoods Tuesday, March 01, 2011 3:49 AM
- An alternate solution for some may be to use a csv file type instead of an xls file type.
I originally uploaded csv files using simple procedures with Streamreader, RegEx, and other methods to parse a csv file. These were unsatisfactory, prone to errors due to inconsistencies in csv structure and content.
For a few years I used the ODBC text file driver. Then Microsoft deprecated ODBC. I used the OLE DB data provider until my server upgrade to x64. Both the ODBC and OLE DB data providers do not work on x64, and I cannot alter IIS to run as 32 bit because it co-exists with Exchange 2007, which requires IIS 6.0 to run only in x64 mode.
So, for my ASP.NET needs, I first upload csv files to the user's temp folder. Then my procedures check for the file existence, type, size, illegal file names and unexpected columns. At that point, I use a custom class csv reader that does all the basic csv reader work that needs to be done. Neither ODBC nor Ole Db is required.
If you haven’t time to write-your-own, I recommend the cached version of the CSV reader by Sebastien Lorion, downloaded from
Hi all, I was having the same problem as you, running a WebApp that uses Microsoft Jet for importing some data from Excel on Windows Vista 64 bits. After trying everything I found a workaround:
In IIS (7), find the application pool that supports your Web Application, right click on it and select the option "Set Application Pool Defaults"... then find the option General -> Enable 32-bit Applications and set it to true.
And that's it! Microsoft Jet is working again.
- Proposed as answer by Matt Brunell Tuesday, March 03, 2009 3:52 PM
I currently have 10 projects in my application.
one of them is trying to use jet on 64bit windows.
My main project includes this sub project and many other projects.
Are you saying I have to change all my projects to target 32 bit, and can never have a 64 bit project because one of them happens to use Jet for a specific minor purpose?
Because I changed the specific project to 32 bit, now I can't load the program at all because of incompatibility between projects.
Is there a way for my main 64 bit project to load and run a 32 bit dll?
You cannot load a 32-bit component into a 64-bit process space. Unless Microsoft provides a thunking mechanism or a 64-bit version of Jet (not likely), I'm afraid we're out of luck.
The only current alternative is to create linked servers to tables in Microsoft Access from SQL Server and reference these tables through SQL Server. This implementation is similar to the concept of linked tables in Microsoft Access.
- Ken,
I am getting the same message. However, my system is Windows XP Home Edition. I am using MS Visual Studio 2005 Std Ed. Here is the code I am using:
Dim cn As New OleDbConnection("Provider=Microsoft,Jet.OLEDB.4.0;Data Source=c:\"path".mdb;")
I have re-loaded MS Visual Studio 2005. I have tried downloading MS Jet OLEDB.4.0 and get a message that says my current version is newer than the download version. I have run the Serv32 (? I don't remember the exact name right now) option to re-register the OLE DB files and it tells me registration was successful. If I use the data connection wizard in Visual Studio and test the connection, it tells me that the connection is successful. I have not figured out how to use the xsd created by the connection wizard in my code.
The peculiar thing about this is that I took my code to a friend and ran it on his computer running XP Home Ed. We downloaded VB Express. The code worked. I downloaded VB Express and when I run the code I continue to get the "not registered" error message. Do you have any advice/suggestions on how to proceed to make this work? It seems to me that there is something different in the settings on my machine that is not allowing the code to work. Which settings, I don't have a clue.
Thank you for any assistance you may be able to provide.
Thank you for any assistance you may be able to provide.
Onedoor
(quote)
The only current alternative is to create linked servers to tables in Microsoft Access from SQL Server and reference these tables through SQL Server. This implementation is similar to the concept of linked tables in Microsoft Access.
(end quote)
Paul, that's exactly what we want to do and what isn't working for us. What is the right way to create a linked server to read and write Access .MDB files from a 64 bit SQL Server 2005 via ADO?
We are interoperating with another vendor's Access application and cannot just bring the MDB files into SQL Server for storage.
Thanks
Something really quick for those who want to configure Visual Studio Web Express to execute under the x86 runtime.
Replace your configuration tag in your web.config with this:
<configuration xmlns="">
It works for me.
I don't really know the full impact; need to test a bit.
Please let me know if it works for you.
Eric.
Oops, and I forgot the most important part....
I switched Enable 32-Bit Applications to true in the Advanced Settings of my DefaultAppPool in Internet Information Services Manager.
Create a new pool if you have 64bit depending code else where.
Voila !
Eric.
The other solution is to move the Access database to SQL Express. I just did that on Windows Server 2008 64bit, SQL 2005 Express and used Visual Studio 2008 Standard. Worked fine. It would be nice if someone would write a 64bit version for Access, although the conversion of Access to SQL Express was so simple I can see why it is not a priority.
hi,
i have the same problem but no solution has worked so far. What i did is ..
i built a website project under vs2005. i worked on windows server 2003 64 bit. i have to upload an excel sheet and import data from excel to SQL using MS.jet.oledb.4.0. this works in preview, but gives an error when i run this website project on a MOSS site.
i use the connectioon string.
"Provider=Microsoft.Jet.OLEDB.4.0;"+ "Data Source= ".xls;" + "Extended Properties=Excel 12.0; HDR=Yes;IMEX=1";
OleDbConnection conn =new OleDbConnection(); conn.ConnectionString = strConn;
try
{
conn.Open();
}
catch(Exception Ex)
{
// here error Comes
}
when conn.Open() executes, it gives the error:
'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine
Please Give Me Solution it's very urgent.
Thanks a Lot,
but i have a doubt: our site is running on a 64 bit server (64 bit MOSS). if i configure IIS (6.0), which is now set to 64 bit, back to 32 bit, then there may be some problems on the MOSS site like slow performance and low optimization,
and maybe some problems regarding all the web services and web projects.
So what other alternatives are there?
i have a link; is it sufficient?
Regards..
Ankit jain
- I'm getting the same error as in the OP. Here is my code:
Imports System.Data.OleDB
Public Class Form1
Dim myConnection As New OleDbConnection
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
myConnection.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Users\Michael\Desktop\CTB Project\CTB Project\CTB Project\CTB Database.mdb"
Dim myDataAdapter As New OleDbDataAdapter("Select Venue_Name From Concert_T Where 'Venue_ID = Purchase.VenueIDValue.Text'", myConnection)
Dim myDataSet As New DataSet
myDataAdapter.Fill(myDataSet, "Concert_T")
DataGridView1.DataSource = myDataSet.Tables("Concert_T")
End Sub
End Class
I'm getting an error on the myDataAdapter.Fill(myDataSet, "Concert_T") line.
Can someone tell me what my problem is please? I'm a total novice trying to make something work.
In the i386 folder find msjetoledb40.dll. I copied this file into the Microsoft Visual Basic 2005 Express Edition folder in the Program Files for the MS Visual Studio folder. I'm not sure copying this file was necessary. What is necessary is to run the following command. regsvr32 msjetoledb40.dll. This causes this dll to be registered.
- Hi!
I have tried all the above mentioned ways to solve the problem "The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine" but none of them have worked. I have a 64 bit server; I tried changing my code to 32 bit and it failed. After that I tried the solution mentioned in "", and even that one failed.
Does any one have a concrete solution to the problem.
- Hi,
I had the same problem and I found this
and it fixed the problem
hope it will help you
Claude
- Proposed as answer by Mike Vickers Friday, April 30, 2010 10:52 PM
- Ankit, if I understand you correctly, do you have a 64 bit application on a 64 bit OS? I am planning to migrate to a 64 bit OS server and all the apps are also 64 bit. Some portion of the code tries to import an excel file and I get the familiar error mentioned in the chain. Is there any solution? Regards, Umesh
- Proposed as answer by Mike Vickers Friday, April 30, 2010 10:52 PM
- Hi Ankit,
Could you post the code how to take the object of excel and do it?
My requirement is to read an excel file in a specific path and process it. My code as follows:
private
{
OleDbConnection ExcelConnection = new OleDbConnection(@"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + pathName + ";Extended Properties=Text;");
OleDbCommand ExcelCommand = new OleDbCommand(@"SELECT * FROM " + fileName, ExcelConnection);
OleDbDataAdapter ExcelAdapter = new OleDbDataAdapter(ExcelCommand);
ExcelConnection.Open();
DataSet ExcelDataSet = new DataSet();
ExcelAdapter.Fill(ExcelDataSet);
ExcelConnection.Close();
return ExcelDataSet;
}
It is giving out "The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine." error. How can I solve it?
Thanks,
Sharmin
- Edited by Sharmin Jose Monday, June 08, 2009 4:31 PM Formatting
- Yes, you can use this code to read an excel file from your C# code.
there is a button on the dot net page ... and this is its code behind ...
protected void btnAttachFile1_Click(object sender, EventArgs e)
{
try
{
DataTable m_TaxData = new DataTable("TaxData");
DataColumn dcSource = new DataColumn("DataSource");
DataColumn dcDest = new DataColumn("Destination");
DataColumn dcLen = new DataColumn("Length");
DataColumn dcID = new DataColumn("ID");
m_TaxData.Columns.Add(dcID);
m_TaxData.Columns.Add(dcSource);
m_TaxData.Columns.Add(dcLen);
m_TaxData.Columns.Add(dcDest);
ViewState["TaxData"] = m_TaxData;
ddlSourceFields.Items.Clear();
btnRemoveSelected.Visible = false;
if (FileInput1.HasFile)
{
lblMessage.Visible = false;
string extn = FileInput1.FileName;
if (ddlDataSource1.Text == "XLS")
{
ddlSheetName.Items.Clear();
FileInfo fiinfo = GetFileOnServerInfo();
string path = fiinfo.FullName;
//Session["FilePath"] = fiinfo.FullName;
ViewState["FileName"] = fiinfo.FullName;
string strConn = "Provider=Microsoft.Jet.OLEDB.4.0;" +
"Data Source=" + path + " ;" +
"Extended Properties=Excel 8.0;";
OleDbConnection conn = new OleDbConnection(strConn);
conn.Open();
DataTable dt1 = conn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, new object[] { null, null, null, "Table" });
int count = dt1.Rows.Count;
for (int count1 = 0; count1 <= dt1.Rows.Count - 1; ++count1)
{
//ddlSheetName.Items.Add(dt1.Rows[count1]["TABLE_NAME"].ToString ());
ddlSheetName.Items.Add(dt1.Rows[1]["TABLE_NAME"].ToString());
}
string tableName = dt1.Rows[1]["TABLE_NAME"].ToString();
lblSheetNames.Visible = true;
// ddlSheetName.Visible = true;
btnGetFields.Visible = true;
conn.Close();
//new codes
string str_con = "SELECT * FROM " + '[' + tableName + ']' + "";
conn.Open();
OleDbDataAdapter myCommand = new OleDbDataAdapter(str_con, conn);
}
if (FileInput1.HasFile)
txtFileName.Text = FileInput1.FileName;
ExecuteConnection con = new ExecuteConnection();
}
else
{
WebMsgBox.Show("No file is selected");
return;
}
dt = null;
ViewState["Table"] = null;
grdLoaderRules.DataSource = dt;
grdLoaderRules.DataBind();
}
catch (Exception exx)
{
}
}
public FileInfo GetFileOnServerInfo()
{
try
{
string fp = Server.MapPath("");
fp = fp + "\\\\LoaderMap";
if (Directory.Exists(fp) == false)
{
DirectoryInfo dc;
dc = Directory.CreateDirectory(fp);
dc = null;
}
string Time = System.DateTime.Now.Ticks.ToString();
fp = fp + "\\\\" + Time + (FileInput1.FileName) + "";
if (System.IO.File.Exists(fp) == true)
{
File.Delete(fp);
}
FileInput1.SaveAs(fp);
FileInfo FICSV = new FileInfo(fp);
if (FICSV.Extension == ".xml" | FICSV.Extension == ".xls" | FICSV.Extension == ".csv" | FICSV.Extension == ".txt")
{
}
//Do Nothing
else
{
FICSV = null;
}
return FICSV;
}
catch (Exception ex)
{
throw ex;
}
}
Regards
Ankit Jain
- Proposed as answer by Ankit jain 1101 Tuesday, June 30, 2009 12:23 PM
(quote)
Yes You can use this code to read excel file from your C# Code..
there is a button in dot net page ...and it's a code behind...
protected void btnAttachFile1_Click(object sender, EventArgs e)
{
... }
(end quote)
Didn't work for me either. I still get the "The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine" error.
Setting IIS 7 to "Enable 32 bit" doesn't work for me either, as I have other assemblies that HAVE to run in 64bit, and if you set your IIS to 32bit, why have a 64bit server in the first place anyway?
Forcing the assembly to compile to x86 doesn't work either, as I have other assemblies using this assembly, which doesn't work as the 32bit assembly isn't recognized...
What do I do here... can you somehow specify a separate AppPool for a specific location in the website? Then I could have this particular part of my site running in 32bit, as it will run in another AppPool... I guess this isn't possible, but I don't have any more ideas.
Hope someone can help!
- laumania said "Didn't work for me either. I still get the "The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine" error."
Right, I am in the same boat and I think we need to get the jet ole drivers on there first. I am running vista sp2, 64bit and my machine does not have any jet drivers that I could find. I deployed a vs 2005, 2.0 framework desktop app to this machine and cant get past the provider error. I have tried all of the tricks listed in this thread.
- For those people with a Visual Studio Express Edition who want to compile in x86 (since VS Express doesn't expose the option), do the following:
- In Windows Explorer, right-click your project file (*.csproj, *.vbproj, etc..) and Open With > Notepad (the project file is an XML document)
- Find the section <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' "> and <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
- Find (or add) the element <PlatformTarget>x86</PlatformTarget>
- Save and close Notepad.
- Reload your project and compile.
This worked on 64-bit Vista using C# Express 2008 for me. Good luck.
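The element those steps add ends up in the project file roughly like this (the PropertyGroup is the one quoted in the steps above; the placement shown here is illustrative, and the Release group gets the same child element):

```xml
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <!-- Forces a 32-bit process so the 32-bit Jet provider can load -->
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>
```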
- Vista 64-bits SP1, VS08 SP1, .NET 3.5 SP1, C#;
I was getting the same error, but I changed Project Explorer -> Properties -> Build -> Platform Target to x86, and both the OLEDB connection using Jet 4.0 and the ODBC connection work without any problems. I'm testing using both types of connections against the Nwind.mdb examples. Neither of my connections works to access a Nwind.accdb.
- I know this is an old post, and the JET is deprecated - but I can't find a better solution for reading Acces mdb files, and the problem on this thread only concerns a few W7 x64 users.
Switching to force x86 compilation has solved the problem for most Windows 7 x64 users of my .NET desktop application, but not all. The remaining problems are sometimes on fresh W7 installs, sometimes on upgrades, and none of the downloads from or the info on has been of any help to anyone. No mention is made of Windows 7 anywhere, and the downloads will not install.
Windows Update doesn't help either, and none of the many solutions I've seen suggested have solved this either - such as installing Access, using persistent=true in the connection string, reinstalling .NET 1.1, 2.0, installing MDAC <= 2.5, clean install of MDAC plus the JET 4 SP3 version 8...
The only solution I have not yet tried is LarryWWW's suggestion about reregistering the 5 dll's; hopefully I'll soon have someone with this issue who knows how to browse from a cmd prompt, as I don't yet have access to a Windows 7 x64 machine - and as I said, now my app is compiled for x86 the majority of W7 x64 users don't have the problem...
The machines in question DO have msjet40.dll version 4.00.9756.0 installed in the WOW64 folder (a version not even mentioned in the above MS KB articles), however the error persists.
Do IIS settings affect the way an ordinary desktop .NET app runs?
Is this likely to be a rights problem, or an MDAC or Jet version problem?
Or will the re-registering trick from work for sure?
Thank you for any help,
Neil
- Paul, I'm sorry but throughout this thread you keep saying there is no 64-bit Jet OLEDB provider and that is simply not true. Jet is included in Enterprise Server, which is a 64-bit-only system, and it runs in native 64-bit mode.
For some reason, despite a lot of hollering, the driver included in Enterprise Server is not and will not be included in the 64-bit PC operating systems. It is odd that it was intentionally excluded since backwards compatibility seems to be a singular Microsoft obsession...
Technically you're right; but absolutely? Not quite.
Cheers!
Tinker
- This really helped me. It definitely got my application working.
My question is: are there any performance issues with running the application in WOW64 mode? Will there be any big performance losses on a web application, compared to running in 64-bit mode?
regards.
- Thank you ... you saved my project. I have a delivery deadline in two days, and I had left the reporting for the end thinking it would be easy to connect to Excel--and then I encountered this silly error with the misleading error message. Thank you again. (PS: As a more material gesture of saying thank you, I commit to trolling the forum regularly, and responding to others' pleas for help.)
- The idea of using SQL Server as the solution does not work for us, as over the last 10 years we have installed numerous editions and never had a stable installation. OK, maybe one time on NT4 back in the 90's. Access just works and does not require an on-site MVP, although it may need a compact and repair once in a while. So if Access is not available on 64 bit, we will seek another, workable solution, probably from a 3rd party.
We did get our application (the one using Access) working by compiling x86 all the way down the stack of dependent libraries. It was a headache, but it works, though now we are restricted to 3GB of RAM, so we fixed one problem and created another, unworkable one. After we pass the current deadline, we'll go find a workable 64 bit database and go back to "Any CPU".
FYI: We are on Windows 7, 64 bit.
Thanks.
- There is already a 64-bit version of jet/ace driver by MS available (with office 2010 beta):
- Proposed as answer by TechVsLife2 Thursday, February 25, 2010 5:02 AM
- Well, I just downloaded it and instead of "Jet 4.0 not registered on local machine" I get "Microsoft.ACE.OLEDB.14.0 not registered on local machine."
No errors during install. I'm using this as my connection string
AccessConnection = New OleDbConnection("Provider=Microsoft.ACE.OLEDB.14.0;Data Source=C:\Users\Peter\Documents\Visual Studio 2008\Projects\nodesnamespace\nodesnamespace\nodes.mdb")
I've tried to set my app as x86 but that didn't help either.
Any one know what the file name(s) is so I can try to manually register them? How do you register a 64 bit dll?
Any help would be appreciated.
Old Programmer
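On the question of registering DLLs on 64-bit Windows - a hedged sketch, assuming default paths; counter-intuitively, SysWOW64 holds the 32-bit system binaries and System32 the 64-bit ones:

```bat
:: 32-bit DLLs: use the 32-bit regsvr32 from SysWOW64
C:\Windows\SysWOW64\regsvr32.exe C:\Windows\SysWOW64\msjet40.dll

:: 64-bit DLLs: use the regsvr32 in System32
C:\Windows\System32\regsvr32.exe somelibrary64.dll
```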
- Use Provider=Microsoft.ACE.OLEDB.12.0 instead. This is a bug in beta version. Read details in my blog.
Paul Shkurikhin blog.sharepointalist.com
- Proposed as answer by TechVsLife2 Thursday, February 25, 2010 5:01 AM
Hi,
to avoid this kind of compatibility issue and the subtle problems that take days to figure out, I recommend you take a look at this Excel .NET library.
Written in pure managed code, it doesn't use Excel Automation and has its own parsing engine, which makes it very fast.
The object model is very intuitive and easy to use. Methods like DataTable to Excel or Excel to DataTable are very helpful for exporting a DataSet to Excel.
Hi.
I've got a similar problem.
My PC: Vista 64-bit SP1, VS08, .NET 3.5, C#.
The project was made on a 32-bit XP machine and I need to put it on another PC with Vista, but it doesn't work.
The code I use: com = new SqlCommand("INSERT INTO dbo.Profesori SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0','Excel 8.0;Database=" + textBox_Filepath.Text + ";HDR=YES;', 'SELECT * from [Sheet1$]')", con);
I tried some solutions from this post:
- I thought it was the code, even though it works on the XP PC, and changed it, but that didn't fix it
- changed the platform, but it didn't work on x86 or on x64
- the IIS (got 6.0), still nothing
- copied msjet40.dll from system to System32 - nothing
If anyone can help me I would appreciate it a lot. Thanks.
- Since this post is very long, you need to be more specific about "similar problem". Do you receive any error? Did you install the proper Jet OLEDB provider? You cannot just transfer the msjet40.dll file; it would not be enough. Also, with the Jet provider you must compile your .NET application in 32-bit mode, otherwise it will start in 64-bit mode on Vista and Jet will fail.
Val Mazur (MVP)
- sorry,
the error is: The OLE DB provider "Microsoft.Jet.OLEDB.4.0;" has not been registered.
About the install: since I found msjet40.dll I thought it was installed already, plus most sites say that Vista has it installed.
I tried setting the debug on x86, but it didn't let me set a platform.
Sounds like Jet is not installed. You can find installation for it here
But keep in mind that Jet does not work if application runs in 64-bit mode. If you need to run it in 64-bit mode, you need to use ACE OLEDB provider which you could find here
Another option is to use ExcelReader component from my website. It works in both 32 and 64 bit modes and native to .NET
Val Mazur (MVP)
- Proposed as answer by Chandra Prakash Bitra Wednesday, August 04, 2010 7:57 AM
Ray,
Thanks so much, as I am running IIS7 on a Windows 2008 server. I am using Visual Studio 2008 to develop the website. I have several websites running on this server. The data-driven sites are using SQL Express 2005 in 64-bit mode. I changed the website using the Microsoft Jet OLEDB 4.0 to 32-bit mode and the site started working fine.
Very Helpful.
Dick
Hi,
I have an application with ASP pages and I needed to migrate it to WS2K3 x64. In this application I have a connection to an mdb file.
<%
Dim objConn, DSNTest
Set objConn = Server.CreateObject("ADODB.Connection")
Dim dbpath
dbpath = Server.MapPath("..\database\let.mdb")
DSNTest = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & dbpath & ";User ID=;Password=;"
objConn.open DSNtest
%>
I had the message "Microsoft.Jet.OLEDB.4.0 provider is not registered on the local machine". I installed Microsoft Access Database Engine 2010 Redistributable
And changed the code above into:
<%
Dim objConn, DSNTest
Set objConn = Server.CreateObject("ADODB.Connection")
Dim dbpath
dbpath = Server.MapPath("..\database\let.mdb")
DSNTest = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & dbpath & ";User ID=;Password=;"
objConn.open DSNtest
%>
But now I've another error message: "Microsoft Access Database Engine error '80004005'
- it tells you what to do :) enjoy :)

Method 1: Force the application to be compiled as a 32-bit application. To do this, follow the steps in the appropriate section.

Steps for Microsoft Visual C# projects:
1. In Solution Explorer, right-click the application, and then click Properties.
2. Click the Build tab.
3. In the Platform target list, click x86.
4. On the File menu, click Save Selected Items.

Steps for Microsoft Visual Basic projects:
1. In Solution Explorer, right-click the application, and then click Properties.
2. Click the Compile tab.
3. On the Compile tab, click Advanced Compile Options.
4. In the Advanced Compiler Settings dialog box, click x86 in the Target CPU list, and then click OK.
5. On the File menu, click Save Selected Items.

Method 2: Use Microsoft SQL Server Express Edition instead of Access. The benefits of using SQL Server Express Edition are as follows:
* You can use the 64-bit version of SQL Native Client to connect to SQL Server Express Edition. SQL Native Client contains the SQL Server OLE DB provider and the SQL Server ODBC driver.
* SQL Server Express Edition is free.
* Only a 32-bit version of SQL Server Express Edition is available. However, you can still use the 64-bit version of SQL Native Client in the application to connect to SQL Server Express Edition.

To obtain SQL Server 2005 Express Edition, visit the Microsoft Developer Network (MSDN) Web site.
Project Properties...Compile tab...Advanced Compile Options button ...Target CPU dropdown.
It works. No wonder. I'm privileged to have this bit of wisdom. It completely resolves my problem.
Thank you Eric11 for the question and Paul P Clement IV for the answer.
Paras.
If you don't have an application that you are building, and are using a 64-bit OS, then change your IIS server settings to handle 32-bit apps. Instructions below:
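For reference, this can be done either in IIS Manager (Application Pools > your pool > Advanced Settings > set "Enable 32-Bit Applications" to True) or from an elevated command prompt - the pool name below is illustrative:

```bat
:: Allow one application pool to run 32-bit code on 64-bit Windows
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true
```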
Adam Bruss
Connecting to MS Access on Windows 64-Bit
Folks, like many of you I struggled with the deprecation of JET 4.0 in Windows Vista/7 on 64-bit machines. A process that had run for years, once ported to a faster machine, simply stopped working with the error: 'Microsoft.Jet.OLEDB.4.0' provider is not registered. I was annoyed at the lack of notice from MS and the lack of clear instructions. After tinkering I found the solution, and I'll post it first directly, with details after, for the benefit of those just as frustrated as I was. The situation I describe is for VB, but may work in other code/environments.
1. Download and run AccessDatabaseEngine_x64.exe
()
2. Change the connection string in your code to:
Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ= <-db name and path here
There is no need to upgrade/replace JET or emulate 32-bit connections. You don’t have to buy a new version of MS Office/Access, Access does not even need to be installed on the machine, you just need the driver. No need to convert to SQL Express. My legacy code is now running flawlessly on Windows 7 quad 64-bit machine without any office apps installed.
When I first encountered the issue I tried to replace/upgrade JET, but it was not available for 64-bit and there were no plans to create it. Apparently, there is now a version released for 64-bit Windows, but you don't need it. The MS Access Driver exists on older platforms as well and can be used instead of JET on 32-bit machines too.

However, there is one important detail concerning "*.accdb": you must include this in the string or it won't work. Many examples posted on the web look like this: Driver={Microsoft Access Driver (*.mdb)} - but that will produce an error in some cases indicating the DB and driver were not supplied in the string. *.accdb needs to be included as well.
Replace: Provider=Microsoft.Jet.OLEDB.4.0;Data Source=
With: Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=
Full pseudo code:
dbLocation = "C:\dbstore\myAccess.mdb"
Set objADO = CreateObject("ADODB.Connection")
objADO.Open "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=" & dbLocation
All other SQL calls and objects are unchanged.
Have not tried yet in C++, or with Excel, or as a DSN location, but test it yourself.
Paul - thank you . This "Project Properties...Compile tab...Advanced Compile Options button...Target CPU dropdown" Works perfect :) You saved me some time, thank you
- Thanks Sir its really helpful for me
- Edited by Suryaa Chandra Friday, March 02, 2012 10:14 AM
This worked for me...change IIS 7 app pools to allow 32-Bit applications.
- Proposed as answer by TinusTrotyl Thursday, February 07, 2013 10:46 PM
- Edited by TinusTrotyl Thursday, February 07, 2013 10:49 PM
There is not a 64-bit version of Jet; that is why you get that error. To force your app to use the 32-bit version, change the target CPU to x86 in the advanced compiler options.
Thanks a lot Ken!
In my case, my default project settings were pointing to x86 already - I just changed it to Any CPU and it worked :)
Computer is a box with magic called software.
There is not a 64-bit version of Jet; that is why you get that error. To force your app to use the 32-bit version, change the target CPU to x86 in the advanced compiler options.
Murf.
You will have to contact the vendor (Honeywell) of HUSS-M. Apparently it was designed to run 32-bit only since it is using Jet OLEDB, but was not compiled using the x86 option (force 32-bit only). It is trying to run 64-bit and there is no 64-bit version of Jet OLEDB. Looking at the documentation, here are the software requirements:
• .Net Framework 2.0
• Windows XP Professional (recommended)
• Data access component such as Microsoft Data Access Components (MDAC) 2.8
The software would appear to be outdated.
Paul ~~~~ Microsoft MVP (Visual Basic)
Hello Paul,
Thanks for the update.
Does that mean there is no other solution for this???
I have a 64-bit Windows operating system with 8 GB RAM. If I downgrade to a 32-bit operating system, the system will utilize only 4 GB of RAM, which I don't want to do.
Can you please let me know of any other solution for this.
Unfortunately, I cannot fix this issue. The application needs to be recompiled so that it runs 32-bit on a 64-bit system. Since it was originally designed for Windows XP I suspect that they never considered that it would run on a 64-bit OS and did not select the correct compile option.
Did you check to see if there was a newer version of the software that fixes this issue?
Paul ~~~~ Microsoft MVP (Visual Basic)
Here is something you can try to change the app so that it will run 32-bit on 64-bit OS (without recompiling):
Paul ~~~~ Microsoft MVP (Visual Basic)
[2.x][discontinued] Ext.ux.Andrie.Select (ComboBox with multiSelect)
March 26, 2008
This extension has been discontinued! A new extension with the same capabilities, but with cleaner and smarter code, will be uploaded in the coming days.
---------------
* This extension is merely the new-comer after Ext.ux.form.Select for Ext 1.x
Same features are available. To summarize: a ComboBox with multiple selection support.
Nothing much changed on the surface, other than switching the config property singleSelect to its counter-property: multiSelect. The reason behind the change is to make it more logical when using Ext.DataView.
Since 0.3.6 - it also features history capabilities (the former HistoryComboBox)
Live DEMO is available here. Testcase is included in the attached ZIP file.
As always - looking forward to reactions on this!
TO DO (not in the very near future)
------
- add key search (Ext 1.x - SelectBox)
- add grouping capability (Ext 1.x - GroupComboBox)
Post Scriptum
- I sincerely apologize, but you won't be seeing "Mine is better because..." regarding this post. People can choose and make up their own mind. This is one reason why I switched to Ext.ux.Andrie namespace. I want to govern over Ext.ux.Select (sounds like community-work, consensus toward official release) no more than I want somebody else to use Ext.ux.Andrie namespace (personal work).
- I apologize for a second time because there was a "nice" delay between when I promised to support Ext 2.x and the actual release time. To be honest, this switch has only taken one day - today - so it could have been released a long time ago.
- All in all, I'd like to thank the Ext2 team - it was far easier to implement this on the new framework than it was on Ext 1.
[GMT 14:33 Nov. 5] - Update to v0.3.4 (fixed clearValue, improved reset, new clear button/trigger)
[GMT 23:14 Nov. 5] - Update to v0.3.5 (improved clear trigger and transform capabilities)
[GMT 10:04 Nov. 6] - Update to v0.3.6 (added history capabilities)
[GMT 13:40 Nov. 7] - Update to v0.3.7 (removed a faulty JS hack - It's important to go ahead with this update!!!)
[GMT 17:02 Nov. 12] - Update to v0.3.8 (improved setValue function)
[GMT 10:35 Nov. 17] - Update to v0.4 (improvements and fixes + cleaner code; full changelog on the demo page)
[GMT 14:19 Nov 20] - Update to v0.4.1 - LIVE DEMO IS NOT YET UPDATED TO USE THE LATEST VERSION! (Having problems with accessing the webhost)
IE6. Error in line 51 "Console is not defined".
Thanks in advance.
Nice job, Andrei!
This extension is very useful for the application I'm working on it right now.
P.S.: good work...
@galdaka - Pfff.. Ok, I will switch to another way of showing the values of the fields. Right now it is using console.log which supposedly it was available on IE in the debug version of ExtJS. It will get fixed today
@lucian - thanks
The component has now been updated and has some fixes, but most importantly some enhancements (e.g. deleting the value by keyboard - the feature was available in 0.2 but wasn't included in 0.3).
The testcase uses form textfields to show components' values.
The testcase also now features a comparison between Select and the formal ComboBox.
[PS: I'm not able to reach my hosting server for the moment, so the Live Demo is not yet updated]
PS: Live Demo now Online
I like the extension but what's up with the name lol :p
@TommyMaintz - what's wrong with the name? If you are talking about Andrie - it's not a misspelling of Andrei.. and if you are talking about why it is not Ext.ux.Select, you'll find the explanation in the starting post of this thread.
Ah yes, you are right. I didn't read the small characters. Lucky me it's not a contract.
And indeed I thought it was based on your name. Andrie is a name sometimes used in the Netherlands, so I thought it would maybe be your real name.
Again, really nice work. I think it's worth getting the actual Ext.ux.Select namespace!
Great widget, but the items aren't deselected if you remove items from the list even though the field's text updates. I added a deselect method that also removes it from the dataview because I needed to be able to clear any selected values if the first item was clicked.
Code:
deselect: function(index) {
    this.removeValue(this.store.getAt(index).data[this.valueField || this.displayField]);
    this.view.deselect(index, this.multiSelect);
},
@shade - you mean you would like to have this situation: you remove items from the store and you would like them to be removed automatically from the combobox/select? Is this what you are talking about?
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 3.13, “How to add if expressions (guards) to match/case expressions.”
Problem
You want to add qualifying logic to a case statement in a Scala match expression, such as allowing a range of numbers, or matching a pattern, but only if that pattern matches some additional criteria.
Solution
Add an if guard to your case statement. Use it to match a range of numbers:

i match {
  case a if 0 to 9 contains a => println("0-9 range: " + a)
  case b if 10 to 19 contains b => println("10-19 range: " + b)
  case c if 20 to 29 contains c => println("20-29 range: " + c)
  case _ => println("Hmmm...")
}
Use it to match different values of an object:
num match {
  case x if x == 1 => println("one, a lonely number")
  case x if (x == 2 || x == 3) => println(x)
  case _ => println("some other value")
}
You can reference class fields in your if guards. Imagine here that x is an instance of a Stock class that has symbol and price fields:

stock match {
  case x if (x.symbol == "XYZ" && x.price < 20) => buy(x)
  case x if (x.symbol == "XYZ" && x.price > 50) => sell(x)
  case _ => // do nothing
}
You can also extract fields from case classes and use those in your guards:
def speak(p: Person) = p match {
  case Person(name) if name == "Fred" => println("Yubba dubba doo")
  case Person(name) if name == "Bam Bam" => println("Bam bam!")
  case _ => println("Watch the Flintstones!")
}
Discussion
You can use this syntax whenever you want to add simple matches to your case statements on the left side of the expression.

Note that all of these examples could be written by putting the if tests on the right side of the expressions, like this:

case Person(name) =>
  if (name == "Fred") println("Yubba dubba doo")
  else if (name == "Bam Bam") println("Bam bam!")
However, for many situations, your code will be simpler and easier to read by joining the if guard directly with the case statement.
The Scala Cookbook
This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly:
You can find the Scala Cookbook at these locations:
|
https://alvinalexander.com/scala/how-to-use-if-then-expressions-guards-in-case-statements-scala
|
CC-MAIN-2017-39
|
refinedweb
| 379
| 67.49
|
BrandonSnider - Posted July 15, 2012 (edited)

I've worked with other languages in the past, including BASIC and a lot of scripting languages, and I have worked a decent bit on C++ too. I haven't worked on C++ in quite a while now, and last night I was trying to refresh myself on the basics... so--and this is a little embarrassing--I wrote this small piece of code for a console program to send my girlfriend:

#include "stdafx.h"
#include <iostream>
#include <string>

using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    //Vars
    string username = "";

    //Execute
    cout << "Hello \n";
    cin.get();
    cout << "What is your name? ";
    getline (cin, username);
    cout << "Your name is: " << username;
    cin.get();
    if (username == "Tiffany" || "tiffany" || "Tiffany McClure" || "tiffany McClure" || "Tiffany Mcclure" || "tiffany mcclue")
    {
        cout << "Your name is Tiffany... \n The Creator has a message for you: \n I love you Cupcake";
        cin.get();
    }
    else
    {
        cout << "Your name is not Tiffany.";
        cin.get();
    }
    return 0;
}

The problem is... it doesn't appear that the "if (username == "Tiffany" || "tiffany" || "Tiffany McClure" || "tiffany McClure" || "Tiffany Mcclure" || "tiffany mcclue")" statement evaluates correctly, as the program always displays the cout message in the "if" block, even if the if statement should be false.

I know these are beginner C++ concepts that have nothing to do with game development, but... this is like the only forum acct. I have for anything like this, and I hate to create another just to ask this somewhat stupid question. I appreciate any help with this; I'm trying to pick up C++ again so that maybe I can do something useful with it.

EDIT: Oops. I haven't been on this site in a while. Forgot there was a "For Beginners" section. This probably belongs there. Sorry about that.

Edited July 15, 2012 by bls61793
Learn how to build an eCommerce site that uses Vue for dynamically handling products and utilizes Vuex to correctly manage the state of your shopping cart.
Some people view the use of Vuex, a state management library, as quite a big step up from using Vue on its own. The concept of state management can sound a bit scary, and, to be fair, some state management libraries can be quite difficult to fully grasp (I’m looking at you, Flux and Redux!).
Vuex, on the other hand, makes the process a whole lot easier to manage and should really be a tool that is utilized whenever required.
If you are reading this article, it is likely that you already know how to emit events from child components and know how to update state in a regular Vue app. So if you were tasked with building a shopping cart and wanted to be able to add items to it, you would know how to do so.
If not, it might be worth reading over this article that covers how to emit in Vue. Give that a read, then feel free to come back here once you feel comfortable with emitting events, as it is a super important concept to understand!
Today we will be creating a mini eCommerce site/app with Vue and Vuex. We will be using Vue-cli to quickly scaffold our app. For those unaware of what Vue-cli is, check out the link to the official docs here. We’ve opted to use the manual set-up option within Vue-cli, which allows us to pick Vuex as an optional add-on. This means that Vuex will automatically be added to our app by default and it will also create a store.js file for us. This file will contain our app’s state data.
Note: Adding Vuex in this way is not a requirement, and you can otherwise choose to add Vuex via npm i vuex.
Let’s show you what our default store.js file looks like:
import Vue from 'vue'
import Vuex from 'vuex'
Vue.use(Vuex)
export default new Vuex.Store({
  state: {
  },
  mutations: {
  },
  actions: {
  }
})
You’ll notice that just after the imports, we have Vue.use(Vuex).
This is super important, as it basically enables the ability to then give all of our child components access to our Vuex store through the use of this.$store. We complete this process by including our store inside of our Vue object, which we will see next.
So we also have a main.js file, which handles the rendering of Vue into our app. The file looks like this to begin with:
import Vue from 'vue'
import App from './App.vue'
import store from './store'

Vue.config.productionTip = false

new Vue({
  store,
  render: h => h(App)
}).$mount('#app')
As you can see, we import our Vuex store on line 3 and then add it inside of our new Vue object (see line 8) that gets rendered and mounted to the page. This completes the process of ‘injecting’ our store into every component.
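To make the 'injection' idea concrete, here is a toy model in plain JavaScript - an illustration of the concept only, not Vuex's actual implementation (createComponent is a made-up helper):

```javascript
// Toy model: the root component is given `store` in its options, and
// every child created under it inherits a $store reference from its parent.
function createComponent(options, parent) {
  const component = { ...options };
  // mimics the mixin Vuex installs via Vue.use(Vuex)
  component.$store = options.store || (parent && parent.$store);
  return component;
}

const store = { state: { cart: [], items: [] } };
const root = createComponent({ store });  // like new Vue({ store, ... })
const child = createComponent({}, root);  // any nested component

console.log(child.$store === store); // true
```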
We can go ahead and delete any bits of code that we don’t need, such as the HelloWorld.vue file, along with the Vue logo.
We then go about creating all of the components we are going to need. In essence, we will require an Item component, which will contain details of the item, along with a size-picker and an ‘add to cart’ button. These could have been made more modular by creating separate sub-components, but I have opted against this for brevity.
Once we’ve built all of our initial components, we have an app that looks like this:
All of our content is in place, and our items have their individual buttons - but nothing actually happens if any of the buttons are clicked. Let’s start building those parts with some super awesome Vuex state management!
So our shopping cart is actually already returning information from our store which is great, as it means that the shopping cart is able to access data from our state. This isn’t something that is set up by default, though. So how is this working? Well let’s take a look at what we have set up so far.
App.vue
<template>
  <div id="app">
    <div class="header">
      <h1>The Boot Store</h1>
      <shopping-cart :cart="shoppingCart"></shopping-cart>
    </div>
    <section class="items-container">
      <item v-for="product in products" :key="product.key" :item="product"></item>
    </section>
  </div>
</template>
If we observe the bits of code above, it looks quite similar to how we would usually set this up using just plain old Vue.
On this assumption, it would be likely that the :cart="shoppingCart" prop is holding data on the cart. And likewise, the v-for="product in products" is looping through all of the products. This would be a correct assumption to make.
The only thing to remember here is that this data isn’t coming from inside of our root App.vue file. It’s coming from our store.js file. So how does it get there? Let’s take a look at our computed properties from App.vue below:
computed: {
  shoppingCart() {
    return this.$store.state.cart
  },
  products() {
    return this.$store.state.items
  }
}
Put simply, we create two functions that return data from this.$store. We then call these two computed functions inside of the template, which we saw previously. We could have skipped the process of creating these simple return functions by doing this instead:
:cart="$store.state.cart"
and
v-for="product in $store.state.items"
And it would have still worked, but this can get unruly. It would also forgo the main benefit of computed properties - which is that the data they return gets cached, and if the underlying data changes, the computed property will re-evaluate and return the new result. So we take advantage of this when writing our computed properties. It also has the added benefit of keeping our template view a bit cleaner.
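That caching behaviour can be sketched in a few lines of plain JavaScript (a deliberately simplified model - Vue's real reactivity system tracks dependencies automatically rather than via a manual invalidate call):

```javascript
// Simplified model of a cached computed property: the getter only
// re-runs when a dependency has been marked dirty.
function computed(getter) {
  let cached;
  let dirty = true;
  return {
    get value() {
      if (dirty) {        // recompute only when needed
        cached = getter();
        dirty = false;
      }
      return cached;      // otherwise serve the cached result
    },
    invalidate() { dirty = true; } // stand-in for a dependency change
  };
}

const items = [{ price: 10 }, { price: 20 }];
const total = computed(() => items.reduce((sum, i) => sum + i.price, 0));

console.log(total.value); // 30 (computed once)
console.log(total.value); // 30 (served from cache)

items.push({ price: 5 });
total.invalidate();
console.log(total.value); // 35 (recomputed)
```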
Note: I should also mention that Vuex’s documentation talks about a mapState helper, which can be used in verbose apps that would otherwise need to lean on making lots and lots of computed property functions. Because our app is not going to be leaning on this too much, we will not be making use of mapState. If, however, you are reading this article with a view to building a huge application, I’d highly recommend reading up on mapState as it can be pretty useful! You can check out the link in the docs here. Ahead of time, I’ll also note that there are map helpers for all of the core concepts that we will be looking at in this article, but none will be used for the sake of brevity.
Okay, so computed properties inside of child components are being used here to simply return data from this.$store. That’s cool, but what about when we want to use computed properties like we normally do in Vue? Well, we could just write the same code that we normally do, but this wouldn’t be fully taking advantage of Vuex’s capabilities. We also want to be writing computed properties inside of our store.js that we can use throughout our application. So can we just write computed properties inside of store.js? Well, yes we can! But they look a little bit different. Enter getters!
Getters are essentially computed properties. Like computed properties, a getter’s result is cached based on its dependencies, and will only re-evaluate when some of its dependencies have changed. A slight difference with traditional computed properties is that the functions we create inside of getters will always need to be passed state as a parameter. We will take a look at an example that we be using inside of our eCommerce app after the next paragraph.
So with our shopping cart, we want it to contain the contents of each product that gets added to it. But each item is likely to be an object (which contains the product’s ID, name, size and price). Our shopping cart is also going to display the total price. We can write a getter function that looks at the contents of the shopping cart, grabs the price of each item, adds them together and returns the sum.
Let’s take a look:
getters: {
  total: state => {
    if(state.cart.length > 0) {
      return state.cart.map(item => item.price).reduce((total, amount) => total + amount);
    } else {
      return 0;
    }
  }
}
Not sure how map and reduce work? I suggest you click here.
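As a quick aside, here is the same map-and-reduce pattern on a plain array, outside of Vuex entirely (the shoes and prices below are made up for illustration):

```javascript
const cart = [
  { shoe: 'Hiking Boot', price: 60 },
  { shoe: 'Trail Runner', price: 45 }
];

// map pulls out just the prices...
const prices = cart.map(item => item.price); // [60, 45]

// ...and reduce folds them into a single sum.
const total = prices.reduce((sum, amount) => sum + amount);

console.log(total); // 105
```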
We’ve wrapped the return inside of an if statement, so that if the cart is empty, we show the total price as 0.
We then want to pass this.$store.getters.total down to the right place in our app. You’ll also notice that we’re referencing $store.getters this time instead of $store.state which makes sense since we just made a getter function.
Now we could either pass this straight into our ShoppingCart.vue, but let’s continue the initial design decision made earlier to create computed functions inside of App.vue that simply return the data held in the store.
So let’s go ahead and add a function that does this:
totalAmount () {
  return this.$store.getters.total
}
This function now sits in the computed properties section inside of App.vue, alongside the ones we wrote earlier.
Finally, we pass totalAmount down as a prop to ShoppingCart.vue by passing it to the <shopping-cart> tag inside of App.vue, like so:
<shopping-cart :total="totalAmount"></shopping-cart>
We can then reference the total in our ShoppingCart.vue component by simply writing this:
<p>Total: ${{total}}</p>
And, just in case you were wondering, the dollar sign is here to simply put a literal dollar sign at the start of the price. It isn’t required for any sort of Vue syntax, such as this.$state - just thought I should clear that up!
So now our app is starting to come along quite nicely, and we’ve already utilized two of Vuex’s five core concepts!
Okay so we have our shopping cart displaying some data, but how about actually making the ‘Add to Cart’ buttons work so that we can go about adding things to our cart? Let’s take a look!
The mutations property is kind of similar to the methods property that you would have in a standard Vue app. But when we use Vuex, we cannot modify anything inside of the store’s state directly. So in order to modify state, we must write a mutation that will handle this for us.
Similar to getter properties, we will be passing state as a parameter to any function that we create. In our case, we want to write a function that adds a product to our cart. The product in question will be added whenever a user clicks the ‘Add to Cart’ button that belongs to the particular product.
So far, our function looks like this:
addToCart(state) {
}
Now imagine that we were writing this app without Vuex. Our addToCart() function would likely emit some data along with it, in order for our state to know what product was being added to the cart. With Vuex, functions inside of our mutations can also accept an additional parameter which acts as a payload to carry some data with it.
So let’s add that in:
addToCart(state, payload) {
}
If “payload” sounds like a strange word, that’s because it is. In this context, it is basically the technical term for saying that we can send something into the function, like a string, an integer, an array, an object, etc.
We can then write a bit of code that simply pushes payload into our cart, like so:
addToCart(state, payload) {
  return state.cart.push(payload);
}
Okay, so we’ve written the mutation.
But we can’t just go to our child components and write something like this.$store.mutations.addToCart, because that wouldn’t work. So how do we actually just call these mutation functions? Enter store.commit!
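To demystify what commit does, here is a tiny plain-JavaScript simulation of the pattern - this is only an illustration of the mechanics, not the real Vuex implementation:

```javascript
// A stripped-down stand-in for a Vuex store (illustrative only).
const state = { cart: [] };

const mutations = {
  // The same mutation we wrote in store.js.
  addToCart(state, payload) {
    state.cart.push(payload);
  }
};

// commit looks the mutation up by name and hands it state plus the payload.
function commit(type, payload) {
  mutations[type](state, payload);
}

commit('addToCart', { id: 1, shoe: 'Hiking Boot', size: '9', price: 60 });
console.log(state.cart.length); // 1
```

The real store.commit does more (strict-mode checks, devtools integration), but the lookup-and-call idea is the same.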
So we are going to take a slightly different approach from some of the previous examples we have encountered with calling state and getters. We won’t be adding any sort of computed property that returns the function we just created. Instead, we are going to go straight into Item.vue and we’ll create a method.
The method will have the same name, addToCart - though note that this isn’t strictly necessary. I simply felt it was appropriate to give the committing method the same name as the mutation so that it’s easier to remember.
The function looks like this:
methods: {
  addToCart(item) {
    this.$store.commit('addToCart', item)
  }
}
What this is doing is simply calling the mutation that we made with the same name, and is passing it the item - which, if we remember from before, is basically the entire product object.
We then attach this onto the button inside of Item.vue as such:
<button @click="addToCart(item)">Add To Cart</button>
Now whenever we click on the ‘Add To Cart’ button, it adds the product object into the cart. The beauty here is that, whenever we add an item to the cart, the ‘No. of Items’ in the cart increases by 1 and the Total updates with the current total amount! How amazing is that?!
But we’re not finished yet.
Although our item is being added to the cart, our function currently adds the entire contents of the product into the cart (so name, price, all available sizes, image, etc). It currently pays no attention to what size boot has been selected.
This obviously isn’t good. So let’s go fix that!
Now, with the size picker, I decided this is something better handled in local state (i.e., inside of Item.vue). The reason is that this is the only place where the selected size needs to reside, and moving it into the store would add unnecessary overhead.
So with this in mind, we have added the following v-model to our size-picker part inside of Item.vue:
<select v-model="size">
  <option v-for="size in this.item.sizes">{{size}}</option>
</select>
And then in the data part:
data() {
  return {
    size: ''
  }
}
This also has the added benefit of setting the default selected size to a blank string. So if we wanted, we could add some validation in to prevent a user from being able to add a pair of boots to the cart if a size has not been selected.
Now when a user picks a size, the size inside of data() will be updated. We are then going to pass this in to the payload we set up earlier.
As you may remember, the payload would automatically add the entire item object (including all of the sizes). We will edit this by manually passing in certain data, and, in doing so, will overwrite the part that takes in all of the sizes and will replace it with just the size that the user has selected. Let’s take a look:
this.$store.commit({
  type: 'addToCart',
  id: item.id,
  shoe: item.name,
  size: this.size,
  price: item.price
})
So this looks like quite a lot more code to set up a this.$store.commit, but essentially all we have done here is pass an object in to the commit instead.
We set up a type, which is simply the name of the mutation. Then instead of passing the entire item, we pass in individual parts of the item. When we get to the size, we can then pass in this.size which will grab the selected size. In fact, we can add a little bit more to this to do the validation that we mentioned earlier:
if(this.size !== '') {
  this.$store.commit({
    type: 'addToCart',
    id: item.id,
    shoe: item.name,
    size: this.size,
    price: item.price
  })
}
So now, our code will only add an item to the cart if a size has been selected! How neat!
Actions and Modules are the two other core concepts in Vuex. Our shopping cart doesn’t really require these, so we won’t cover them in too much detail, but I’d still like to give you a brief overview of them.
Actions are similar to committing a mutation. The difference being that mutations are synchronous, so whenever we commit one, it will fire immediately. Actions are useful when we are dealing with asynchronous code.
For example, if we needed to pull in data from an API before committing a mutation, we would look to utilize actions in conjunction with mutations. Our shopping cart application does not require this, but if yours does, I strongly recommend you read into the Vuex documentation on actions for a primer.
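To make this concrete, here is a hedged sketch of what an action might look like for an app like ours - loadProducts, setProducts, and fetchProducts are all invented names for illustration, not code from the cart we built:

```javascript
// Stand-in for a real HTTP request (an assumption for this sketch).
function fetchProducts() {
  return Promise.resolve([{ id: 1, shoe: 'Hiking Boot', price: 60 }]);
}

// Actions receive a context object; we destructure commit from it.
// The async work finishes first, and only then do we commit a mutation.
const actions = {
  loadProducts({ commit }) {
    return fetchProducts().then(products => {
      commit('setProducts', products);
    });
  }
};
```

Inside a component, an action like this would be invoked with this.$store.dispatch('loadProducts') rather than commit.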
Modules are useful for those occasions when you are writing a complex application that has a lot of tentacles and has a ton of stuff going on. They allow you to break up your single Vuex store into smaller fragments in order to help it become more manageable and less unruly. Again, I recommend Vuex’s page on Modules for more information.
We have built an eCommerce application that uses Vue for handling reactivity and, most importantly, utilizes Vuex to manage the state of the app!
If you would like to take a look at the code for this app, check out the Github repository here:
For more info on Vue:
Looking to use Vuex with Kendo UI for Vue? Check out this quick guide!
https://www.telerik.com/blogs/learn-how-to-use-vuex-by-building-an-online-shopping-website
On Wed, Jul 7, 2010 at 00:19, Jim Meyering <address@hidden> wrote: > A Burgie wrote: > Start by reading README* and HACKING. > The GNU Coding Standards has plenty of useful background info. > Run "info standards" or see. > > ... >> ** New features >> >> + stat now shows the mountpoint of a specified file or directory >> + in its default output. It also will show this when a format is >> + explicitly specified through the use of the %m specifier. > > As discussed, I'd rather not change the default output. > > stat can now print the mount point of a file via its new %m format directive Sorry... old version of NEWS. It wasn't really there at the time and will not be in the future. >> - du now uses less than half as much memory when operating on trees >> - with many hard-linked files. With --count-links (-l), or when >> - operating on trees with no hard-linked files, there is no change. > > Oops. Your patch would revert the two recent additions to NEWS, > above and below. rebasing is my friend.... rebasing is my friend. It'll be right by shipping time. >> -** Bug fixes >> - >> - du no longer multiply counts a file that is a directory or whose >> - link count is 1, even if the file is reached multiple times by >> - following symlinks or via multiple arguments. 
>> >> * Noteworthy changes in release 8.5 (2010-04-23) [stable] >> >> diff --git a/doc/coreutils.texi b/doc/coreutils.texi >> index 21cf36d..ea3f142 100644 >> --- a/doc/coreutils.texi >> +++ b/doc/coreutils.texi >> @@ -10666,6 +10666,7 @@ The valid @var{format} directives for files with >> @option{--format} and >> address@hidden %G - Group name of owner >> address@hidden %h - Number of hard links >> address@hidden %i - Inode number >> address@hidden %m - Mount point >> address@hidden %n - File name >> address@hidden %N - Quoted file name with dereference if symbolic link >> address@hidden %o - I/O block size >> diff --git a/gnulib b/gnulib >> --- a/gnulib >> +++ b/gnulib >> @@ -1 +1 @@ >> -Subproject commit 7773f84fe1aa3bb17defad704ee87f2615894ae4 >> +Subproject commit 7773f84fe1aa3bb17defad704ee87f2615894ae4-dirty >> diff --git a/src/Makefile.am b/src/Makefile.am >> index 0630a06..f090087 100644 >> --- a/src/Makefile.am >> +++ b/src/Makefile.am >> @@ -145,6 +145,7 @@ noinst_HEADERS = \ >> copy.h \ >> cp-hash.h \ >> dircolors.h \ >> + findmountpoint.h \ >> fs.h \ >> group-list.h \ >> ls.h \ >> diff --git a/src/stat.c b/src/stat.c >> index c3730f0..f283437 100644 >> --- a/src/stat.c >> +++ b/src/stat.c >> @@ -68,6 +68,9 @@ >> #include "quotearg.h" >> #include "stat-time.h" >> #include "strftime.h" >> +#include "findmountpoint.h" > > nameslikethis are hard to read. > I prefer find-mount-point.h. Done thoughout. >> +#include "save-cwd.h" >> +#include "xgetcwd.h" >> >> #if USE_STATVFS >> # define STRUCT_STATVFS struct statvfs >> @@ -612,6 +615,7 @@ print_stat (char *pformat, size_t prefix_len, char m, >> struct stat *statbuf = (struct stat *) data; >> struct passwd *pw_ent; >> struct group *gw_ent; >> + char * mp; > > Remove the space-after-"*". Done. 
>> bool fail = false; >> >> switch (m) >> @@ -679,6 +683,14 @@ print_stat (char *pformat, size_t prefix_len, char m, >> case 't': >> out_uint_x (pformat, prefix_len, major (statbuf->st_rdev)); >> break; >> + case 'm': >> + mp = find_mount_point (filename, statbuf); >> + if (mp) { >> + out_string (pformat, prefix_len, mp); >> + } else { >> + fail = true; >> + } > > Your brace-using style is inconsistent with the rest of the code. > Drop them in this case, since those are one-line "if" and "else" bodies. Wondered about that. I see in HACKING it talks about this. Fixed. >> + break; >> case 'T': >> out_uint_x (pformat, prefix_len, minor (statbuf->st_rdev)); >> break; >> @@ -1025,6 +1037,7 @@ The valid format sequences for files (without >> --file-system):\n\ >> fputs (_("\ >> %h Number of hard links\n\ >> %i Inode number\n\ >> + %m Mount point\n\ >> %n File name\n\ >> %N Quoted file name with dereference if symbolic link\n\ >> %o I/O block size\n\ >> >> commit 70d1f1c97f322a164ee872c0399d9fbccc862b18 >> Author: Aaron Burgemeister <address@hidden> >> Date: Tue Jul 6 18:01:53 2010 -0600 >> >> broken-20100707000000Z >> >> diff --git a/src/findmountpoint.c b/src/findmountpoint.c >> new file mode 100644 >> index 0000000..665e2fc >> --- /dev/null >> +++ b/src/findmountpoint.c >> @@ -0,0 +1,93 @@ > > Every .c file must first include <config.h>. Ah ha. I added a bit more to actually get things where they are now but that is good to know. >> +#include "save-cwd.h" >> +#include "xgetcwd.h" >> + >> + >> +/* Return the root mountpoint of the file system on which FILE exists, in >> + * malloced storage. FILE_STAT should be the result of stating FILE. >> + * Give a diagnostic and return NULL if unable to determine the mount point. >> + * Exit if unable to restore current working directory. */ > > We don't use this style of comment. > Remove the "*" on continued lines. Sorry... 
I copied from an example I found, I think in shred.c, though the second example is wrong while the first is correct. Noted for the future and fixed. >> +static char * >> +find_mount_point (const char *file, const struct stat *file_stat) >> +{ >> + struct saved_cwd cwd; >> + struct stat last_stat; >> + char *mp = NULL; /* The malloced mount point. */ >> + >> + if (save_cwd (&cwd) != 0) >> + { > > You have reindented this function (changing > the brace positioning style to be contrary to the rest of coreutils). Finally figured out about the ':set [no]paste' stuff in vi so I can get around this. Hopefully fixed once and for all. > > The #ifndef...#endif is supposed to span the contents of the file. Fixed, and suddenly the point of compile-time conditionals comes back to me. >> + >> +/* Return the root mountpoint of the file system on which FILE exists, in >> + * malloced storage. FILE_STAT should be the result of stating FILE. >> + * Give a diagnostic and return NULL if unable to determine the mount point. >> + * Exit if unable to restore current working directory. */ > > Please remove this comment. > It duplicates the one in the .c file. Done. >> +static char * find_mount_point (const char *, const struct stat *); I think I can figure out a changelog. The one thing left, though, is a bit less cosmetic. Something about my makefile (which has the two new _SOURCES sections you suggested) is still not letting make compile everything properly. stat.o: In function `print_stat': /home/aburgemeister/code/coreutils/src/stat.c:687: undefined reference to `find_mount_point' collect2: ld returned 1 exit status make[3]: *** [df]
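For context, the %m directive discussed in this thread did ship in GNU coreutils (in release 8.6), so on a current Linux system the feature can be exercised like this:

```shell
# Print the mount point of the file system containing the given file
# (requires GNU coreutils 8.6 or later).
stat -c %m /
# The root directory is its own mount point, so this prints: /
```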
http://lists.gnu.org/archive/html/bug-coreutils/2010-07/msg00057.html
Dungeon Game
Introduction
Dynamic programming is an all-time favorite topic when it comes to tech interviews, and solving its classical problems surely helps you to clear the basics. But the Dungeon Game is not like the usual DP problems: it sure looks familiar, but it isn’t. Now, without wasting any more time, let’s jump to the problem.
Understanding the Problem
We have been given a dungeon (M x N grid). Every cell in the grid contains a number. The number may be positive or negative. We can move across a cell only if we have positive points (>0). Whenever we pass through a cell, points in that cell are added to our overall points.
We need to find the minimum initial energy (also referred to as health) required to reach cell (M - 1, N - 1), where our princess is waiting, from (0, 0) by following this set of rules:
- From a cell (i, j) we can move to (i + 1, j) or (i, j + 1).
- We cannot move from (i, j) if your overall points at (i, j) are <= 0.
- We have to reach at (M - 1, N - 1) with minimum positive points, i.e., > 0.
Let's understand this better by the following example.
Let’s say the following dungeon is given to us,
Now 5 would be the minimum initial points needed to reach the end. The path taken is as follows. (This is just one of the possibilities)
(0,0) -> (0,1) -> (0,2) -> (1, 2) -> (2, 2).
Explanation
As per our chosen path, at position (0, 0) our health is decreased by 1. Next, the cell containing -3 adds to the -1 we have already lost. After that, all the remaining cells on the path have non-negative values, so we will not lose any more health. Thus, in total, we need health = -(-1 - 3) + 1 = 5.
This might be a bit odd for you to grasp but believe me, you’ll get the idea as we will proceed further.
Let’s first look at the brute force approach and then, we will optimize it further.
Brute Force
The best idea is to devise the recurrence relation. The recurrence relation is quite intuitive, as we have only two options to choose from: either go down or right. So, at any point of a particular cell, if we know the answer for min health requirement if we go right vs we go down, then we can easily choose between them.
We will create a helper function getVal(MAT, i, j), which will take three arguments, ‘MAT’ (dungeon) and two variables, ‘i’ and ‘j’. The variables ‘i’ and ‘j’ will represent the row and column index and will help us to iterate over the dungeon. Let’s look at the working of the function
- Base Case:
1. If we go out of bounds, i.e., i == ’N’ or j == ’M’, simply return INF.
2. If we reach the princess, i.e., i == ’N - 1’ and j == ’M - 1’, we will check if the value of ‘MAT[i][j]’ is positive or negative. If it is positive, we will return 1 (given in the problem statement as we need a minimum of 1 energy to cross the cell), and if it is negative, we will return -MAT[i][j] + 1 because this is the energy that will get exhausted.
- Recursive calls:
- As we are allowed to go only right or down, thus we will make two calls to this recursive function and store their answers in two variables ‘RIGHT_COST’ and ‘DOWN_COST’.
- Now the ‘MINIMUM_HEALTH’ required would be the minimum of the two above values minus the value of matrix at that point, i.e., min( RIGHT_COST, DOWN_COST) - MAT[i][j].
But this is not our final answer. We still have to check whether ‘MINIMUM_HEALTH’ is positive or not. If it is less than or equal to zero, we return 1, as at least 1 energy is always required to stand on a cell. Otherwise, we return ‘MINIMUM_HEALTH’ itself.
Below is the code for the above implementation.
Code
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int getVal(vector<vector<int>> &mat, int i = 0, int j = 0)
{
    int n = mat.size();
    int m = mat[0].size();

    // Base case: when we go out of bounds of the dungeon.
    if (i == n || j == m)
        return 1e9 + 5;

    // When we have reached our destination.
    if (i == n - 1 && j == m - 1)
        return (mat[i][j] <= 0) ? -mat[i][j] + 1 : 1;

    // Making recursive calls for both the options.
    int rightCost = getVal(mat, i, j + 1);
    int downCost = getVal(mat, i + 1, j);

    // Finding the min health required using the above values.
    int minHealthRequired = min(rightCost, downCost) - mat[i][j];

    // Return the final answer.
    return (minHealthRequired <= 0) ? 1 : minHealthRequired;
}

int calculateMinimumHP(vector<vector<int>> &dungeon)
{
    return getVal(dungeon);
}

Time Complexity

O(2 ^ (M + N)), where ‘M’ is the number of rows of the dungeon matrix, and ‘N’ is the number of columns.
As every time we are making two calls to the recursive function.
Space Complexity
O(M + N), where ‘M’ is the number of rows of the dungeon matrix, and ‘N’ is the number of columns.
This is the height of the recursion tree and hence the space used by the recursion stack.
Now we don’t want exponential time complexity, and if we look very closely, we can see there are overlapping subproblems. When we hear overlapping subproblems, the first thing that comes to mind is dynamic programming. But this problem is a bit different than usual dynamic programming problems, so let’s first look at the thought process for the same.
DP Thought Process
When you’re in some room in the dungeon, you need enough health to do two things.
- Survive the health cost of the room, and
- Have enough health left over to move to a different room, either right or down.
If we add these two together, the sum represents the amount of health we need to exist at a particular location. Luckily, we already know one of these: the amount of health we need to survive the health cost of the room.
What about the second one?
The amount of health we need to move to a different room is simply the amount of health we need to exist in another room, which is the exact kind of thing we’re calculating! This means that we can work backward from the end, where the princess(DUNGEON[M - 1][N - 1]) is, up to the start to get our answers.
Remember that since we want to minimize the health we need at the start, of the two actions we can take, going right or down, we will take the one that requires the less amount of health.
But what about the end?
You can’t move to a different room once you hit the end!
Yes, this is correct! That’s why we’ll have to do something special for the end, and pretend that the cost we need to move to a different room is 1. This works perfectly fine because it is the exact same thing as saying that once we hit the end and pay the associated health cost, we must have at least 1 health left over to be alive! This is the end criteria we need to meet, which we know from the problem statement itself.
Approach and Algorithm
To begin, we keep a 2D array ‘DP’ of the same size as the grid, where DP[i][j] indicates the minimum points needed when entering cell (i, j) to guarantee the journey to the destination can be completed. DP[0][0] is then our final answer, which is quite intuitive. As a result, we must fill the table from the bottom-right corner up to the top-left in order to solve this problem.
But how will we fill the table?
Before filling the table, let us first determine the minimum health required to exit cell (i, j). Remember we are moving from bottom to top; thus, only two options are available: (i + 1, j) and (i, j + 1). Of course, we’ll pick the cell from which the rest of the journey can be completed with less health. As a result, we have:
MINIMUM_HEALTH_ON_EXIT = min(DP[i + 1][j], DP[i][j + 1])
Now that we know how to calculate MINIMUM_HEALTH_ON_EXIT, we must populate the table DP[][] in order to obtain the solution in DP[0][0].
Also, we have to reduce the health required to stay in the current cell so that we will store our probable answer in ‘VAL’.
VAL = MINIMUM_HEALTH_ON_EXIT - DUNGEON[i][j]
How do we calculate DP[i][j]? Since we always need at least 1 health to stand on a cell, DP[i][j] is the larger of ‘VAL’ and 1:
DP[i][j] = max(VAL, 1)
Now let's look at the implementation of the above approach.
Code
#include <iostream>
#include <vector>
#include <climits>
using namespace std;

int calculateMinimumHP(vector<vector<int>> &dungeon)
{
    int r = dungeon.size();

    // Base case: if the size of the dungeon is zero.
    if (r == 0)
        return 0;

    int c = dungeon[0].size();

    // We will create a DP table and initialize it to INT_MAX.
    vector<vector<int>> dp(r + 1, vector<int>(c + 1, INT_MAX));

    // Initialize the cells below and to the right of the princess' cell
    // with value 1 (as a minimum of 1 energy is needed).
    dp[r - 1][c] = 1;
    dp[r][c - 1] = 1;

    // Loop over the DP table from bottom-right to top-left.
    for (int i = r - 1; i >= 0; i--)
    {
        for (int j = c - 1; j >= 0; j--)
        {
            // (Minimum health value to land on the next cell) - (health needed to stay).
            int val = min(dp[i + 1][j], dp[i][j + 1]) - dungeon[i][j];

            // Now the answer is either 1 or the 'VAL' that we calculated.
            dp[i][j] = max(1, val);
        }
    }

    // Returning the answer that is stored in DP[0][0].
    return dp[0][0];
}

Time Complexity

O(M * N), where ‘M’ is the number of rows of the dungeon matrix, and ‘N’ is the number of columns.
As we are using two nested loops, the first running up to ‘M’ and the second up to ‘N’, the time complexity is O(M * N).
Space Complexity
O(M * N), where ‘M’ is the number of rows of the dungeon matrix, and ‘N’ is the number of columns.
As we are using a DP array of size M * N.
Key Takeaways
We first saw the brute force approach to solve the problem. We gradually build the recursive relation and saw how overlapping subproblems can be dealt with using dynamic programming.
You must be thrilled to learn the idea and intuition to approach similar DP problems. But learning never stops, and a good coder should never stop practicing. So, head over to our practice platform CodeStudio to practice top problems and many more. Till then, Happy Coding!
https://www.codingninjas.com/codestudio/library/dungeon-game
|
PowerTCP Components for .NET
By Brian Noyes
A common requirement for Web and Windows applications is the need to use Internet protocols to connect and get information and data from other servers on the Internet or intranet. There is some basic network protocol support in the System.Net namespace of the .NET Framework, but for many common protocols, the built-in support is either non-existent or lacking in features or productivity for most real apps.
PowerTCP Components for .NET fill this network protocol void in the .NET Framework very nicely with a set of easy to use components that let you integrate into your applications support for POP, IMAP, SMTP, Telnet, FTP, and socket-based protocols. The components are written in 100% managed code, so you won't run into any hidden performance penalties for interop under the covers, or the related security issues for calling into unmanaged code. This also simplifies the deployment of your app, because the components can all be xcopy deployed to any supported .NET platform. The component suite also includes secure variants of the mail, FTP, and sockets components that work with encryption and authenticated environments. Using PowerTCP Components for .NET will save you a ton of time writing low-level plumbing code and will instead let you focus on your business logic or presentation coding.
I first came across the PowerTCP components while searching for a good mail component for an application. After comparing several other mail components, I found that I liked the PowerTCP Mail for .NET component the best for a number of reasons, all of which apply to the other components in the suite. For starters, the site has good detailed information about the object model and contained components, as well as sample code and tutorials, that let me quickly get a sense of what I would be working with. This saved me from having to download and install a trial product and start writing code to figure out whether I was even interested, which is often required with other vendors' components.
I also liked what I saw when I did start looking at their object model. The components appeared well designed and consistent with .NET Framework classes in the way they exposed their behavior and state through methods, properties, and events, as well as the use of collections and streams where it made sense. They also offer synchronous and asynchronous handling for most things in a way that models the implementation in delegates and readers/writers in the framework. Based on the information available and the clean design I was able to quickly determine that the PowerTCP Components for .NET would give me the best flexibility, but still result in clean, concise, consistent code.
The PowerTCP Mail for .NET components contain capabilities for handling POP3, IMAP, and SMTP protocols for sending and receiving e-mails. They are meant for use on the client side, and are easily integrated into existing Web or windows applications. For example, I found it a quick and easy exercise to create a simple Web interface that showed a listing of e-mails residing on a separate POP server using these components by getting back a collection of header information and using standard .NET data binding to display the results. You could use these to add a Web mail front end on any mail server as part of your site. There is also a great MIME parser in this part of the component suite that could be very useful even outside the e-mail send and receive scenario. The secure variant of this component suite simply layers encryption and authentication capabilities onto the base capabilities for those who need it.
Figure 1. Putting together an app that uses the PowerTCP components is a piece of cake. Shown here is a simple Web mail viewer of my POP inbox, created in less than 10 lines of code. Simply log in, iterate through the Messages collection, and populate a DataSet for data binding to the DataGrid. This literally took me less than 15 minutes to figure out.
The FTP components allow you to quickly add client-side access to FTP servers to your applications. The primary components you use let you connect to the server, and you can get and set files using intuitive methods on the main Dart.PowerTCP. component, or you can execute any of the FTP commands by calling an Invoke method with an enumeration argument for the various supported commands.
Although there is decent socket support in the .NET Framework, it takes a lot of code to implement things right. If you want to speed your socket coding efforts considerably, the PowerTCP Sockets and SSL Sockets for .NET components are worth a look. They provide a rich object wrapper on top of what the framework provides, allowing you to focus on your application logic rather than on plumbing code. Using these components, you can easily make calls using many socket-based network protocols, such as Ping, Trace, UDP, DNS, and others. You can create multithreaded servers and clients to implement secure and high-performance communication channels between your apps and others, potentially on other platforms. Likewise, the PowerTCP Telnet for .NET components let you add Telnet client communications to your apps with minimal code.
The site also mentions some upcoming additions to the suite, including SNMP, a Web server, and Zip compression, but with unspecified release dates. Overall, I have been very pleased with these components. They come with a lot of sample code, good documentation that integrates into the MSDN Help system, and they have good support newsgroups on the site. The only minor criticism I have concerns the code samples in the individual component method and property documentation pages. There are placeholder comments in most of the samples, as if that is an intended future addition along the lines of what is provided with each .NET Framework class method and property, but for now, most of them do not display sample code at that level of detail.
If you need to add mail, FTP, socket-based network protocol, or Telnet capabilities to your site or application, the PowerTCP Components for .NET are definitely worth a look.
Rating:
Web Site:
Subscription Price: US$1,999
http://www.itprotoday.com/software-development/powertcp-components-net
|
SYNOPSIS
#define _GNU_SOURCE /* See feature_test_macros(7) */
#include <string.h>
char *strfry(char *string);
DESCRIPTION
The strfry() function randomizes the contents of string by using rand(3) to randomly swap characters in the string. The result is an anagram of string.

RETURN VALUE
The strfry() function returns a pointer to the randomized string.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
The strfry() function is unique to the GNU C Library.
COLOPHONThis page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
|
http://manpages.org/strfry/3
|
CC-MAIN-2021-10
|
refinedweb
| 108
| 60.11
|
Add Folder Targets
Applies To: Windows Server 2008 R2
A folder target is the Universal Naming Convention (UNC) path of a shared folder or another namespace that is associated with a folder in a namespace. Adding multiple folder targets increases the availability of the folder in the namespace.
To add a folder target
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Namespaces node, right-click a folder, and then click Add Folder Target.
3. Type the path to the folder target, or click Browse to locate the folder target.
4. If the folder is replicated by using DFS Replication, you can specify whether to add the new folder target to the replication group.
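The same operation can also be scripted with the dfsutil command-line tool that ships with the DFS tools on Windows Server 2008 R2. The paths below are placeholders, and the exact syntax should be verified against `dfsutil target add /?` on your system:

```cmd
rem Add \\FS02\Docs as an additional folder target for the
rem namespace folder \\contoso.com\Public\Docs (example paths only).
dfsutil target add \\contoso.com\Public\Docs \\FS02\Docs
```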
|
http://technet.microsoft.com/en-us/library/cc732105(d=printer).aspx
|
CC-MAIN-2013-20
|
refinedweb
| 120
| 51.48
|
I’ll start this article with three short excerpts from the book “Things I’ve never heard at a software consultant firm”:
Project manager: “Our client wants you to work on all concrete implementations.”
Developer: “Great — they’re here in the ‘Classes’ namespace so it’s easy for me to work on all of them at once.”
Project manager: “We need to work on all our interfaces.”
Developer: “How lucky that they’re all stuffed into the ‘Interfaces’ project.”
Developer: “I’m really glad this solution is organized by type rather than by logical groupings of related code. This way I can enjoy a cruise through the entire codebase every time I work on a feature.”
Why is organizing code by type bad?
The decision of where to put code greatly influences the development workflow and how the solution architecture evolves over time.
The chance of creating a big ball of mud is much higher when using e.g. a single bucket for all concrete implementations; it leads to spaghetti code because all classes are seemingly closely related, and hence should seemingly be allowed to reference one another.
Simply put, having a namespace or project called “Interfaces” filled with every single interface in the entire solution communicates absolutely nothing about which of — and how — those interfaces are supposed to work together. The same is true for classes, enumerations, functions and whatever else your programming language of choice has to offer.
At best, organizing code by type violates the principle of least astonishment. At worst, organizing code by type will leave the development team digging through the codebase again and again in search of the pieces making up any given feature, and increases the risk of breaking feature encapsulation.
What’s a better alternative?
Start off by reading about various software architecture methodologies and make an opinionated choice fitting the project in question; revisit your choices regularly to avoid the Golden Hammer pit.
For most of the web projects I’ve worked on, I’ve advocated organizing code into modules/components. For me, fleshing out components has become the natural extension of class-level OOD, applied at a larger scale. Hence my answer is:
Organize the code in your solution into expressive components, just like you would organize functions and functionality into expressive interfaces and classes.
The above is far from an exact science and, quite honestly, is based on gut feeling developed over the years. I’ve tried to exemplify the differences between “type-based source code placement” and component-based architecture in the following simplistic example, which should still make the main point clear: organizing code purely by type is bad.
Example
Let’s start off with a dialog that actually has a chance of taking place between a PM and a dev:
Project manager: “Our client wants to integrate with a new payment provider.”
Developer: “Great, I’ll get right on working with the payment provider related code.”
You’ve worked on the project for some time and seem to remember there are both classes and interfaces related to the existing payment provider integrations.
Which source code structure gives you the best vantage point for your task?
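For illustration, here are two hypothetical layouts for the same solution (folder and file names are invented for this example). In the type-based layout, the payment code is scattered across buckets; in the component-based layout, it lives together:

```text
Organized by type:
  Solution/
    Classes/
      DibsPaymentProvider.cs
      PaypalPaymentProvider.cs
      ProductRepository.cs
    Interfaces/
      IPaymentProvider.cs
      IProductRepository.cs

Organized by component:
  Solution/
    Payments/
      IPaymentProvider.cs
      DibsPaymentProvider.cs
      PaypalPaymentProvider.cs
    Catalog/
      IProductRepository.cs
      ProductRepository.cs
```

With the component layout, “work on the payment provider code” maps directly to a single folder, and the boundary between Payments and Catalog is visible in the solution structure itself.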
Who’s organizing code by type?
The tendency to organize code by type is much more prevalent in small to medium sized in-house development teams than in software consultancy companies.
This statement is based on a decade working on and/or reviewing code from 10+ in-house teams and 20+ consultancy firms.
References
The following is an incomplete list of companies whose code I’ve reviewed, worked on, worked with or for some other reason have had my hands and/or eyes on. It’s not a list of companies who organize code by type.
- Djøf
- Pentia
- Brüel & Kjær
- Valtech
- Appstract
- Oxygen
- Redhotminute
- Ofir
- 1508
- Codehouse
- AKA-IT
- Creuna
- LM Windpower
- Addition
- Netcompany
- Jayway
- FK Distribution
- Bysted
- Digizuite
More career info available here.
One thought on ““A place for everything and everything in its place” — thoughts on organizing source code by type”
I totally agree, well said. I hate it when it’s organized by type, and when it continues into the Sitecore structure it’s really hard to work with
|
https://reasoncodeexample.com/2016/03/06/a-place-for-everything-and-everything-in-its-place-thoughts-on-organizing-source-code-by-type/
|
CC-MAIN-2020-05
|
refinedweb
| 704
| 50.67
|
Are you looking for a product to sell? These apps can help you find products and automate your fulfillment and shipping.
Buy Now Button - Amazon Walmart Jet eBay etc
Free – $4.99 / month
Easily add your custom BUY NOW ON AMAZON button (and eBay/Jet/Walmart) to your Shopify pages. You choose desired products and channels.
Dropshipping by Dropwow
Free
Dropshipping in 1-click. Fulfill orders automatically, track shipping and import high quality products from best suppliers.
Troupe
Custom
Easily design & sell boutique-quality jewelry featuring your artwork. It's simple, fun, and FREE!
MonetizeSocial
Custom
Sell on demand custom merchandise world wide! No inventory, no upfront investment! Print your designs on t-shirts, hats, phone cases, etc.
Gifty Dropshipping
$4.99 / month
Premium drop ship products, with after checkout gift messaging (video, audio, picture & text), for these and all products in your store.
Amazon Associate Connector by InfoShoreApps
$4.95 / month
The easiest way to import products from Amazon to Shopify. NO TECHNICAL SKILLS REQUIRED.
ApparelPop!
$25.00 / month
Start a dropshipping business in minutes. Over 100,000 licensed apparel products. Printed and shipped in the USA.
Rocketees
$25.00 / month
Sell T-shirts? Let Rocketees be your competitive edge! Just $6 per tee, with rocket-speed fulfillment.
Printy6 (Snaprinting) - Print on Demand for Ecommerce and Artist
Free
Free Online Customization Service With Big Profit! Start Selling Your Exclusive Designs Within 10 min!
Package Movers
$29.00 / month
The premier Shopify fulfilment app for customizable jewelry
DropWorx
$19.99 / month
Dropworx automates the dropshipping process by eliminating manual tasks. This frees up time to focus on improving your online store!
External Product Links By Thirsty Software
$4.99 / month
Modify your add to cart buttons so that they link to external websites. Great for linking to Amazon, eBay and other affiliate programs.
Expressfy - Aliexpress Dropshipping
CustomCat
Custom
On demand product decoration and fulfillment. Customize over 550 unique products to sell on Shopify. Orders ship in 2-3 business days...
Print Partners
Custom
Print Partners is a complete solution for Direct to Garment printing and fulfillment.
Adjustinator
Free
Personalize your Print-On-Demand products
Doba
$9.99 / month
Instantly add the products your customers are searching for directly to your Shopify store. Fast, reliable delivery direct to your customer.
MODALYST for Suppliers
Free
Increase distribution, raise brand awareness, earn higher margins, & manage 1,000s of retailers through our automated dropshipping platform.
SMAR7 Express - Fulfillment Automation
$7.99 / month
SMAR7 Express allows anyone to easily setup and completely automate their very own drop shipping business with one click fulfillment.
Spocket
From $0.00 / month
Find Dropshipping products from thousands of reliable suppliers in US/Europe/Canada/Asia. Shopify sourcing products, fast-shipping
Tshirtgang Printing & Fulfillment
Free
Your On Demand Printing Company.
Aliexpress Dropshipping
From $5.00 / month
Import desired products from Aliexpress, eBay, Overstock, DHgate and many more to your store with ease
|
https://apps.shopify.com/categories/product-sourcing?page=2&sortby=newest
|
CC-MAIN-2018-26
|
refinedweb
| 483
| 53.27
|
This is an automated email from the git hooks/post-receive script. gregoa pushed a change to annotated tag upstream/0.11021 in repository libsql-translator-perl.
at 661ead4 (tag) tagging 2fe1bdb0d6d9352883e689cc044df476537d5bdb (commit) replaces upstream/0.11020 tagged by gregor herrmann on Fri Jun 12 19:09:29 2015 +0200 - Log ----------------------------------------------------------------- Upstream version 0.11021 Aaron Schrab (3): Fix POD for Schema::Index::type method Avoid warning about exiting sub with next Add trigger support to PostgreSQL producer and parser (including trigger scope) Aaron Trevena (3): added triggers and procedures added expected trigger and stored procedure output fixed triggers Alexander Hartmaier (4): Fix Oracle producer creating numeric precision statements that the test case expects (no whitespace) added myself to the list of authors test + fix for Oracle multi-column constraint generation Fix Oracle producer Allen Day (82): added Class::Base preq an adaptor for postgres. this works, but i think my primary key and here is the pgsql test script. NOTE: it will not work right now b/c i've moving files around per ky's request added a serial->int auto_increment fix, a varchar->varchar(255) workaround. BUG. the parser cannot handle 'precision' as a qualifier for 'double'. this also applies to the mysql parser. i forget what i did, but i found more bugs. we need to be able to support 'varchar' turning off debugging in t/08 workaround to get auto_increment working from PG "serial" datatype. i didn't do this right, someone fix it :| adding callbacks to Translator.pm to allow mangling of PK/FK/table names/package names (CDBI specific) adding recognition of key type "key" for table indices. adding ClassDBI producer. asdf adding capability to give 'filename' constructor arg an arrayref. cosmetic changes to autogenerated code. adding graphviz dep adding a pg src file example low hanging fruit, please read the diff below some fixes to fk method name generation. failed to add nice m-to-m mapping method b/c added width and height options for graphviz out. no docs linktable traversal seems to be working! 
haven't tried the code yet, but it looks good. supposedly hasa()is deprecated in favor of has_a(). it was buggy move over bacon fk references in many-to-many mappings need to refer to table column names adding strict and quotes for barewords fixes for base DBI package setting db vendor based on $translator->parser_type(). had to fix up the format_pk_method (somehow it got broken, hmm...). commenting shortcoming aliasing ugly tablename_fieldname ref methods where possible (ie create them no longer using set_up_table method. it incurs an overhead penalty by making removing changes related to primary/essential/other categorization of fields. this moving reusable (useful?) code into schema classes more functionality in Table.pm, Turnkey.pm is broken (working on it...) firming up can_link() turnkey producer still broken, will try to get it working on the plane... adding dg classes to represent schema. these are currently dependent on moved graph generating code (except hyperedges) to Graph.pm, producer adding more XMI files. these are identical models in different XMI this is pretty stable, and is ( i think ) generating usable class::dbi fix for strange interpretation of 1liner oops, missing } we need to rely on the vendor set_up_table() method b/c ddl may not have cleanup for 0.03 release adding COMMENT processing to Pg parser ya removing hardcoded colors/styles to CSS transfering functionality from branch factored out more styles to css. atom macros are now focus-aware. fixing bugs on sf turnkey tracker merging patch from Dave Cash. description: added credit for dave cash individual tables now returned as a hash of separate tt2 templates, keyed return hashref instead of hash template files shall be lowercase add file extension no warnings for redefined methods fixes for code clarity more readable tag names improved element names uri->object mapping changing from GET params to URI path encoding bugfixes in Class::DBI method generation. 
they were caused by bad schema aeshetic fixes, largely. many graph introspection problems solved. method name generation is still updated turnkey regression testing schema. added comments RE what role add test data m2m FK link table import mappings now generate traversal methods. whew! no autocommit silence redefined warnings Class::MakeMethods prereq for Turnkey producer. almost working. there is a package name mangling problem where the lc the template files... flipflop some refactoring. moving to being able to call $schema->as_graph to do patch for parser/producer args courtesy of darren (w/ embellishments by me) Producer::Turnkey obsolete. now rely on TTSchema parser (see concommitant replaced by TT producer incrementing for next release *** empty log message *** fix to allow GraphViz to load -- someone who understands why this is needed please comment! changes to allow subclass tables identical to superclass backing out non-callback package mangling missed a spot Amiri Barksdale at Home (3): Fix CREATE VIEW syntax. Add test for DROP VIEW IF EXISTS Release 0.11010 Andreas 'ac0v' Specht (2): fixed alter_drop_constraint for foreign keys and applying multiple changes via alter_field to a column in Postgres Producer added a working mechanism for naming foreign keys Andrew Gregory (1): Provide default index names for SQLite Andrew Rodland (3): SQLite parser: handle named constraints, and don't choke on a comment Forgot to add the test data file for r1676 Fix SQLite producer create_view so it doesn't generate statements with semicolons. 
André Walker (1): Make MySQL producer add NULL for every nullable field Arthur Axel 'fREW' Schmidt (76): add shim for non-lethal future development tear out useless unreserve hash whitespace changes Quote everything in SQL Server Turn off constraints before dropping tables in SQL Server Make true unique constraints if needed in SQL Server Create and parse FK constraints in SQLite fix repo url rename Shim to ProducerUtils for accuracy ignore vim swap files Release 0.11008 Whitespace take out duplicate docs remove commented copyright use warnings our > use vars better error messages for the SQLite parser quote SQLite identifiers factor quote method out of Generator::Utils start of hardcore refactoring use future stuff for SQL Server field generation add comments, better default handling initial SQLite Producer object use future stuff for SQLite field generation factor out some basic constraints factor out unique constraints add foreign_key_constraint add enum_constraint less accumulators more reduction migrate table to Generator::Role::DDL delete dead code rearrange pod migrate drop_table to future dead add remove_table_constraints to future less accumulators more reduction refactor table into more methods add drop_tables method add header_comment add table_comments add foreign_key_constraints add unique_constraints_multiple and indices migrate almost all code to Generator::Role::DDL better lazify things migrate duplicated code into role setting the quote accessors separately no longer makes sense Default SQLite quoting to off until we are capable of disabling it everywhere assign copyright to my new files fix list of numeric types for SQLite fix sizeless types and typemap for SQLite fix excepted and scalarref quoting for DEFAULTS in SQLite (and SQL Server) FINALLY A RELEASE! 
Add missing quote function to SQLServer producer Release 0.11012 fix a test broken by ad071409cb8f526337abbe025a63aa1e67716165 release 0.11013 release 0.11013_01 release v0.11013_02 include Moo version in a single place add missing Changes release 0.11013_03 switch to Perl Licensing Release 0.11014 Fix stupid missing version number in SQL::Translator::Schema::Object release 0.11015 release 0.11016 add missing Changes release 0.11017 Revert "Fixed autoincrement in primary keys for SQLite" release 0.011018 🎃 remove default Pg dsn Fix DROP TABLE in SQL Server Producer Test against travis add missing Changes fixup! Test against travis Ash Berlin (36): Branch for diff refactoring MAss diff changes imported from Ash's local diff-refactor branch Some work on sanely normalizing fields Some SQL_TYPE mapping stuff Fix some more normalization problems Remove breakpoints Fix test Correct constraint names in preprocess for MySQL producer Move more normalization changes to preprocess_schema Added an 'alter sequence' line to the parser grammer which will simply skip over an alter sequence statement. Fix after typo in merge Add renamed_from to tables. Start transactions in a portable manner Work round MySQL/InnoDB bug Fix BEGIN in sqlite diff test Better tests (and fix bug) in batch alter with renamed tables + constraints Merging back Diff refactor Fix warning messages Document significance of preproces_schema method Add support for COLLATE table option to MySQL parser Allow DEFAULT CHARACTER SET without '=' (as produced by mysqldump) Fix drop indexes for uniq constraints Remove breakpoint Allow quote and other option to be passed to the producers. 
Fix false-diffing due to order of table options Fix suprious diff on cols with charsets/collates (MySQL) svk-commitdCGXq.tmp Only create views for mysql on v5 and up Removed source_db and target_db accessors from Diff (throwback to old version, only output_db is used) META.yaml is generated at build time and does not beling in the repo PgSQL diff patch from wries Since MANIFEST.SKIP is in svn, this file does not belong svk-commitTn2OH.tmp Fix tests! Move XMI stuff to branches/xmi since no one has worked on them and the tests have failed for years, and they've been skipped from dists for an equally long time Added semi-colon for (DROP|CREATE) TYPE statements in the Pg producer (wreis) Ben Faga (32): Applied Eric Just's changes to the Oracle producer. His comments follow: Added Class::MakeMethods to the requirements. Muffled a warning message, by changing '">&NULL"' (which I don't really understand) to '\*NULL' which is a nice pointer to the file handle opened in the line above. Fixed small error that was causing a test to fail. Removed the @_ from the import statement, since @_ contains the path(s) to the module and not subs to import. Added a couple things to the manifest skip file. New Manifest Added META.yml to manifest Added the header to the change file. Made the changes suggested by Michael Slattery. Made the column size undef if the data type is text of blob and made some other minor changes. Changed references to on_delete_do to on_delete and on_update_do to on_update. I tried to keep it backwards compatible. Made some changes suggested by Michael Slattery to fix table level comments. Also added in field level comments. Changed the second "my $h" to just "$h" because it was throwing warnings. Removed the line that declared YAML to be the producer. This stopped the YAML warnings from ./Build test and didn't seem to affect anything else. Applied changes submitted by Paul Makepeace <sourceforge....@paulm.com>. 
Applying the patch submitted with this bug report Fixed a problem with the trigger_steps where it used to expect a string when it was getting an array. Applied patch sent in by Daniel Westermann-Clark on Oct 11 2006. Added a semicolon at the end of the create trigger definition because SQLite seems to require it. Updated version number and Change log for recent patches. Applied Hilmar's patches. Committing patches sent by Florian Helmberger. Here are his remarks: Fixed a problem where the deferrable definition required to have a "not". Changed the name of unnamed foreign key constraints to TABLENAME_fk. Allows schema-qualified table names. Added fixes submitted by Peter Rabbitson: Added an 'alter sequence' line to the parser grammer which will simply skip over an alter sequence statement. Added a select section to the parser. This is to handle function calls that Applied patch submitted by Nathan Gray Support uppercase foreign key target-columns. Sent in by Daniel Boehringer Made change suggested by Daniel Böhringer to allow "ID" integer DEFAULT nextval(('"AlleStudien_ID_seq"'::text)::regclass), to parse properly Applied patches written by Nigel Metheringham. His notes follow. Brian Cassidy (1): Normalize Changes file to CPAN::Changes::Spec Brian O'Connor (30): The Turnkey package is based on the ClassDBI package (1.35) and includes functionality needed by the Changes for Turnkey Atom class creation. Right now DBI and Atom classes seem to be created correctly with the Turnkey producer. The Turnkey producer now uses the latest head on SQLFairy. There may still be some bugs but the autogeneration seems to work OK. Fixed problem with rendering atoms in Turnkey Producer. Fixed problem with Turnkey producer where the db connection string wasn't passes to the templates while generating the DBI output. Search string in DBI output was fixed to escape @_ in template. 
Added a method based to the template to correctly reformat module names to include cap chars after a '_' in the Turnkey producer embedded templates. Added a method based to the template to correctly reformat module names to include cap chars after a '_' in the Turnkey producer embedded templates. Fixed a problem for the Turnkey producer where the atomtemplate is not fully dereferencing the hash containing the field data. Fixed a possible infinite loop problem where the primary key accessor calls itself in the Turnkey producer. An update of the Turnkey producer to output with better css support. May be slightly broken, there've been lots of changes to this file lately for both layout and the schema object. Updated the Turnkey producer to correctly read the db values in the atom object layer. removed xlink tag Updates to the Turnkey templates for atoms, tt2 templates, and xml Updated a problem with the Turnkey template producer Fixes a bug in the way the Turnkey producer's macro.tt2 output renders minor focus panels. Added dumper method. Updated heredocs to not be interpreted. Fixed bug in Data::Dumper statement Updates for dump method. Removed some character escape problems with the Turnkey producer. Modified the Turnkey producer to correcly output URLs. This method will need to be updated once we start URL link corrections Updates to the Turnkey class bindings, will need to be revised later when we switch to search on it's own handler. Changes to the Turnkey producer related to soap output, AutoDBI output, and comment syntax. Updates to include special comments for Atom and Model classes to make Correctly produces settings file for soap Turnkey components. Changes to the Turnkey producer to better support IE. 
Modifications to the Turnkey producer to support file split in the Turnkey install process I needed to be able to pass in additional information to the templates (beyond the schema object) CVS2SVN (1): This commit was manufactured by cvs2svn to create branch 'darren-1_0'. Cedric Carree (5): Create unit test for ::Parser::DBI::PostgreSQL, fix parser namespace lookup Patch to get correct SQL data types from Postgres Fix index issue in Parser::DBI::PostgreSQL get Postgres table and column descriptions SQLT::Parser::PostgreSQL parses table def with default values Chris Hilton (82): A DBI Parser for SQL Server, mostly copied from DBI/Sybase A Parser for SQL Server, mostly copied from Sybase parser and geared toward working with SQLServer Producer output Added mapping of ODBC connections to DBI-SQLServer parser Added comments lines to test data forworking with new YAML Producer output Added collate and 'on update current_timestamp' field qualifiers Added character set, on update, and collate field qualifiers to field definition output Added to fields that do not require size argument Additional tests for new parser capabilities Added Data::Compare module to requirements list Added equals function for base equality testing Added equals function for equality testing Added case insensitivity option to get_table() Added additional tests for sqlt-diff A whole lot of changes, but major additions include adding diffs for table options and constraints Modified _compare_objects() to use ref_compare from Class::MakeMethods::Utility::Ref instead of Data::Compare. It seems more reliable and comes in a package we already require. 
Removed Data::Compare from requirements list Modfied fields() to return empty array when there are no fields Modified equals() to get scalar references to fields for comparison Fixed bug in equals() to exhaustively search sets of indices and constraints Fixed bug to exhaustively search for equal constraints and indices Slight adjustment to parsing of identity/auto-increment field qualifier Made sure defined scalar references are passed to _compare_objects(). Removed is_valid comparisons from equals(). Modified test text to reflect that fields() now return empty strings instead of undef when there are no fields If a DSN is defined but no 'from' db type given, we infer the 'from' type is DBI Re-arranged to put null/not null before default value; looks and parses better Removed '#' and '--' comments from being included in table comments; they aren't part of the schema and hose the schema diffing Removed auto_increment from a field's extra attribute; it's already saved in is_auto_increment and this prevents it unnecessarily showing up in the field's comment when producing MySQL Fixed case-insensitivity matching for SQL Server and field names It helps if you use the correct version of the case-insensitive field name you're trying to return Removing foreign key check from field equality; foreign key inequality is better caught with the constraint check Modified "create anything else" rule to not include table and index creates so any syntax errors in those are properly caught Added negative sign possibility to default number values. 
Use the overload::StrVal method to check if references are the same instead of the possibly overloaded stringify Modified equals() to include type check, stringify field checks, and more case insensitivity when required Modified equals() to include case insensitive data type checking and removed is_unique check (better caught by a constraint check) Modified equals() to include more case insensitivity when required Modified equals() to stringify field checks and use more case insensitivity when required Added some constraint tests Various tweaks to produce Oracle diffs Adding non-zero exit status if differences were found Parser changes to accomodate function-based indexes and optional size modifier for data type Fixed preservation of function call as field in function-based indices Minor refactor to avoid uninitialized value warning Fix field default value output bug I introduced Added 'default null' parsing Added ON DELETE and ON UPDATE clauses to FK output Fixed up ON DELETE parsing for FKs Removed conflict markers Added tests to catch Oracle problem with parser being reused Modified parse() to instantiate a separate RecDescent parser like other Parsers do Ignore Oracle system constraints on source DB schema, also Get SQLT version dynamically Remove row requirement for returning indices Added MySQL 5.0 DELIMITER functionality Added test for delimiter functionality and multiple comments on a line Fixed passing references to GetOptions Added comment to diff output if target database is not one of the few, the proud Added linefeeds so beginning of create view/procedure statements don't end up commented out Slight change to comment parsing to allow asterisks in comments Tweak previous comment parsing tweak for multiple lines Changed Constraint.equals() to ignore ordering of fields and reference fields Added options to sqlt-diff to ignore index and/or constraint name differences Don't test is_valid in equals() Skip constraint and index comparison shortcuts when 
ignoring the relevant names Corrected index parsing on Oracle so every index is not a unique constraint Added output-db option Made fields check in equals() ignore ordering Added output_db option Helps to actually pass the output-db option Added GO to output after views and procedures Added cursory parsing of procedures, functions, and views (oh my!) Removed mistakenly checked in debug code Hacks for comparing sql attribute in equals() to ignore unimportant differences Added diff comparison of views and procedures Small change to ordering of view and procedure diffs Removed unused rules procedure_body and view_body Added cursory parsing of view, functions, and procedures (oh why!) Added mysql-parser-version command-line option Added cursory parsing of views and procedures Added options for ignoring the differences in SQL for views and procedures Added command-line option for MySQL parser version arg Ignore views and procedures with no text (weeds out exatraneous results from SQL Server 2005) Chris Mungall (4): *** empty log message *** fixed parsing of Pg COMMENT ON ... 
syntax throws error if a comment is placed on a non-existent column new: makes tables in latex format Colin Newell (1): Simple change to make Postgres simple array types produce correctly Dagfinn Ilmari Mannsåker (130): Ignore editor droppings Mooify SQLT::Schema Mooify SQLT::Schema::Index Use 'isa' checks for attribute validation Mooify SQLT::Schema::Table Mooify SQLT::Schema::Procedure Mooify SQLT::Schema::View Mooify SQLT::Schema::Trigger Mooify SQLT::Schema::Constraint Mooify SQLT::Schema::Field Remove unused base class Filter undef from all constructor args Remove unused variables and module import Remove pointless DESTROY methods Use weak refs for schema object attributes Rename non-schema-specific roles to SQLT::Role::Foo Reinstate schema object base class Make SQLT::Role::Error internals closer to Class::Base Fix broken reset attempt Mooify SQL::Translator Factor list attributes into variant role Use quote_sub for trivial defaults Check Moo version at runtime Use quote_sub for trivial coercions Wrap some over-log has statements Document new roles, types and utility functions Add enum type Add Changes entries for mooification and leak fix Carp instead of dying if arguments are passed to read-only accessors Add carping wrapper to SQL::Translator->schema as well Make read-only SQLT::Schema::Table attributes carp instead of die Add missing List::MoreUtils dep Allow passing an arrayref to SQLT->filename Fix POD wording Use the API to access extra attributes Clean up properly after Parser::DBI::PostgreSQL tests Use accessor for table options in MySQL producer Don't reimplement perl's built-in default behaviour Fix typos in error messages Document append behaviour of options setters Merge branch 'patch-1' of Fix typo Remove documentation for field moved to role Add missing attribute name in header Fix typo in synopsis Fix incorrect module names in conditional plan Add myself to AUTHORS Fix confusingly erroneous variable name Fix Pg DBI parser test Add .mailmap 
entry for Ash Berlin Stop Makefile.PL hanging at prompts under Travis Install author-mode configure reqs in Travis Install DBI as well in Travis Install optional test deps in Travis Don't install Text::RecordParser, it's broken Add Changes entries for 6c77378 and 1fb4f40 Fix handling of views in MySQL DBI parser Install XML::Parser in Travis, it's used by t/05bgep-re.t Test Parser::DBI::PostgreSQL in Travis Set all the env variables in one "env" entry Switch t/13schema.t to done_testing Test Schema::Table->is_trivial_link and ->is_data Install Graph::Directed and GD to test Producer::Diagram Output the build log on cpanm failure Install the libgd development package Ignore Devel::Cover output Use a schema with FKs for diagram testing Fix and extend link table tests Add 'use warnings', test case-insensitive ->get_field Switch to @haarg's perl-travis-helper, add 5.20 and blead Use my fork of the helpers, for debugging Die instead of warning if roundtrip regen fails Actually install Devel::Cover Propagate switches when running perl in tests Add Changes entry for SQLite diff fix Add Changes entry for numeric field default fix Clean up Pg batch alter test Fix POD syntax for PODed-out code Document producer_args in SQL::Translator::Diff Fix argument documentation for preprocess_schema Factor out calling of normal diff-production functions Add Changes entry for Pg diff fix [merge] Batch alter support for Pg and refactoring release 0.11019 Fix test failure if Test::PostgreSQL is installed but not working release 0.11020 Skip HTML tests if CGI is not installed (RT#98027) Fix broken POD links found by App::PodLinkChecker Fix undef warnings from Text::ParseWords when running tests with -w Add IRC metadata and update repository and bugtracker URLs Update help/support and contributing POD section Fix JSON and YAML tests if the defaults have been tweaked (RT#98824) Switch back to upstream Travis helpers Install YAML and XML::LibXML before build-dist Install YAML and XML::LibXML 
in the perl used for testing too Switch back to my travis helpers fork Factor out quote option handling Clean up option parsing and identifier quoting in Producer::PostgreSQL Clean up option parsing and fix identifier quoting in Producer::MySQL Fix handling of quoted identifiers and strings in Parser::SQLite Fix handling of quoted identifiers and strings in Parser::MySQL Fix handling of quoted identifiers and strings in Parser::PostgreSQL Fix handling of quoted identifiers and strings in Parser::SQLServer Fix handling of quoted identifiers and strings in Parser::Oracle Escape the closing quote character when quoting indentifiers Test table and field names with quote characters in them Escape quotes in string values in producers Test round-tripping default values with quotes and backslashes Test round-tripping decimal default values Test quotes in field comments Quote table names in 'SHOW CREATE TABLE' in Parser::DBI::MySQL Add Changes entry for quoting fixes Merge branch 'quoting-fixes' Handle ALTER TABLE ... 
ADD CONSTRAINT in Parser::Oracle Add support for triggers in Parser::Oracle Narrow the scope of Oracle roundtrip TODO Merge branch 'oracle-fixes' Call close as a function rather than a method Remove executable bit from Parser/JSON.pm (RT#100532) Update the Free Software Foundation's address (RT#100531) Fix quoting of trigger name and table in Producer::PostgreSQL Remove redundant entries from Producer::PostrgeSQL's type mapping Fix clob type translation in Producer::PostgreSQL Translate MS Access memo type to text in Producer::PostgreSQL Fix SQLite diffing on perl 5.8.1 Add Changes entry for 5.8.1 SQLite diffing fix Fix multi-column indexes in Parser::DBI::PostgreSQL Switch back to upstream travis-perl-helpers Fix array types and multidimensional sizes in Parser::PostgreSQL release 0.11021

Daniel Ruoso (14):
waiting for patch to be applied implements options in oracle indexes Fix weird bug, caused by a double evaluation in ternary if producer_args->{delay_constraints} can be used to add primary keys later Document delay_constraints producer_args option Define a name for pk constraint when delay_constraints is on Declares the new tests in the MANIFEST, remove useless warnings Test for oracle alter_field alter_field implemented. alter_field test Pass. Including new test and test data into MANIFEST Fix ORA-01442: column to be modified to NOT NULL is already NOT NULL Small fix in delay_constraints (missing ;). Adding tests for Oracle->add_field Implemented add_field, only the field is added, nothing more for now.

Darren Chamberlain (147):
Many, many changes. Changed the basic assumptions about the module. Reverted to a version 1.1, due to botched branch attempt. Another attempt to check in a branch. Updated to work with my updated API. Added test data in groovy hierarchical directories. Added copyright notices to top of files. Added MANIFEST, MANIFEST.skip, and Makefile.PL Added files. Added note of a bug Changed many assumptions about the test.
Updated docs, especially detailed internal API docs. Changelog file. Automatically generated by cvs2cl.pl Changed some of the basic assumptions. This was a synmail test. syncmail test. Automatically generated by cvs2cl.pl Added Pod::Usage as a prerequisite Modified POD to include a complex description of the format of the data structure returned by parse. Changed name of translate method to produce, to be consistant with Producer API. Added __END__ token. Updated $VERSION to be CPAN-compliant. Broke the 1 test out into 11 different tests, each one of which tests a specific part of the data structure returned by parse. Turned off SQL::Translator::DEBUG. Removed warns and debugging, so this test will actually pass when run as part of make test. Removed comment lines (the parser chokes on these, I think). Added some basic files, removed unused data file (the contents were moved into the test that used the data). Updated filelist Re-added Accidentally PREREQ_PM'ed XML::Writer instead fo XML::Dumper Test changes Removed in anticipation of a merge. Merged changes from darren-1_0 tag into HEAD branch (hopefully!). Automatically generated by cvs2cl.pl Added 'order' to data structure description Added test structure. Added extra files to MANIFEST.skip. Updated MANIFEST. Added CSV parser and a test. Added support for producer_args and parser_args. Added MySQL producer (still in a pretty alpha stage, only barely functional). Added generation of PRIMARY KEY and KEY clauses to CREATE statements. Fixed some typos, added some basic re-logicing (is that even a word?) Shitload of changes. Still passes all tests, such as they are. Subclasses Class::Base. Removed error_out, error, in favor Class::Base::error. Changed error_out usage to error Added list_parsers and list_producers methods, in response to <Pine.LNX.4.44.0211211124100.4042-100000@localhost.localdomain> Added some comments (comments?) Updated an example to make it happier. 
More generic clean macro Moved MANIFEST.skip to MANIFEST.SKIP MANIFEST.SKIP takes a regex, not a list. Some of the .pm files weren't in the MANIFESt. Documentation fixes; added Chris' name to copyright notice; updated copyright year. Added SQL::Translator::Producer::Raw to MANIFEST. Removed extra unused junk. Set $DEBUG to 0 by default. Moved all POD to the end of the module, to make it easier Did you forget what year it is, Ken? Added new files to MANIFEST. Added t/08postgres-to-mysql.t Some doc changes; added Allen to AUTHORS section o Added bin/auto-dia.pl to scripts list A README, which is required by CPAN. Added Utils package with debug method, shared between MySQL and SQLite producers. Added SQLite producer and Utils. - load now sets $ERROR on failure. Hey, this could never have worked as advertised. *blush*. Why make these globals? And again, with the globals. Yeesh. Added stub test. Added missing stuff. Remember folks, anything not in MANIFEST will not be part of a distribution! Added Spreadsheet::ParseExcel Moving tests to Test::More Updated README to reflect changes to SQL/Translator.pm POD. Let's check before we assume this is a ref, eh? Added normalize_name function, which normalizes names. Primarily needed by the Excel parser. Attempt to be more robust.lib/SQL/Translator/Validator.pm Forgot to add this yesterday. Added header_comment function; see docs for details. Added refactored comment producing using header_comment. Test file for header_comment function from SQL::Translator::Utils. Added Schema and some more dependencies Doc changes (use C<> instead of B<>) Updated XML test. Uses XML::Writer instead of aggregate() and a global. Updated README via perldoc -t lib/SQL/Translator.pm Trim whitespace from arrayref elements as well as array elements; see <pine.lnx.4.50.0305121004300.32235-100...@oakhill.homeip.net>. Slightly more paranoid version of parse_list_arg -- check length as well as definedness. Initial import of Jason Williams' SQL Fairy logo. 
The $DEFAULT_SUB was still looking at $_[1], and not $_[0]->schema Added "use Class::Base;" declaration to go with use base statement. Some simple cleanups. .cvsignore Oops, haven't added these yet. Addressed Ken's concerns -- versions remain .02f floats. Made Test::Differences mandatory, since tests fail without it. Added the new files Added some email addresses. Typo alert! PODified smelto's email address. Fixed pod oopsie Updated MANIFEST. Forgot to add my new test, and made the use of CGI vs. CGI::Pretty more consistent. Typo fix. Tabs suck! Die 0x09, die! Removed definedness check for file-based data, for the DBI-based parsers, which will not pass a data or file element. SQL::Translator::Shell - and interactive Schema browser. This is a placeholder commit. Command-line wrapper for SQL::Translator::Shell. Basic test for SQL::Translator::Shell Added YAML parser, producer, and basic test. All need more work! Added YAML stuff A better YAML producer, actually using the YAML module. Updated test Work in filter mode -- accept data on standard input. Context fix. Rar. Made $0 a little nicer. Added index view. Dunno if it works. :( Slightly more verbose error reporting. Small pod fix. Almost all elements have consistently named classes, rather than HTML producer test. Uses HTML::Parser. Sorted. Added a few missing items. Updated YAML test. Brought the test up to date with some recent changes. Added a libscan to supplement the default libscan. sqlt-diff test (very basic) Bring tests to passing. Added explicit unlink of the temp file; it wasn't always being deleted. quotemeta for directory names. See. Stifle undefined variable warnings. Fixed 'useless use of constant in void context' warning. Robustify the tests a little. More TODO items. Reverse order of the Changes file, so that recent Changes are at the top. More TODOs Remove all those subroutine redefined warnings. Just the fairy. Integrate Dave Cash's changes. Added blibdirs, which recent MakeMakers create. 
Make SIGN conditional based on MakeMaker version. POD update. Will someone just frickin' fix XML::Writer already? Added maybe_plan function. Update tests to use maybe_plan. Removed 01test.t Readding, at kyc's request. DOAP for SQL::Translator Original artwor, courtesy of Jason Williams Moved to htdocs module Removed doap.rdf from MANIFEST, since I removed it from CVS a few days earlier. Merging patch from ddascalescu+p...@gmail.com, for. I've not tested it. Revert last change becuase it broke a bunch of tests.

David Steinbrunner (28):
typo fix typo fix typo fix typo fix typo fixes typo fix typo fixes typo fix typo fixes typo fix typo fix typo fix typo fix typo fix typo fix typo fix typo fix typo fixes typo fix typo fixes typo fix typo fix typo fixes typo fixes more typo fixes typo fixes typo fixes typo fixes

Devin Austin (1):
added kaitlyn's patch for mysql->sqlite translation

Earl Cahill (3):
initial adds for the oracle dbi parser and a simple test to make sure the use works adding in Oracle driver support and making the DRIVER keys alphabetical :) -O::Is, accidentally committed some debug, thanks to Darren Chamberlain [d...@sevenroot.org] for pointing it out

Fabien Wernli (12):
test I'm not committing to trunk instead initial things don't look at this yet text -> varchar2(4000) instead of clob, get rid of reserved keywords, import oracle_version from DBD::Oracle missed some quotes and added tests to avoid that missed some quotes and added tests to avoid that double and float thou shall now be float fix committed tests text back to clob I'm not to be trusted with the chainsaw add to changes float doesn't need to be forced to 126 either

Fabrice Gabolde (1):
Support for SET NULL, SET DEFAULT and NO ACTION in foreign key clauses for SQLite.
Florian Ragwitz (1):
PostGIS functions are case-sensitive

Gavin Shelley (1):
TTSchema doc fixes

Geistteufel (2):
quote reference_table add quoted reference to check if the table name contain a full declaration, it quote it properly

Gudmundur A. Thorisson (1):
Changed max_id_length to 62, from previous 30, since PostgreSQL now allows

Jaime Soriano (2):
Bit size can range from 1 to 64, test added for size greater than one sqlt-diff arguments parsing reimplemented using Getopt

Jaime Soriano Pastor (6):
Default bits and double quoted strings are parsed now MySQL parsing fails if a table is defined more than once in the same file, if not, indices are messed up Names accepted (and ignored) as types of primary keys in create tables Name of unique keys are not written if empty Integer default sizes are one point smaller if they are unsigned sqlt-diff option to quote names

Jess Robinson (86):
New column_info definition, correct nullable Add Triggers DB2 P::RD grammar DB2.pm the first Drop functions produce DB2 tables Initial devision of sqlt-diff: Most of the functionality into a module Use DB2 Parser DB2 producer tests More DB2 producing Producers can now return individual statements as s list, if wantarray set Added quote-field-names, quote-table-names options Recognise & skip DROP statements when parsing Document producer methods Produce either a list of statements or a string Update Trigger to insist on a valid table for on_table Add tests for new alter_field, add_field, drop_field methods Typo fixing SQLite and YAML tests broken, fixed Allow skipped insert statements and trigger bodies to contain quoted semi-colons Add test to prove insert statements with quoted semi-colons can still be skipped Fix mysql_table_type .. (where did this go, or why was it not there?)
Improvements to MySQL producers foreign key and comment handling Lots of Pg Producer tests, some other fixes Splitting of MySQL, Postgres and SQLite producers into sub methods Move Template into reccommends for now Changes for 0.08_01 Build files for 0.08_01 Add timestamp tests, make postgres produce timestamp(0) if asked Add quoting support to the mysql producer, thanks ash! Fixed up tests to accommodate new DB2 producer changes and sqlite tests to not test for comments. Default auto-inc fields to start at one, and increase by one Support Pg timestamp type followed by an optional size, which denotes the type of timestamp Fix typo, no_comments arg wasnt getting passed. Thanks Penguin! Also collect function objects. Thanks Gordon! Make TTSchema work with TT 2.15, somehow Remove strange manifest entry Add default timestamp support to Postgres producer Make the DROP commands a separate item in arrary context (mysql producer) Fixed tests for mysql producer changes Upped version to _03 Update TODO file Cascade drop pg tables to overcome key constraints when dropping Update changes, oops Patched mysql producer to name constraints sanely Fix bug in YAML test Fix to not test while TT is broken Ignore all TT test while TT is broken 0.0899_01 diffing fixes Only run test if Graph::Directed installed Update oracle producer, patch from plu. 
Fix syntax for index dropping in sqlite producer Oops, fix tests for fixed syntax Update changes/version for 0.0899_02 Add DBI dep (oops) Update changes with latest release date 0.09000 not 0.0900 (cpan gets confused if numbers go downwards) Added support for proper enums under pg (as of 8.3), with pg version check, and deferrable constraints Updated change file Add support for proper boolean fields in the mysql producer, as of v4.x Applied patch from Ryan to uniqify index names sanely for the mysql producer Make Schema::Graph only load if "as_graph" is called on a Schema object Patch from ribasushi: Correctly graph self-referential constraints Added patch from groditi adding SET type support to the mysql producer Add views to mysql producer, thanks groditi Added patch from groditi to support views in sqlite Added patch from wreis, view support for pg producer Add target_db to diff test as it was producing warnings.. Enormous patch from Peter Rabbitson making mysql version parsing saner and adding tests for it. Updated authors/Changes Update mysql producer test to saner field names, Peter R. Pg views and sqlite views, patch from wreis By royal decree, produced statements in list context shall not end in a semi-colon, or any newlines! (They may contain newlines) Skip on newer Spreadsheet::ParseExcel lukes' patch: drop if exists under sqlite 3.3+ Now supporting scalar refs as default values! 
(rjbs) Missed file from default-value-improvements commit Patches for/with jgoulah: Default sqlite_version so we dont get uninitialised errors when calling from ::Diff Patch from rbo to support multiple database events per trigger Patch from jgoulah for mysqls UNION (merge engine) option Support for temporary tables in Pg, from nachos Changes + Reverts for 0.11000, see Changes file for info Released 0.11007 Update to version 0.011009 Add geiststeufel to the AUTHORs list

Johannes Plunien (17):
delayed adding semicolon in oracle producer using unreserved table name for drop statements in oracle producer ensure to not exceed max allowed size for oracle data types add semicolon to CREATE TRIGGER after END which i have removed before accidentally using unreserved table name for FK alter statements in oracle producer updated Changes triggers may NOT end with a semicolon If wantarray is set we have to omit the last "/" in this statement so it can be executed by DBI->do() directly. added notes about changed behaviour when calling oracle producer in array/scalar context Skip tests for buggy Spreadsheet::ParseExcel versions (rbo) fixed 51-xml-to-oracle.t fixed *old regex, added *tar.gz regex in MANIFEST.SKIP removed semicolon from CREATE VIEW in oracle producer added release-date to Changes Translate bytea to BLOB in MySQL producer. This fixes a few tests of DBIC if you run the full test-suite against MySQL instead of SQLite New method to clear out extra attributes MySQL producer skips length attribute for columns which do not support that attribute.
Currently following column types are added to that list: date time timestamp datetime year

John Goulah (6):
add support for a skip option to the parser added ignore_opts parser arg to ignore table options add param to _apply_default_value so that certain values can output without quotes needed to tighten up regex added in last commit reverting r1413 and r1414 in favor of passing a scalar ref to parser which the producer outputs correctly without quotes added parser support for MySQL default values with a single quote

John Napiorkowski (1):
fix for when we are adding /dropping columns in sqlite and need to roundtrip via a temp table

Jon Jensen (1):
Add JSON parser and producer modules

Jonathan C. Otsuka (4):
Adding DECIMAL_DIGITS to SQLServer size field for scale info: Revert "Adding DECIMAL_DIGITS to SQLServer size field for scale info:" Quote table_name to fix tables using reserve words as their name Add DECIMAL_DIGITS to SQLServer size field for scale info

Jonathan Otsuka (1):
Honor supplied field order when adding fields to a table object

Jonathan Yu (21):
- Updated GraphViz producer module per the modifications discussed on the sqlfairy-developers mailing list () - Minor documentation changes. Namely, noted that the index types are stored internally as uppercase; this is the only way to ensure the Producer modules still work properly. Since Oracle understands a double precision floating point type, I added "double" to the ora_data_type file. - Added a show_index_name parameter which determines whether index names should be shown. If false, then the module will just print a list of tuples of indexed fields. - Fixes a bug where _apply_default_value is not found (due to it being part of SQL::Translator::Producer); does this by simply making this PostgreSQL producer a subclass of the SQLT::Producer base.
- Add support for 'extended' friendly ints (the nonstandard extensions provided by MySQL) - Fixed POD producer - Added some stuff to MANIFEST.SKIP - Some minor cosmetic changes - Added myself to the AUTHORS file - Added myself to the SQL::Translator file as an author - Merged some changes by myself, and also from ribasushi - Updated copyright, added myself to contributors list Added a Module::Build::Compat 'passthrough' Makefile.PL - Removed use of $Revision$ SVN keyword to generate VERSION variables; now sub-modules are unversioned. Changed show_index_name to show_index_names to make it better match the other options Reverted VERSION so it was no longer dependent on $Revision$, the semantics of which are different under SVN Applied patch to switch dependency on XML::XPath to XML::LibXML (Closes: RT#32130) fix a bunch of spelling errors, and add whatis entries (these are applying patches from Debian) Remove copyright headers from individual scripts Revert my previous changes (rev 1722 reverted back to rev 1721)

Justin Hunter (1):
address some issues with Pg producer and quoting

Ken Youens-Clark (714):
Initial checkin. Fixed a lot of little things in modules, docs, etc. Bugs in sql_translator.pl. Added PostgreSQL producer. Rolled in Darren's new list_[producers|parsers], lots of cosmetic changes, Fixed spelling of "indices" in various files, finished adding all of Tim Added "show_warnings" and "add_drop_table" options to sql_translator.pl and Added a rule to MySQL parser to disregard "DROP...;" statements, filled out Fixed a bug in Oracle producer that allowed for identifiers longer than the Fixed problem with truncating an identifier when it was exactly the Fixed bug where it was truncating table name needlessly. Added "Raw" to be able to get to raw parser output. Added "auto-dia.pl" script for generating ER diagrams. Added fulltext index. Made it better. Lots o' bug fixes. Added "join-pk-only" option. A working PG parser! Mods to handles FK references.
Added some documentation to PG and MySQL; the "eofile" rule to MySQL. Added production to field rule to handle embedded comments. Deleted "index" rules, allowed fore and aft comments in fields and Added more rule (alter table) to be able handle Chado schema. Handle "on [delete|update] action" differently Some minor mods to POD. Added SQLite producer, basic knock-off of MySQL producer, made some mods Added code to kill field qualifiers in index field declarations. Added font options, made default font size 'small' instead of 'tiny.' Added mark for unique constraint and legend to explain extra markings. Added color option. Added options for natual joins only, made code work with proper FK Got foreign key references basically working now. Added grammar for "REFERENCES" (foreign keys). Shortened "natural-join-fk-only" option to "natural-join-fk," Adding "auto-graph.pl" to automatically create graphs (via GraphViz) from Some syntax fixes, package name was wrong, added Mikey's name to AUTHORS. Hey, new Oracle parser! Small fix. Added Oracle parser to MANIFEST. Fixes to help with Oracle data types, also fixes with table constraints. Fixed error for: "Use of uninitialized value in string eq at Moved all the real code into a module so this script now just uses the new Minor cosmetic changes. Adding new GraphViz producer. Cosmetic changes to keep the coding style consistent. Moved most of the code into a new "Diagram" producer. Adding new ER diagramming producer. Fixed bug (illegal div by 0) if "no_columns" wasn't numeric, also fixed Adding new CGI script front-end for GraphViz and Diagram producers. Added defaults to arguments. Added new files. Fixed error that was preventing MySQL parser from working with Adding new schema test, commiting fixes to MySQL parser test. Adding new objects for handing schema data. Not being used while I work More changes to getting the schema to a working state. Not much to say ... just trying to get this working. 
Using some of the rules from the PG grammar to make mine better, cleaned Trying to add tests as I write methods, so lots of new stuff to mirror Adding a new PG parser test. Added the requirement of Parse::RecDescent 1.94 or later, added Fixed an error in default value regex that disallowed a value like "00:18:00". For some reason, "t.pl" was still in there. Fixed error '"my" variable $wb_count masks earlier declaration in same scope at blib/lib/SQL/Translator/Parser/Excel.pm line 68.' Fixed error 'Use of uninitialized value in string eq at blib/lib/SQL/Translator/Producer/MySQL.pm line 164.' Fixed error 'Use of uninitialized value in repeat (x) at blib/lib/SQL/Translator/Producer/XML.pm line 110.' "size" of a field needs to be an arrayref as it could be two numbers (e.g., Changed to use Test::More, cleaned up syntax. Still pretty useless. Fixed error 'Use of uninitialized value in pattern match (m//) at blib/lib/SQL/Translator/Schema/Field.pm line 144.' Too many changes to mention. Too many changes. More changes to keep up with code. Added mods to pass parser_args for xSV parser. Updated to use Text::RecordParser and added scanning of fields, more Updated tests to match new code. Added Text::RecordParser 0.02 pre-req. Minor fixes to primary_key method. *** empty log message *** Passing schema object now as third argument. Made "order" a property of the table and view objects, use a Schwatzian Added oft-used "parse_list_arg" sub for Schema classes. Playing around with new schema object. Added passing of schema arg. Removed warning. Added "match_type," use "parse_list_arg," added DESTROY. Playing with constants. Added use of "parse_list_arg," changed "nullable" method to "is_nullable" Use "parse_list_args," added "options" (still vague on this), set a default Use "parse_list_arg," put field order into field object, added "order" Use "parse_list_args," added "fields" method, changed validation, break Lots of changes to reflect library mods. 
Lots of changes to fix merge. Modified to call translator to get schema rather than passing. Don't pass schema, let others call for it. Since "true" is the default for trimming and scanning fields for the xSV Change to avoid warning of "use of unitialized value." Added Sybase producer to MANIFEST. Cosmetic change in POD. Addressed a few style issues to make it like the other producers (use Changed grammar to bring it more inline with the official MySQL YACC Added more tests. Fixed bug with initialization. Added default field sizes for numeric fields if not specified, removed More work on default field sizes for numerics. Added a few more tests. Added rules to catch common (but useless) statements. Added ability to manipulate height, width, and whether to show the field Added options for height, width, and showing field names. Added better options for accepting height and width, changed default node Changes to grammar to clean up, moved primary key defs and unique keys Fixed parsing of field size for float values. Moved some code around to get methods in alphabetical order. Moved some code around to fix ordering, convert "type" to match what's Added parsing of default value on init, added "extra" method for misc field Moved some code around, fixed some POD, added checking of existing Fixed up some POD. Changed tests to use the Schema objects instead of looking at the data Added tests for $field->extra. Changed tests to use Schema objects instead of data structure, added more The test schema actually had incorrect syntax, so I fixed that; changed Now that the PG parser is using the Schema object, a previously uncaught Changed $table->primary_key not to return an error if there is no PK, Added a lot more tests, now using the Schema object. Added new Oracle parser test. 
Added a better quote; quit putting FKs at field level (only at table); fixed Quit putting PK defs as indices, cosmetic changes to grammar, remove quotes Cleaned up code (mostly cosmetic), added normalization of column name, Removed Sybase parser because it's complete broken. When this works, we can Changed "FULLTEXT." Changed constant to a hash to avoid silly Perl errors about it being Changed constant to a hash to avoid silly Perl errors about it being Added parsing of comments on init, added "comments" method. Added comments method and parsing on init. Added sorting of tables, other cosmetic changes. Fixed test numbers, removed unnecessary code. Fixed test numbers. General clean up to make it more like other tests. General clean up to make it more like the others. Added "use strict;" General mods to make it like others. Added Oracle parser test. Removed unnecessary backslash-escapes of single quotes, reformatted spacing Changed to use schema objects. Cleaned up "translate" hash a bit, changed to use schema objects now, Changed to use schema API. General cleanup, changed to use schema API. Expanded "translate" hash, changed to use schema API. Some cosmetic changes, changed to use schema API. Removing "Raw" producer as it's unnecessary now. Removed "Raw" producer. Small change to comment. Small changes to comments and size methods. Added "alter" to be able to parse output of Oracle producer, other small changes. Minor change to affect context. Removed debugging warning. Added rule to catch a default value given just as "null." Added "is_unique" method to determine if a field has a UNIQUE index. Added "make_natural_joins." Some bug fixen. Changed to use schema, refactored duplicated code (also in GraphViz) up Changed to use schema API. A POD producer. Minor changes. Adding new HTML producer. Mostly cosmetic changes (Allen -- no tabs, indent = 4 spaces!), got rid of Added "is_valid" tests. 
These tests relied on now deprecated action that the raw data structure Added validation code. Modified all filed to quit returning the data structure, now only return "1" Modified producers to quite looking for the data structure to be sent as Added HTML and POD producers. Updated TODO. Removed fixed bugs, either need to verify other bugs exist (and fix) or Upped the version in anticipation of making a new release soon, removed Altered POD description. Changed getting of version from main module, added exe file. Removed some things that don't actually work. Removed Validator class as validation is now in the Schema object. Renamed auto-dia.pl to sqlt-diagram.pl Renamed auto-graph.pl to sqlt-graph.pl Renamed auto-viv.cgi to sql_translator.cgi Fixed script name in docs. Fixed script name in POD. Removed this file as it uses the Validator which has been removed. Fixed EXE_FILES filenames, decided to removed CGI script. Added some ideas for 0.03. Added LICENSE. All the copyright notices say how the user should have received a copy of I was going to move the "format_*_name" methods to the ClassDBI producer, Created a more generic README for the project. Regenerated Chagnes using cvs2cl.pl Renamed 09auto-dia.t to 09sqlt-diagram.t to match the move in script filenames Fixed MANIFEST to match change in filename. Added INSERT and UPDATE placeholders to get parser to not barf on those. Nothing really changed. Updated README. Decided against using cvs2cl.pl as it's way too verbose, using a simpler Added SEE ALSO to send people to SF site. Cleaning up the project description. Trying to get everything "on message." The grammar bothered me. Added more TODO items. Fixed VERSION strings. Fixed VERSION string. Fixed VERSION. Added single quotes around the "use base 'foo';" line. Fixed grammar for REVOKE and GRANT (missing word "table"). Allow data types which haven't been listed in translation table to pass Adding dumper creator. Changed "hasa" to "has_a." 
      Added more description; allow options for db user/pass/dsn, skipping certain
      Allow translation from parsers other than MySQL, Pg, and Oracle and just
      Attempting clean up something.
      Fixed a bug-fix of mine that was contingent on my checking in another module.
      Added "()" to character class of "default_val" rule to allow "now()" to
      Removed redeclaration of "$parser_type," fixed bug in DSN, removed "s" on
      Minor change.
      I've tried to address Allen's concerns about naming of FKs, but still haven't
      Efforts to re-enable Allen's many-to-many linktable code. I have no idea
      Reversed earlier change to VERSION after Darren scolded me.
      Fixed VERSION.
      Fixed VERSION to be CPAN-friendly.
      Strip field size qualifiers from index fields as SQLite doesn't like.
      Made "pg_data_type" rules case-insensitive per a patch from Richard
      Added options to make an image map.
      Added a "table" section at the top to click right to a particular table.
      Print out field comments using Oracle "comment on field" syntax.
      Allow embedded comments a la the PG parser, store the comments; also strip
      Added options for specifying image map.
      Added "character set" as field qualifier as this is part of MySQL 4 output.
      Applying (spirit of) patch from RT making keyword "table" optional in
      New MySQL 4 syntax allows field names to be in backticks (and this is the
      Added line to disable checking of FKs on import of tables.
      Reversed the arrowheads.
      Reversed arrowheads.
      Allow more producers than just the two graphical.
      FK defs were leaving out the field name.
      Added dependencies from new XML modules.
      Changed to use new "SqlfXML" producer.
      Fixing bugs as reported by S. Quinney in RT ticket.
      Added "set" rule, remove extra space after comment character.
      Changed to look at different constraints as NOT NULLs are now allowed as
      Changes to make case-insensitive rules, fixed spelling error of "deferrable,"
      Fixed spelling error of "deferrable."
      Added logic to ensure the PK fields are not nullable (thanks to S. Quinney).
      Ran through perltidy, broke long lines, changed logic on setting of
      More cosmetic changes to make comments pretty, added ability to set
      Added "reset" method, check for existing Schema before called parser again.
      Fixed bug where "layout" wasn't being passed to producer.
      Adding acceptance of a "title" arg.
      Changed default value rule slightly to allow the empty string.
      Added "integer" conversion.
      Adding a few rules so that "make clean" isn't necessary before "make manifest."
      Fixed bug in timestamp trigger syntax.
      Expanded default value rule to allow a bare word like "current_timestamp,"
      Use of "map," emit warning when changing PK from CLOB to VARCHAR2.
      Minor cosmetic changes.
      Breaking a long line.
      Adding AUTHORS file per suggestion from Darren.
      Removed Sybase parser from MANIFEST, update TODO.
      Update of README.
      Added Paul and Jason, more e-mail addresses.
      Adding some Sybase schema to parse, though it's from a pretty old version
      Adding Sybase parser test.
      Replacing contents with Paul's output from dbschema.pl 2.4.2.
      Added setting of field size for *text fields. Did I get them right?
      Changed "tinytext" to convert to "varchar2," undef field size when the
      Added code to print out table comments.
      Added code to catch table comments (simply assumes that all comments before
      Almost functional.
      Mostly functional.
      Added Sybase parser back in.
      Added test number.
      Working changes I made to 1.31 back in: Many cosmetic changes to make
      Some changes to "comments" method.
      Some minor bug fixen, additions.
      All cosmetic changes while I was looking to find "uninitialized values,"
      Making sure that all vars are initialized to get rid of silly warnings.
      Fixed bug where the "is_auto_increment" flag wasn't being set for serial
      Added test to make sure that serial field has 1 for "is_auto_increment."
      Changed test to match what "comments" now returns.
      Changed test to match what's returned for the size of a "text" field.
      Added boilerplate header, some cosmetic/POD changes.
      Added small transform to turn "XML-SQLFairy" into "XML::SQLFairy" so that
      Added "template" option for TTSchema produce, added docs for "pretty" and
      Started trying to think of all the changes for 0.03.
      Created a separate rule for "text" datatype and set the "size" to "64,000";
      Removed "use warnings" to make 5.00503-friendly.
      Bug fixes prompted by Gail Binkley:
      Fixed up POD, some other cosmetics changes, removed "use warnings" to make
      Added a little to POD, some other simple changes to make more like other
      Fixed POD.
      Adding former "SqlfXML.pm" as "XML/SQLFairy.pm."
      Moved to "XML/SQLFairy.pm."
      Adding it back and making it an alias to XML::SQLFairy.
      Corrections.
      Moved XML producer.
      Increased field sizes to match what PG parser is storing now.
      Changed the default sizes to be the width of the field, not the byte size.
      Got rid of "our," changed field sizes.
      Changed field size.
      Got rid of "our."
      Modified to use XML::SQLFairy, changed field size to match MySQL parser.
      Changed to use XML::SQLFairy.
      Added Ying.
      Changed default layout type.
      Some POD changes.
      Chagned default.
      Changed default layout.
      Added tests for check constraint.
      Added test for check constraint.
      Some cosmetic changes, some stuff dealing with check constraints, make
      Wasn't adding check constraints! Fixed.
      Now with check constraints!
      Some special logic to handle check constraints.
      Some special logic to better handle check constraints (which don't have
      Expanded on AUTHORS, added dumping of Schema object if debugging, added
      Added link to CPAN ratings.
      Fixed problem with "deep recursion."
      Small change.
      Moving to XML/SQLFairy.pm.
      Adding old SqlfXML producer as XML/SQLFairy.
      Cosmetic changes.
      Changed the way "_list" works to use File::Find so it can go deeper to find
      Adding XML and XML-SQLFairy producers.
      Changes to quit using "SqlfXML."
      Got rid of "SqlfXML."
      One day we'll make this the 0.03 release.
      Added "META.yml" as newest version of ExtUtils::MakeMaker will add this
      OK, I guess it's not necessary to explicitly add "META.yml" to the MANIFEST
      Corrected the docs a bit (no more data structure to pass around!), but
      Fixing docs.
      No real changes.
      Moving files around, removing ".pl" suffixes, now all start with "sqlt."
      Moved POD around, added all options for all parsers and non-graphical
      Fixed up POD, allow for other ways to specify db.
      Adding new files.
      Changes.
      Added "title" arg.
      Added default of PAargs.
      Fixed drop table rule.
      Updated with new bin file names.
      Used to be "sql_translator.cgi."
      Alias to XML::SQLFairy.
      Fixed script name.
      Added drop table rule, saving of table comments.
      No longer calling script "sql_translator.pl."
      Fixed EXE_FILES filenames.
      Fixed docs.
      Fixed bug in comment rule.
      Added test for table comment as SQL comment.
      Added rule for "create index" and fine tuned "default_value" rule.
      Added some Oracle datatypes so you can translate Oracle-to-Oracle; better
      The arg is called "output_type," not "image_type."
      Trying to expand to cover all args to parsers and producers; NEED HELP
      Changed POD to match class name, added link to Turnkey project page.
      Removed backslash-escaping of single quotes where unnecessary, ran new
      PG was choking on a lot of small errors, and some of the translations didn't
      Fixed problems in foreign key rule.
      Allow space as part of default value.
      Added alternate form of comments "/* ... */."
      Added tests for alternate form of comments.
      Added a comment for a test.
      Wasn't producing field comments.
      Checking for field comments now.
      Trying to correct header for HTML.
      Added some rules to better handle the output of DDL::Oracle, now saving
      Removed unnecessary code.
      Fixed "parse_list_args" to not stringify reference arguments.
      No need to create constraint names if they don't already exist (and PG
      Push foreign key defs to end of file to avoid errors with referenced tables
      Added comment for FKs (if necessary).
      Kill size attribute if field data type is "text."
      New SQLite parser.
      Adding new DBI and experimental DBI::SQLite parsers.
      Added options for DBI parser.
      Streamlined.
      Adding a MySQL/DBI parser that talks to the db for the structures.
      Fixing AUTHORS.
      Some mods.
      Adding Sybase parser for Paul.
      Fixed syntax error.
      Adding new Sybase entries -- should the full "use" statement be used or
      Added "add_trigger" method.
      New trigger class.
      Tests for triggers.
      Now parsing and adding views and triggers to schema.
      Don't make up unnecessary monikers, added support for table options that
      Lots o' refinements.
      Disabling some suspect code.
      No longer require file arg here -- let modules decide.
      Fixed a problem with POD.
      Adding Paul's Procedure class.
      Fixed POD, added "schema" method, fixed DESTROY.
      Added "schema" method, added to object initialization and destruction.
      Fixed spacing.
      Added support for adding procedures, getting procedures and triggers.
      Added tests for procedures, expanded view and trigger tests.
      Adding SQLite parser test.
      New test data for SQLite parser.
      Fixed bug that wasn't maintaining table order.
      Basic tests for SQLite, will expand later.
      Fixed syntax errors.
      Fleshing out.
      No reason to see Excel parser.
      Needed semicolon on end of constraint def.
      Small change to prevent "use of uninitialized value" warnings.
      POD changes, small change to prevent "use of uninitialized value" warning.
      Fixed normalize_name per DLC.
      Fix PK info if available.
      POD fix.
      Trying to be smarter about intuiting field types and sizes.
      POD fixes.
      Fixing up "view_field" to use all the field attributes.
      Adding a DBI-Pg parser based on DBIx::DBSchema, but I'm not sure if I'm
      Added PG support, more POD.
      POD fixes, fix to field size for a float.
      Fix to size of float field.
      Minor changes.
      Fixed some spacing and POD.
      Fixing POD.
      Added boilerplate intro, fixed POD.
      Fixed POD, removed commented warnings.
      Added boilerplate intro/copyright.
      Doc fixes.
      Mostly POD fixes.
      Small POD fix.
      Added some more POD.
      Notes about DDL::Oracle.
      Fixes to POD, mention DDL::Oracle.
      POD fixes.
      POD fixes, removed some unnecessary code.
      POD fixes.
      Trying to get sqlt to play nicely with DBI parsers which need no input file.
      Added a little to the POD to explain version dependency.
      Be stricter about no comments.
      Adding new schema differ.
      Corrections.
      No reason to distribute logo.
      Removed "use Data::Dump" as it wasn't being used and makes tests fail if
      Not ready to distribute this script yet.
      Keeping up with changes in code.
      Some refinements in assigning field types, size of field for floats.
      Getting float sizes to work properly.
      POD changes.
      Fixing default datatype if unknown.
      Upped number of constraints.
      Changed a little to make more interesting.
      Keeping up with changes in module.
      Update number of tests.
      Added YAML.
      Fixing error ""my" variable %format_X_name masks earlier declaration in same scope at t/20format_X_name.t line 11."
      Added "order" to tables (should have been there already, but was also causing
      Added missing data file.
      Added another missing test data file.
      Added map to overcome "Use of uninitialized value in join or string at
      Corrections after adding scan fields.
      Fixen.
      If no data is present, then assign a field size of "1" and not "0" (as the
      Changing test to keep up with module.
      Some of the same changes to Excel.
      Updated for release.
      Updating for 0.03 release.
      Updating for release.
      Small fix.
      Updated changes.
      Fixed typo in POD.
      Making a more thorough check of things.
      Pushing up revision so make CPAN happy.
      Increased VERSION to 0.04 to make a fix-up release.
      Indicating changes for 0.04.
      Hardcoding PREREQ_PM, removing code that set it dynamically.
      Changed tempfile stuff to use "mktemp" to get rid of warnings.
      Initializing variable to suppress silly warnings.
      Skip creating package if no name (must have been an error in the source file
      Added more rules to handle all variations of the "ALTER TABLE" statement
      Added tests to keep up with new ALTER TABLE rules in the grammar.
      Fixed number of tests.
      Minor changes.
      Fixed errors in grammar as reported in "."
      Changed default_val rule according to bug report on RT.cpan.org.
      Fixed constraints and indices.
      Null out size if a blob field.
      Various bug fixen.
      Wasn't producing constraint defs, fixed indices to be a list.
      Updated.
      Small fix.
      Try to make usage clearer.
      Changed index creation to better handle output of DDL::Oracle.
      Fixed test.
      Outline for manual.
      Added reference to manual, fixed copyright.
      Being case-insensitive on datatype translations.
      Added manual.
      Changed to print out schema mutation statements (CREATE/ALTER/DROP) instead
      sqlt-diff is now functional enough to include.
      Updated.
      Added Dave Cash for his work on GraphViz producer.
      Changed copyright.
      Fixed copyrights.
      Updated.
      Fixed copyright.
      Fixed copyrights.
      Fixed copyrights.
      Fixed copyrights.
      Minor bug fixes ("top" and table links didn't work), added borders to tables,
      Pushed field attributes to after datatype/size to make consistent with
      Fixing up new options for showing data types, etc.
      Added some debugging comments.
      Applied patch to pass new options to producer.
      Added rules (REM, prompt) to help parse output of DDL::Oracle; changed
      Upped $VERSION to 0.05, created a "version" sub to easily get this.
      Misplaced semi.
      Changed test to match output.
      Adding another data file for sqlt-diff test.
      Some bug fixes, better formatting when no field size is applicable.
      Updated to match new output; also had to remove "close STDERR;" to get working.
      Now installing sqlt-diff.
      The manual isn't in a state to send out.
      Small change to make installation easier.
      Updated with what I can think has changed.
      Fixed conflicts.
      Added "SIGN" option.
      Adding SIGNATURE.
      Adding SIGNATURE file.
      Apparently it's unnecessary for me to put the SIG in the MANIFEST as it's
      Modified comment rules.
      Updated with comment tests.
      Requiring a recent version of Spreadsheet::ParseExcel as an earlier version
      Fixed bug in script which didn't actually escape single ticks in string vals.
      Added options for new Dumper producer.
      Adding a producer to replicate (and extend) what was being done in the
      Fixed indentation.
      Getting translator options.
      Serializing translator options.
      Changing to use new Producer.
      Some fixes.
      Removed "sort" as it puts the tables in a different order than they appear in
      Moving to test module not script.
      Adding test for Dumper producer.
      Added "takelike" option.
      Added tests for Dumper producer.
      Add test for closing file.
      Updated to match new YAML output.
      Used code found to fix reading of STDERR (?).
      Some editing to make "make test" happy.
      Requiring latest version of XML::Writer where error was fixed.
      Oops! Fixed bug.
      Fixed per Tony Bowden to allow all integer fields (including LARGEINT,
      Adding Manual now that it is somewhat useful.
      Now with content!
      Fixes.
      Added section on Dumper producer.
      Added a couple more sections.
      Added examples of plugging in custom parsers and producers.
      Just a couple tweaks.
      Massaging of Oracle varchar2 field if larger than 255.
      Adding test data for Access parser.
      Committing new Access parser (yeah, Bill, we're coming after you now).
      Added Access data types.
      Adding tests for Access parser.
      Added alternate way to declare unique constraints.
      Updated tests for alternate unique constraint.
      Upped version. We should release soon.
      Give higher billing for manual.
      Added bit to help.
      Fixed error in output.
      Fixed merge.
      Some changes to get tests to pass.
      Fixed problems with non-unique names.
      Added "enum" to string types.
      Added 'date' to "string" types.
      Fixed documentation in template.
      Fixed docs.
      Adding new Module::Build-based build script.
      Changing to use Build.PL/Module::Build.
      Updated.
      Fixed naming of file.
      Removing commented-out code.
      Just some cosmetic changes to the docs.
      Removed some old code, make table type "InnoDB" if there's a FK constraint.
      Make sure there's some size value for a character-based field.
      Add indexes for FKs as necessary.
      Check for >255 field size for all char fields (not just varchar); turn a
      Take the defined field size if present.
      Added "version" argument to show SQLT::VERSION.
      Adding Jess's DB2 DBI parser.
      Applying patches from Markus Törnqvis.
      Fixed problems in POD.
      Allow "gutter" to be set by producer arg (Markus Törnqvis).
      Allow "gutter" to be set by producer arg (Markus Törnqvist).
      Some updates to tests to skip when required dependencies are missing.
      Bug fix to keep FK constraints from being created twice, expanded a couple
      Moved "interval" rule.
      Trying to be smarter and stricter on translating FKs.
      The default too often was "tinyint," changed to only make it that when
      That last commit wasn't very thought-out.
      OK, I really got it this time.
      Fixed syntax error.
      Fixed POD.
      Fixing the error "Can't use an undefined value as a HASH reference at
      Moved graph tests out.
      Tests moved from 13graph.t.
      Accepted changes from Eric Just.
      Made the field definition rows alternate "even" and "odd" classes so
      Some changes that should have been applied a while back.
      Some cosmetic changes.
      While trying to create CDBI classes myself, I found some of the decisions
      Changes to work with latest MySQL TIMESTAMP columns.
      CPAN is complaining about the version, so I made a trivial change to get
      Added tests for other comments.
      Added "official" table and field comment rules.
      Field comments weren't printing.
      Put field and table comments into proper MySQL syntax.
      Added Ben, changed my name.
      Testing CVS checkin, added two new developers.
      The short arg for "font-size" was colliding with the "f" for the parser.
      Fixed usage docs.
      Just committing, but is it necessary? It will be regenerated by Build.PL.
      Moved YAML to a build requirement.
      Changes to make more efficient use of memory with large tables.
      Changed my name.
      Changed to build up schema from "show create table" statements and then
      Updates.
      Added grammar for handling "unique" and "key" qualifiers to field definition.
      Added tests for "unique" and "key" qualifiers to field definitions.
      Removing.
      Removed SIGNATURE.
      Commiting.
      Added use of graph.
      Fixed handling of NULLs.
      Adding option to skip tables.
      Adding option for skipping tables.
      Put double quotes and backticks around table identifiers to test rules.
      Added double quote rule for table/field identifiers, cleaned up some code
      Per a suggestion from Darren, changed the skipping of tables in the GraphViz
      Fixed typos.
      Upped requirement of Parse::RecDescent.
      Added patch from user.
      Added "FULLTEXT" as valid index type.
      Added test for fulltext.
      Some code cleanup, added clustering of tables, fixed a bug that kept circular
      Upped version to 0.10.
      Return lines of output in a list context per user request.
      Fixed name, perl.
      Fixed my name, Perl shebang.
      Return lines of input in a list context, fixed my name.
      Lots of cleanup.
      Lots of cleanup, removal of "foo" variables which are opaque.
      Fixed "database_events".
      Fixed up "equals" to be more informative when there's a problem.
      Fixed to pass tests.
      Fixed "database_events."
      Cleaned up, Template version no longer a problem(?).
      Changed database_events to return an arrayref.
      Cleaned up a bit, checked for interactive tty so that ugly warning
      Checked for interactive so no ugly messages during test suite.
      Cleaned up, fixed db events.
      Now passes.
      No warnings during tests -- it's ugly.
      No warnings on test.
      Template version doesn't matter.
      Fixed "database_events."
      Database events now returns a list.
      Fixed "database_events."
      Fixed "database_events."
      Use three-arg open.
      Use three-arg open.
      Just made a little more readable.
      Need to call "load" as a class method rather than an object method.
      Make a little more flexible.
      Catch case of specifying 'DBI-Driver' and quietly fix this.
      Added a little more POD on clustering.
      Fixing my name (is this vain?).
      Adding.
      Added test for list context return.
      Added return of diff as array in list context.
      Changes to handle a constraint like:
      Added tests for check constraint.
      Test and data for FK in SQLite.
      Parsing of foreign keys. Resolves RT#48025.
      Use Readonly for constant value, some aesthetic changes.
      Added "using btree" for test.
      Added parsing of index types (btree, rtree, hash).
      Adding patch from user.
      Some whitespace changes, create a default index name if one is not provided.
      Just whitespace changes -- I wouldn't normally commit something like this, but
      Smoother regex (thanks Jim Thomason).
      Fixed my name, fixed some use of uninitialized vars.
      Just fixed some whitespace. Resolves RT#35448.
      Resolves RT#21065.
      Resolves RT#8847.
      Fixed some whitespace, resolve RT#13915 by requiring XML and YAML libs.
      Upped version for 0.10 release.
      Upping version per RT#43173, fixed my name.
      Some whitespace fixes, resolve RT#42548 (incorrectly inserts the size in
      Added patch from user (RT#42548).
      Fixes per RT#37814 (parsing of field/index names with double quotes, also fix
      Added tests for RT#37814 (parsing of double quotes, autoincrement).
      Fixed test.
      Leave out empty bits.
      Fixed test.
      Just a cleaner version of "next_unused_name."
      Removing, make dist smaller.
      Upped version numbers, cleaned up code, fixed my name.
      Changed version number to stay consistent with new scheme.
      Changed "no_columns" arg to "num_columns" (I used to use "no_" for both
      Added "skip_tables" options.
      When adding the edges, it's necessary to quote the table names exactly the
      sqlt-graph now has a --trace option.
      MySQL Parser now handles views more completely
      Some aesthetic changes
      Added "tables" and "options" methods to Schema::View
      Allow VALUEs to be enclosed in double and single quotes, specifically

Larry Leszczynski (1):
      Suppress "Exiting subroutine via next"

Lukas Mai (2):
      image is returned, not written w/o out_file (bug #71398)
      binmode STDOUT to not generate garbage in a UTF-8 environment (bug #71399)

Mark Addison (168):
      Moved Producer::XML to Producer::SqlfXML.
      Fixed default value bug in Parser::SqlfXML.
      Added TTSchema producer.
      Added BUG notes and test about single tags e.g. <foo/>
      Changed term single tags to empty tags to mean <foo/> like tags, it being the correct term :)
      D'oh! Fixed comment typo that chaged the meaning of the comment.
      Fixed bug with emit_empty_tags. It now works and we get more explict values for things instead of lots of empty sets of tags ie <foo>0</foo> not <foo></foo>.
      Added a test for Producer::SqlfXML.
      Added explicit version of 1.13 for XML::XPath as the parser doesn't seem to work with older versions.
      Added attrib_values option.
      Added support for the attrib_values option of the XML producer.
      Initial version of XMI parser.
      + Added visability arg.
      Refactored the internals so that the XMI parsing is seperate from the
      More refactoring and code tidy. We now have get_attributes and
      Added parsing of operations from the XMI. Do we have enough to do the Rational Profile now?
      Split out XMI parsing to SQL::Translator::XMI::Parser. All the XPath is
      Doc notes on version selection and added xmi.id to specs.
      Started on rational data modeling profile.
      Added lib/SQL/Translator/XMI/Parser.pm
      Moved Rational profile code to its own mod.
      Added support for tagged values, so
      The data, as it is being parsed, is added to $self->{model}. Anything seen before
      Different versions of XMI now handled by sub-classes (still generating methods
      Changed debug to dump xmi model data and not just classes.
      AssociationEnds gets an 'otherEnd' ref added so you can navigate associations
      Added FKeys from associations
      Added some test xmi files for rational profile (tests coming soon ;-)
      Moved visibility test to its own .t
      Tidy up, as they seemed to be a bit of a mess.
      Removed debug warning
      Added test for the Rational profile.
      Made dataType a proper obj instead of just a name. Added tagged values to DataType's
      Fixed serious bug in xmiDeref: was only working for tags with xmi.idref
      Added multiplicity to AssociationEnds.
      Removed some unneeded, commented out code.
      Fixed broken otherEnd linking for assoctionEnds.
      Removed TODO comment- cos its done
      Changed ends to associationEnds in the Association's spec.
      Made debugging work and it now exports its parse method.
      Initial code for SQLFairy UML profile for the XMI parser. The name may need
      PKeys automatically generated for Classes that don't set them explicitly with
      Added m:n joins. Can now model m:n association in your class diag and the
      POD update for m:n stuff.
      Fix: Now includes the Schema's name and database attributes
      Added missing test for attrib_values=>1.
      Moved all the XMI stuff from MANIFEST to MANIFEST.SKIP and added missing
      Fixed broken --emit-empty-tags option.
      Added Views, Procedures and Triggers to bring it inline with the current Schema features.
      Added Views, Procedures and Triggers to bring it inline with the current Schema features and updated producer.
      When using attrib_values the attribs are now written in alphabetical order
      Refactored attrib ordering fix. Handles order given as attrib.
      Added list of constants exported to the docs.
      Order of schema objects properties in XML changed to something more sensible
      reference_fields now returns an empty list (or array ref) for constraints that
      Fixed so they test for empty list reference_fields and not the old undef return.
      Test::More Schema testing. Uses Test::SQL::Translator.pm
      Added schema_ok.
      Some tweaks to the test output.
      Now uses schema_ok
      Now uses Test::SQL::Translator
      Fixed test count.
      Added Todo
      Removed unused Test::Exception
      Added lib/Test/SQL/Translator.pm
      Added stringify to name and error check to stop creation of object without a name.
      Tests for Table and Field stringify.
      Tweaked to use (test ;-) table and field stringify.
      Removed check on field name when adding fields, as the fields constructor does it now.
      Added full_name.
      Doc tweaks.
      Doc tweaks.
      Added shortcut method to get the fields Schema object.
      Added test of field schema shortcut and table stringify.
      Changed the 'id' field to 'age' on the first test constraint, so that one of its
      fields returns Field objects when it can.
      Added field_names().
      Uses field_names when making constraint YAML to avoid field overloading.
      Tests of Constraint::fields() object return and Constraint::field_names().
      Removed __WARN__ handler.
      Opps, forgot the sub seperator comment.
      Added field_names() and field/constraint lookup methods
      Added test of Table::field_names()
      Tests for the Table, field and constraint lookup methods
      Added TT::Table producer.
      Doc fix.
      Initial crack at a base class for building TT based producers.
      As a quick test to see if Test::SQL::Translator really is quicker added tests
      Added hook for sub classes to set Template config. Docs next...
      Added test for tt_config
      Tweaked tt_schema
      Documentation.
      Initial stab at a section on building producers with TT::Base
      Tweaked to use the default reading of __DATA__
      Added experimental pre_process_schema method.
      Doc tweaks.
      Updated to produce the new, single format sqlf xml.
      Updated to parse the new, single format sqlf xml and emit warnings when the old style is used (but still parse it).
      Updated to test the new, single format sqlf xml.
      XML test src changed name.
      *** empty log message ***
      Added docs about the legacy format xml.
      Added items about the change of XML format and additional TT based producers.
      Added producer args to control indenting, newlines and namespace prefixing.
      Updated with the new XML producer args.
      Added writing of field.extra
      Allow extra to be set via constructor.
      Added parsing of field.extra
      Fixed break due to XML Parser now supporting field.extra
      XML test file changed
      Move the list of methods to write as elements out into a global.
      Added collection tags for the Schemas objects (tables, views, etc)
      Doc tweaks
      Doc tweaks
      Doc tweaks
      Doc tweaks
      Doc tweaks.
      Added SQL::Translator::Schema::Object, a base class for all the Schema
      Added diagnostics on fail.
      Added _attributes class data to SQL::Translator::Schema::Object for sub classes
      All Schema objects now have an extra attribute. Added parsing support (and
      Added writing of extra data for all objects to XML producer.
      Added extra and Schema::Object stuff
      Refactored producer() and parser() to use a sub, _tool(), implimenting their
      Factored _load_sub() out of _tool(). Ground work for adding filters.
      Re-added ttvars.
      Added docs and test for ttvars.
      Applied Dave Howorth's MySQL parser patches
      Deprecated ttargs producer arg in favour of tt_vars
      Added tt-vars to pass variables to templates.
      Template config is now passed using tt_conf producer arg. Deprecated passing
      Added tt-conf option
      Added TTSchema changes.
      Clean up
      Added schema filters
      Re-added proper error trapping and reporting when trying to set the parser,
      localised setting of %Data::Dumper::Maxdepth (and Indent), to stop it polluting
      Removed (annoying) warning when order attributes are not used.
      Fixed test name
      Made extra interface more context sensative.
      mysql_table_type extra data and InnoDB derivation fix.
      A SQLServer producer. (They made me do it! :(
      Fixed YAML testi. Failed due to sqlfairy version increase and YAML not putting
      Fixed. Broke due to changes in YAML.
      Added hack to remove the Graph stuff hack of adding a ref to the translator, to
      Upped version of Test::More req so we get upto date is_deeply().
      Fixed error propogation when loading filters.
      Fixed broken comment in test (from Chris Hilton patch)
      Added Chris Hilton's patch so version number test is based on current version and not hardcoded.
      Added explicit close on FILE and fixed spelling mistaked (from Chris Hilton patch
      Apllied Chis Hilton patch to fix. (Test used non existant test file)
      Fixed InnoDB table type errors (Chris Hilton patch)
      Applied Chris Hilton patch to remove 'Use of uninitialized value in addition' warnings
      Apllied Chris Hilton patch to remove subroutine redefinition
      Parses extra attributes for tables.
      Tweaked debug output.
      Added mysql extra attribs for charset and collation.
      Added test for mysql_table_type.
      Removed dodgey testing code so test works for everyone else.
      Tweaked filter interface to take args as a list.
      Added mysql_character_set for 4.1+
      Experimental filters
      Fixed missing schema table comments
      Build support for installing templates.
      Intial code for Dia producer
      Backed out M::B ConfigData based installed templates and moved DiaUml
      Removed un-used stuff left from CutNPaste of t/36-filters.t
      Added Globals filter.
      Fixed warning for tables with no order attrib
      Added constraint copy to Globals filter. (Seem to be getting lots of warnings from YAML now...)
      Globals filter now copies extra

Mateu X Hunter (1):
      Test Str and ArrayRef input for SQLT->filename

Matt Phillips (1):
      remove spurious warning from 6+ years ago.

Michal Jurosz (1):
      fix doc typo

Mikey Melillo (4):
      added need for Excel Spreadsheet parser module :)
      Init Check in. This follows closely along the lines of xSV.pm but its cooler
      added a text spreadsheet, hopefully in the right directory and such

Moritz Onken (1):
      Make Pg producer consistent with the rest in terms of quoting

Nick Wellnhofer (1):
      Fix erroneous PostgreSQL floating point type translations (RT#99725)

Paul Harrington (11):
      dump out views and stored procedures
      extract out text of stored procedures and triggers
      Storable stuff
      load test for sybase DBI parser
      add methods for dealing with procedures
      reverting to previous revision as I had duplicated some of the subroutines to handle procedures... sorry about that!
      store in network order
      still a bit hacky .. would like to use a method that will work on either "store" or "freeze" data. This will still break if you are passed in a file that contains the result of a freeze
      use DBI and a recent revision of DBD::Pg
      This module does not actually use any DBD::Pg calls, just DBI.
      However, the DBI calls are delegated to DBD::Pg and the revision on CPAN (1.22) does not have very usable table_info() and column_info catalog methods.
      Put in a nagging message so that the user does not get too shocked by the poor results if running with <1.31
      sp_columns on Sybase 12.5.2 (and, perhaps, earlier) needs to be passed nulls explicitly so pass in list of undefs to column_info

Peter Mottram (3):
      fix Producer::SQLite::batch_alter_table rename field
      initial batch_alter_table for ::Producer::PostgreSQL
      test to demonstrate Pg diff failure with rename_table+rename_field

Peter Rabbitson (184):
      Add sqlite roundtrip test (probably need to do the same for the rest of the parser/producer combos, possibly using a known xml schema as a starting point)
      Release 0.09003
      Update autogenerated makefile
      Another chunk of the GraphViz rewrite:
      Strip evil svn:keywords
      This file is empty, tests seem fine... deleting
      Actually there was an empty test for it as well :)
      Force everything to 1.99, hopefully will work
      Reduce $Id to its normal form
      Minor test cleanup
      Cleanup part 2
      Remove all expansion $XX tags (isolated commit, easily revertable)
      Forgot to up one VERSION
      Schema::Graph - switch ugly use of Log4perl to an even uglier (but at least available) use of Class::Base for debug messages
      Downgrade global version - highest version in 9002 on cpan is 1.58 - thus go with 1.59
      Whop - forgot to commit 9004 changes
      Support for mysql fully qualified table names (db.table) by Debolaz
      MySQL parser patch by Tokuhiro Matsuno
      Make minor adjustments to the grammars in order to work around
      Pg producer improvements by mo: ALTER TABLE / ALTER COLUMN / DROP DEFAULT
      Minor POD fix
      Merge 'trunk' into 'roundtrip'
      Somewhat working global roundtrip test
      Add an oversized varchar to schema.xml
      Fix mysql roundtrip glitch
      Test seems to be finioshed - massive bowl of FAIL
      Bail out from failing tests early
      Translator/schema dependency test
      Todoify test - probably not in this lifetime
      Changes to tests to go
along r1512 Teah xml parser about database_events Extra data and first test for xml database_event support Improve xml database_event deprecation warning Teach sqlite how to deal with multi-event triggers Adjust xml-db2 tests Add Carp::Clan to dependencies Merge 'trunk' into 'roundtrip' Test XML roundtrip as well (also fail) re-fix tests after merge Bring version bump back - there is still work to be done (reoundtrip branch), I think this is too early Remove duplicate req Rewind exhausted globs before attempting a read Do not add xml comment header if no_comments is set Better debug output table/field counts are held per-object, not globally Concentrate on testing the 'big 3' for now Variable table and column names? wtf?! SQLite improvements: Adjust insane tests to pass (the expected returns at times are mind-boggling) Fix a couple of mistakes in the PG parser Add disabled YAML roundtrip test Update changelog Add POD checker and fix a couple of POD errors Reorder authors and consolidate them in one point (AUTHORS). Add link from main POD Switch to Module::Install Merge forgotten rewrite of the GraphViz producer - keep all the logic intact but do not do any parameter sanity checking - just pass things straight to GraphViz.pm Graphviz does not like empty hahsrefs - prune those VIEW support for Pg parser, also some cleanups Add parenthesis into the VIEW definition to make sure the pg parser still can deal with them The way we generate create view statements is not standards compliant (per RhodiumToad in #postgresql) Multiple fixes for the SQLServer producer/parser combo Merge 'trunk' into 'roundtrip' Make perlpod happy Merge 'roundtrip' into 'trunk' Strip exe bit Fill in changes, todoify non-passing tests (to draw attention) Release 0.09005 Fix a couple of tests and add forgotten dependency Teach YAML producer to encode extra attributes Create a YAML copy of the main roundtrip schema - this is what we use to run t/60roundtrip.t without depending on LibXML. 
The YAML is regenerated on every Makefile.PL run Switch the roundtrip test to the yaml base source. Add test skips More test sanification Proper support for size in pg timestamp columns (patch by mo) Saner Changes (bugfixed) release 0.09006 Backout teejay's changes to get trunk stable again Why the fuck did I do that ??? Mssql does not understand ON DELETE RESTRICT Do not cache the P::RD grammar - it breaks stuff MSSQL fixes and improvements Release 0.09007 The indexer does not understand license links Fix for bleadperl Use accessors instead of reaching down object guts YAML is a test-dep (or at least it's a good thing to have) Release 0.11001 Makefile fixes + changes Release 0.11002 Sanify test Real 0.11002 release Fix pg matchtype parsing Changes Cleanup changelog Someone claimed this doesn't work proper pg timestamp parsing (by mo) Fix RT#27438 Enforce XML::LibXML version requirements Make maybe_plan insufficent-version-aware Dep fixes and changes Release 0.11003 Add a size/precision field to test file, make sure xml parser/producer work fine Add numeric/decimal precision support to DB2 producer Fix TT templates and tests Fix RT49301 Oracle/SQLite test adjustments to deal with new testdata field Changes changing SQL::Translator::Diff to use producer_args instead of producer_options Cleanup tabs/whitespace Changes Changes2 Oracle fix primarily to have it not capitalize but quote instead Release 0.11005 Merge 'oracle_datatypes' into 'trunk' Get SQLite to produce saner output Changes Changes reformat Make 'default_value' behave like a real accessor Fix index quoting for mysql Adjust view production for stupid mysql Awesome non-quoted numeric default patch by Stephen Clouse Release 0.11006 Even though it is in the depchain due to YAML - just go ahead and declare it Bump M::I dep to a version not abusing FindBin Turn the roundtrip source generation fail into a warning Add support for PostGIS Geometry and Geography data types Fix some legacy code to stop warning on 
newer perls Support a custom_type_name hint for Pg enum creation Fix sqlt options not matching documentation (RT#58318) Changes from abraxxa Fix MySQL producer attaching View definitions to a schema Correct postgis geography type insertion and linebreak fix for multiple geometry/geography columns .gitignore Make autoinc PKs show up in GraphViz Fix RT#64728 Better diagnostics Parse new SQL Server stuff Do not depend on implicit hash ordering in YAML load gitignoring patch from rt67989 applied, changes dependency from Digest::SHA1 to Digest::SHA Add giftnuss to contributors, awesome triage work Drop Class::Accessor::Fast in favor of Moo Use precompiled Parse::RecDescent parsers for moar speed Tab/WS crusade Combined patches from RT#70734 and RT#44769 Fix misleading Diagram POD Fix ignored option to script/sqlt-diagram (RT#5992) Changes for a1afcdb6 Add forgotten contributors from various patches: Fix spurious whitespace failures in t/17sqlfxml-producer.t (RT#70786) Pod fixage Deprecate SQL::Translator::Schema::Graph, undocument as_graph() Add a git mailmap Fix incorrect ordering in test (fails under unstable hash order i.e. 
5.8.1) Fix MANIFEST.SKIP (MYMETA fail) Quote all dep versions (preserve trailing 0's and whatnot) Forgotten dependency used in bdf60588 Add forgotten test skip after removed dependency in 1abbbee1 Correct SQLite quote-char Really fix mysql CURRENT_TIMESTAMP handling (solves RT#65844) A set of placeholder directories for future refactoring Move ProducerUtils into the new dir layout Merge branch 'people/frew/mega-refactor' Back out bdf60588b to disable P::RD grammar precompilation - until P::RD is fixed Silence prove -w warnings Incomplete revert in 0eb3b94a5 Dependency cleanup Switch to sane subcommand launch in tests Stop the DBI parser from disconnecting externally supplied DBI handles Fix silly syntax error, introduced in 0c04c5a22 Add Jess as authority for new namespaces Fix broken plan Hide deprecated stuff Resurrect the DB2 precompiled grammar to which we lost the source This seems to no longer be used anywhere...? Moar documentation, shape up license/copyright notices Fixor borked docs (we already test for this in t/66-postgres-dbi-parser.t) Remove old dist tarball which snuck in during a14ab50e Remove SQL::Translator::Schema::Graph as announced in 0.11011 Moo port makes both of these no longer needed Both of these are pre 5.8 coredeps Lose one more useless dependency XML writing is not strictly necessary for everything else to work No versions in use statements - encourages shit like autorequires Properly tag our XML namespace URI - it is not a real link Rafael Kitover (7): normalize SQLite and Postgres version numbers remove unnecessary bit of code patch from abraxxa (Alexander Hartmaier) to truncate unique constraint names that are too long update Changes Fix POD typo in SQL/Translator/Schema/Trigger.pm (RT#63451) Add support for triggers in the MySQL producer (RT#63452) ilmari++ tests for MySQL Producer triggers Ricardo Signes (2): properly compare fields fix test expectations Robert Bohne (3): Call ->on_delete & ->on_update in SCALAR context not in LIST 
context Handle on_delete => 'restrict' in Producer::Oracle Change my E-Mail address Ross Smith II (15): MySQL allows for length parameter on PRIMARY KEY field names Applied patch #780781: Missing binmode in Producer/GraphViz.pm. Applied CPAN patch #3235 MANIFEST: missing some files [patch] Applied CPAN patch #3247: Missing ora_data_type dec(imal) Fixed ENUMs (They were coming over as ENUM(n)). Stops Use of uninitialized value in numeric gt (>) at /usr/local/share/perl/5.6.1/SQL/Translator/Parser/MySQL.pm line 594 zerofill has translated as 1 We need to quote the enum field values Added test file that includes all (well, most) of the DDL for MySQL Added test file that includes all (well, it will) of the DDL for Postgres Lots of Postgres fixes: Added quotes around ENUM values in CONSTRAINT for Oracle & Sybase. Added DOUBLE PRECISION. Parser::MySQL chokes on this Added FIXED as DEC per Fixed ORA-02329 and ORA-00907 errors. Sam Angiuoli (1): added Sybase producer Scott Cain (2): revamped Pg DBI parser to query system tables. This still doesn't work refining the parser, but it still doesn't do everything it should: Sebatian B. 
Knapp (3): Add (now passing) test with file from RT#70473 fixed typo reported in rt68912 Fix for mysql producer drop primary key, refs #62250 Songmu (1): Add SQL_TINYINT and SQL_BIGINT to %SQL::Translator::Schema::Field::type_mapping Stephen Bennett (1): Use CASCADE when dropping a postgres enum type, to be consistent with table drops Tina Mueller (3): add support for "DEFAULT (\d+)::data_type" in PostgreSQL Parser fix diff for altering two things per column - add ; at the end produce_diff_sql(): list context Vincent Bachelier (2): quote properly all table name test quoted for mysql Wallace Reis (5): Added CREATE VIEW subrules for mysql parser Produce DDL for MySQL table field with ref in 'ON UPDATE' Fix warning due uninitialized var Extend Field->equals() for numeric comparison Add more data_types for sql_types mapping William Wolf (1): Fix Pg diff issue with drop constraint on primary keys clodeindustrie (1): Add Json and hstore types in the PostgreSQL parser giftnuss (1): Change mysql parser to throw exceptions on unspecified default values (RT#4835) gregor herrmann (1): Imported Upstream version 0.11021 rporres (1): Fixed autoincrement in primary keys for SQLite ----------------------------------------------------------------------- No new revisions were added by this update. -- Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-perl/packages/libsql-translator-perl.git _______________________________________________ Pkg-perl-cvs-commits mailing list Pkg-perl-cvs-commits@lists.alioth.debian.org
https://www.mail-archive.com/pkg-perl-cvs-commits@lists.alioth.debian.org/msg48381.html
What is code for getting the time from the computers clock?
Those who live by the sword get shot by those who don't.
I can C.
Compiler: gcc
Code:
#include <iostream>
#include <ctime>
using namespace std;

int main(void)
{
    time_t TheTime;
    time(&TheTime);
    cout << ctime(&TheTime) << endl;
    return 0;
}
Is there a way to get JUST the time, like in int or long format?
Brendan
What do you mean by JUST the time? Hours? Minutes? Seconds?
Look into the tm struct, its member variables hold all kinds of different measures of the current time.
Code:
#include <iostream>
#include <ctime>

int main(int argc, char *argv[])
{
    using namespace std;
    time_t currentTime = time(NULL);
    tm *timeInfo;
    timeInfo = localtime(&currentTime);
    cout << timeInfo->tm_hour << endl <<  // hr (0 - 23)
            timeInfo->tm_min << endl <<   // min (0 - 59)
            timeInfo->tm_sec << endl;     // sec (0 - 59)
    return 0;
}
Last edited by Eibro; 09-04-2002 at 07:05 PM.
The seconds works. Thanks! The reason I was wondering was because I wanted to make a random number generator that is seeded based on the current time, which meant I couldn't use the date and all that. This way, it may be a bit more random than it usually is. Thanks again!
Brendan
You could use the Windows API function GetLocalTime() to initialize an instance of the SYSTEMTIME data structure...
Code:
#include <windows.h>
#include <iostream>

int main()
{
    SYSTEMTIME st;
    GetLocalTime(&st);
    std::cout << st.wYear << std::endl
              << st.wMonth << std::endl
              << st.wDayOfWeek << std::endl
              << st.wDay << std::endl
              << st.wHour << std::endl
              << st.wMinute << std::endl
              << st.wSecond << std::endl
              << st.wMilliseconds << std::endl;
    exit(0);
}
https://cboard.cprogramming.com/cplusplus-programming/24261-time.html
Hi Guys,
I have created an AWS account. I want to use AWS services for automation purposes. In my use case, I am using the Boto3 module. Can anyone tell me how I can describe a key pair using Boto3?
Hi@akhtar,
The boto3 EC2 client has a method named describe_key_pairs. This method can be used to describe your key pairs.
import boto3
ec2 = boto3.client('ec2')
response = ec2.describe_key_pairs()
print(response)
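If you need to act on the result, the response is a plain dict whose `KeyPairs` list holds one entry per key pair. Below is a minimal sketch (the helper name and sample data are made up for illustration) showing how such a response can be processed without touching AWS:

```python
def key_pair_names(response):
    """Return the key pair names from a describe_key_pairs-style response."""
    return [kp["KeyName"] for kp in response.get("KeyPairs", [])]

# Hypothetical sample shaped like an EC2 DescribeKeyPairs response
sample = {
    "KeyPairs": [
        {"KeyName": "dev-key", "KeyFingerprint": "aa:bb:cc"},
        {"KeyName": "prod-key", "KeyFingerprint": "dd:ee:ff"},
    ]
}
print(key_pair_names(sample))  # ['dev-key', 'prod-key']
```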
https://www.edureka.co/community/87781/how-to-describe-a-key-pair-using-boto3
Aug 06, 2019 02:49 AM|Majid Abu Rmelah|LINK
Hey,
I have been trying to add reCaptcha v3 to my website but I found many difficulties.
When I searched on Google for a way to do this in ASP.NET, the search results were all about MVC.
I don't use ASP.NET MVC, rather I use IIS Sites.
Any idea how to do this?
Thank you,
Majid Abu Rmelah.
Aug 06, 2019 06:15 AM|KathyW|LINK
Since MVC sites can also run on IIS, I assume when you say you use IIS Sites that you mean Web Forms sites?
In that case look at
The Default.aspx and Default.aspx.cs files show an example.
Aug 06, 2019 03:16 PM|Majid Abu Rmelah|LINK
I have added a label and I made it hidden. and I changed its text to the token.
this is what I added in .aspx:
<script src=""></script>
<script>
    grecaptcha.ready(function () {
        grecaptcha.execute('6LckfbEUAAAAAIXYauEmhossP8H5-03ArRSkpyf1', { action: 'homepage' }).then(function (token) {
            console.log(token);
            document.getElementById('<%= TokenLabel.ClientID %>').textContent = token;
        });
    });
</script>
This is my .cs files.
private static string Token = string.Empty;

protected void Page_Load(object sender, EventArgs e)
{
    Token = TokenLabel.Text;
}

private static ResponseToken response = new ResponseToken();

[WebMethod]
public static string CaptchaVerify()
{
    if (response.score == 0)
    {
        var responseString = RecaptchaVerify(Token);
        response = JsonConvert.DeserializeObject<ResponseToken>(responseString.Result);
    }
    return JsonConvert.SerializeObject(response);
}

private static string apiAddress = "";

private static async Task<string> RecaptchaVerify(string recaptchaToken)
{
    string url = $"{apiAddress}?secret=6LckfbEUAAAAAO4T0On0GekIPlDDJhs9QJ5rBCfq&response={recaptchaToken}";
    using (HttpClient httpClient = new HttpClient())
    {
        try
        {
            string responseString = httpClient.GetStringAsync(url).Result;
            return responseString;
        }
        catch (Exception ex)
        {
            throw new Exception(ex.Message);
        }
    }
}

public class ResponseToken
{
    public DateTime challenge_ts { get; set; }
    public float score { get; set; }
    public List<string> ErrorCodes { get; set; }
    public bool Success { get; set; }
    public string hostname { get; set; }
}
I got the following warning: CS1998: This async method lacks 'await'...
Now, what should I do when the user clicks on the.
As to the async message, that's a warning. You can ignore it if you want.
Aug 10, 2019 01:50 AM|Majid Abu Rmelah|LINK
I got this error in .aspx when I tried to run the Default.aspx inside the folder that I downloaded from GitHub.
#Error Could not load type 'GoogleRecaptchav3_example_In_asp_net.Default'. C:\inetpub\wwwroot\GoogleRecaptchav3-example-In-asp-net\Default.aspx 1
Then when I removed < ... and I ran Default.aspx, I got these errors in the console:
What should I do?
Aug 10, 2019 10:03 AM|KathyW|LINK
I have no idea why you get that error. If I copy the example files, it works. (There is no need to put the files under that long folder name, and just put the
code under your own namespace, like any other of your pages.)
Aug 11, 2019 01:33 AM|Majid Abu Rmelah|LINK
Alright, everything worked perfectly, I guess.
When I added everything accurately this time and I debugged the .aspx page, the token passed successfully, I tried to register, it showed the message that I am a human being.
But, the registration process didn't continue. When I click on register, it should send my data to MySQL then it should redirect me to another page where it shows that my registration was successful. Unfortunately, that did not happen, the loading icon was next to the title of the page and nothing happened.
On the left bottom of the page I got the message "Waiting for localhost...". and my page is stuck like this.
What do you think the problem is?
Aug 11, 2019 03:04 PM|KathyW|LINK
It's up to you to write code that checks the reCaptcha status at the right point and continues registration if it passes and returns to the page with an error message if it doesn't.
The sample simply gives you the basics of implementing and checking the reCaptcha.
Sep 05, 2019 12:41 AM|Majid Abu Rmelah|LINK
Sorry for late replay update, I have been busy.
There is only one thing that I am failing to do.
How can I trigger the button that is in c# file through JavaScript if the score is greater than 5? I have tried to do $ajax inside an $ajax but I failed.
10 replies
Last post Sep 08, 2019 12:35 AM by Majid Abu Rmelah
https://forums.asp.net/t/2158503.aspx?How+to+implement+reCaptcha+v3+in+ASP+NET+IIS+Sites
Heapq module in Python
In this article, we will explore the heapq module, which is part of the Python standard library. This module implements the heap queue algorithm, also known as the priority queue algorithm. But before proceeding any further, let me first explain what heaps and priority queues are.
Heaps
A heap is a data structure with two main properties:
- Complete Binary Tree
- Heap Order Property
A complete binary tree is a special type of binary tree in which all levels must be filled except for the last level. Moreover, in the last level, the elements must be filled from left to right.
Heap Order Property - Heaps come in two variants:
- Min Heap
- Max Heap
Min Heap - In the case of min heaps, for each node, the parent node must be smaller than both the child nodes. It's okay even if one or both of the child nodes do not exist. However, if they do exist, the value of the parent node must be smaller. Also note that it does not matter if the left node is greater than the right node or vice versa.
Max Heap - For max heaps, this condition is exactly reversed. For each node, the value of the parent node must be larger than both the child nodes.
Priority Queues
Heaps are concrete data types or structures (CDT), whereas priority queues are abstract data types (ADT). An abstract data type determines the interface, while a concrete data type defines the implementation.
A queue is a FIFO (First In First Out) data structure in which the element placed at first can be accessed first.
A priority queue is a special type of queue in which each element is associated with an additional property called priority and is served according to its priority, i.e., an element with high priority is served before an element with low priority. It is most commonly implemented using a heap, which supports insertion and removal in O(log n) time.
In Python's
heapq module, the smallest element is given the highest priority.
Now that we have a basic understanding of the underlying data structures, let's move on to the details of the
heapq module.
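As a quick sketch, a priority queue can be built directly on heapq by pushing (priority, item) tuples; tuples compare element-wise, so the entry with the smallest priority number is always popped first (the task names here are made up for illustration):

```python
import heapq

tasks = []
heapq.heappush(tasks, (2, "write report"))
heapq.heappush(tasks, (1, "fix outage"))
heapq.heappush(tasks, (3, "refactor"))

drained = []
while tasks:
    priority, task = heapq.heappop(tasks)  # always the smallest priority
    drained.append(task)
    print(priority, task)
```

This prints the tasks in priority order: "fix outage", then "write report", then "refactor".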
Implementation
Although heaps are generally represented as trees, they can also be implemented as arrays. This is possible because a heap is a complete binary tree. What does that mean?
Due to the "completeness" property, it is possible to know the number of nodes at each level, except for the last one. Let's visualise this with an example:
heap = [9, 17, 13, 21, 23, 15, 17]
You can see from Fig 2 and Fig 3, how a heap is implemented using a Python list.
This implementation differs from the textbook heap algorithms in two aspects:
(a) Zero-based indexing
(b) "Min heap" is used instead of "Max heap".
Mathematically,
heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2]
for all k, counting elements from zero. For the sake of comparison, non-existing elements are considered to be infinite.
You can create a heap in two different ways:
- Initialize a list to [], and the add elements to it using
heappush().
- Transform a populated list using
heapify().
Example:
>>> import heapq
>>> # Initialising an empty list and adding elements one by one
>>> heap = []
>>> heapq.heappush(heap, 5)
>>> heapq.heappush(heap, 2)
>>> heapq.heappush(heap, 9)
>>> heap
[2, 5, 9]
>>> # Using a populated list
>>> heap = [5, 2, 9]
>>> heapq.heapify(heap)
>>> heap
[2, 5, 9]
Heapq functions
Let us now explore the functions provided by this module.
>>> import heapq as hq
>>> method_list = [func for func in dir(hq)
...                if callable(getattr(hq, func)) and not func.startswith(("__", "_"))]
>>> method_list
['heapify', 'heappop', 'heappush', 'heappushpop', 'heapreplace', 'merge', 'nlargest', 'nsmallest']
- heapify()
It transforms a list into a heap. This transformation happens in place, i.e., the original list itself is modified, and it runs in linear time.
Syntax:
heapq.heapify(list)
Example:
>>> import heapq as hq
>>> heap = [5, 7, 1, 22, 9, 13, 12]
>>> hq.heapify(heap)
>>> heap
[1, 7, 5, 22, 9, 13, 12]
Although the list is not sorted, you can check that it still satisfies the min heap property.
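The invariant stated earlier can also be checked programmatically; here is a small sketch (the helper name is mine, not part of the heapq API):

```python
import heapq

def is_min_heap(heap):
    """Check heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2] for all k."""
    n = len(heap)
    return all(heap[k] <= heap[c]
               for k in range(n)
               for c in (2 * k + 1, 2 * k + 2)
               if c < n)

heap = [5, 7, 1, 22, 9, 13, 12]
heapq.heapify(heap)
print(is_min_heap(heap))  # True
```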
- heappop()
It removes and returns (pops) the smallest item from the heap, i.e., heap[0]. Moreover, it ensures that heap[0] is replaced by the next smallest element and keeps the heap invariant. If the heap is empty,
IndexError is raised.
Syntax:
heapq.heappop(heap)
Example:
>>> heap = []
>>> # Raises IndexError if empty heap is passed
>>> hq.heappop(heap)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: index out of range
>>> heap = [9, 17, 13, 21, 23, 15, 17]
>>> hq.heappop(heap)
9
>>> heap
[13, 17, 15, 21, 23, 17]
Notice, after using heappop(), how the order of the other elements changes. This happens to keep the heap invariant, i.e., to preserve the min heap property.
- heappush()
It adds an item to a heap, maintaining the heap invariant. It is suggested not to use this function for any list, but only for those lists built using heapq functions.
Syntax:
heapq.heappush(heap, item)
Example:
>>> import heapq as hq
>>> heap = []
>>> items_to_add = (2, 3, 4, 1, 10, 8, 2, 20, 15)
>>> for item in items_to_add:
...     hq.heappush(heap, item)
...
>>> heap
[1, 2, 2, 3, 10, 8, 4, 20, 15]
>>> # Add one more item to heap using heappush()
>>> hq.heappush(heap, -1)
>>> heap
[-1, 1, 2, 3, 2, 8, 4, 20, 15, 10]
- heappushpop()
It first adds the given item to the heap, then removes and returns the smallest item from the heap. This is equivalent to using
heappush() and then a separate call to
heappop(), but is more efficient than using the two operations separately.
Syntax:
heapq.heappushpop(heap, item)
Example:
>>> # Using the heap from previous example
>>> heap
[-1, 1, 2, 3, 2, 8, 4, 20, 15, 10]
>>> hq.heappushpop(heap, 0)
-1
>>> heap
[0, 1, 2, 3, 2, 8, 4, 20, 15, 10]
- heapreplace()
It removes and returns the smallest item from the heap, and also pushes the new item. The heap size doesn't change. If the heap is empty,
IndexError is raised. This is equivalent to using
heappop() and then a separate call to
heappush(), but is more efficient than using the two operations separately.
It is more appropriate to use this function when dealing with a fixed-size heap.
Syntax:
heapq.heapreplace(heap, item)
Example:
>>> # Using the heap from previous example
>>> heap
[0, 1, 2, 3, 2, 8, 4, 20, 15, 10]
>>> hq.heapreplace(heap, -2)
0
>>> heap
[-2, 1, 2, 3, 2, 8, 4, 20, 15, 10]
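As an example of the fixed-size use case mentioned above, a heap of the n largest items seen so far can be maintained with heapreplace (a sketch; the helper name is made up):

```python
import heapq

def top_n(iterable, n):
    """Keep the n largest items of a stream using a fixed-size min heap."""
    heap = []
    for item in iterable:
        if len(heap) < n:
            heapq.heappush(heap, item)
        elif item > heap[0]:
            # Evict the current smallest of the top n, push the new item
            heapq.heapreplace(heap, item)
    return sorted(heap, reverse=True)

print(top_n([5, 1, 9, 3, 7, 8], 3))  # [9, 8, 7]
```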
The next three functions of this module are general-purpose functions based on heaps.
- merge()
It merges multiple sorted inputs into a single sorted output and returns an iterator. It does not pull the data into memory all at once, and assumes that each of the input streams is already sorted.
It has two optional arguments:
- key - specifies a key function on how the comparison should be made between the elements. Default is
None.
- reverse - it is a boolean value, and if set to
True, then the input elements are merged in reverse order, i.e., from largest to smallest. Default is
False.
Syntax:
heapq.merge(*iterables, key=None, reverse=False)
One application of this function is to implement an email scheduler.
Suppose we want to send one kind of email every 10 minutes, and another every 15 minutes. Below is how we can merge these two schedules.
import datetime as dt
import heapq as hq

def email(frequency, subject):
    current = dt.datetime.now()
    while True:
        current += frequency
        yield current, subject

fast_email = email(dt.timedelta(minutes=10), "fast email")
slow_email = email(dt.timedelta(minutes=15), "slow email")

merged = hq.merge(fast_email, slow_email)
print(f"Current time: {dt.datetime.now()} \n")
for _ in range(10):
    print(next(merged))
It gives the following output:
Current time: 2020-11-17 17:26:43.456526 (datetime.datetime(2020, 11, 17, 17, 36, 43, 456526), 'fast email') (datetime.datetime(2020, 11, 17, 17, 41, 43, 456526), 'slow email') (datetime.datetime(2020, 11, 17, 17, 46, 43, 456526), 'fast email') (datetime.datetime(2020, 11, 17, 17, 56, 43, 456526), 'fast email') (datetime.datetime(2020, 11, 17, 17, 56, 43, 456526), 'slow email') (datetime.datetime(2020, 11, 17, 18, 6, 43, 456526), 'fast email') (datetime.datetime(2020, 11, 17, 18, 11, 43, 456526), 'slow email') (datetime.datetime(2020, 11, 17, 18, 16, 43, 456526), 'fast email') (datetime.datetime(2020, 11, 17, 18, 26, 43, 456526), 'fast email') (datetime.datetime(2020, 11, 17, 18, 26, 43, 456526), 'slow email')
Here, both inputs to the
merge() function are infinite iterators. Notice the current time and the scheduled time of the emails. The emails are arranged according to their timestamps.
- nlargest()
It returns a list of the n largest elements from the dataset defined by the iterable. This iterable doesn't need to satisfy the properties of a heap.
It has one optional keyword argument:
- key - specifies a key function on how the comparison should be made between the elements. Default is
None.
Syntax:
heapq.nlargest(n, iterable, key=None)
Example:
The following dataset is taken from Heptathlon (Women) Event - Rio 2016 Summer Olympics. Heptathlon is combined athletic event made up of seven events. Scoring is done for each event and person with the highest score is termed as winner.
>>> # `result` holds the event results as a newline-separated string of
>>> # "athlete name<TAB>score" records (the full data is omitted here)
>>> import heapq as hq
>>> top_3 = hq.nlargest(
...     3, result.splitlines(), key=lambda x: float(x.split()[-1])
... )
>>>
>>> print("\n".join(top_3))
Nafissatou THIAM (BEL) 6810
Jessica ENNIS HILL (GBR) 6775
Brianne THEISEN EATON (CAN) 6653
result is a string in which individual records are separated by newlines.
result.splitlines() converts this string into a list of strings, where each element consists of an athlete name and score separated by tabs. The
key function specifies that the floating-point score should be used for comparison.
- nsmallest()
This function has the same parameters as the
nlargest() function, but returns the n smallest elements.
Syntax:
heapq.nsmallest(n, iterable, key=None)
Note: The last two functions,
nlargest() and
nsmallest(), perform best for smaller values of n. For larger values, it is more efficient to use the
sorted() function. Also, when only the min or max element is needed, i.e., for n=1, using the built-in
min() or
max() functions is suggested. Also, if repeated usage of these functions is required, it is more efficient to convert the iterable into an actual heap.
Applications of heapq module
- Implementing schedulers, as shown in one of the examples above.
- Implementing priority queues.
- It also has several other uses in areas like Artificial Intelligence (AI), Machine Learning (ML), etc.
- Some optimization problems.
Exercise:
It is highly recommended to try these problems yourselves before consulting the solution.
- Implement heapsort algorithm.
Solution:
import heapq as hq

def heapsort(iterable):
    heap = []
    for item in iterable:
        hq.heappush(heap, item)
    return [hq.heappop(heap) for i in range(len(heap))]
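A quick way to sanity-check the solution is to compare it against the built-in sorted() (the function is repeated below so the snippet is self-contained):

```python
import heapq as hq

def heapsort(iterable):
    # Push everything onto a min heap, then pop in ascending order
    heap = []
    for item in iterable:
        hq.heappush(heap, item)
    return [hq.heappop(heap) for _ in range(len(heap))]

data = [5, 2, 9, 1, 7]
print(heapsort(data))  # [1, 2, 5, 7, 9]
assert heapsort(data) == sorted(data)
```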
https://iq.opengenus.org/heapq-module-python/
Bug Description
A new kind of "part" for rename format would be great:
The possibility to use any exif tag as a rename part would be really useful ( ex: filenumber for canon camera)
See https:/
Related branches
- Damon Lynch: Approve on 2011-11-11
- Diff: 63 lines (+25/-6)3 files modifiedrapid/generatename.py (+13/-6)
rapid/generatenameconfig.py (+2/-0)
rapid/metadataphoto.py (+10/-0)
I'm sorry, I don't know what you mean by "part". Is this a feature request that is separate from accessing new EXIF tags?
The problem with allowing the user to specify completely arbitrary EXIF tags at runtime is that the formats for the data varies depending on the tag.
Let me try to be more understandable.
If I look at the contents of my CF card ( from a Canon400D)
/media/
what i want as a final filename is
400D_C146_0101.CR2
( C is an arbitrary letter I'd manually change when I reset the counter in my 400D. )
the $filenumber of exiftool gives me 146-0101.CR2 which is close to what I need
see the DCIM spec ( http://
for more details about the structure of the DCIM layout.
The advantage of the $filenumber exif field is that it does not depend on the parent directory but only uses info already in the picture
Sorry for the noise but english is not my mother tongue
I would like to add this metadata value, but unfortunately exiv2 has a bug that results in a value of 0:
http://
Hello,
Looks like things aren't moving that fast on the exiv2 bug so I looked into the exiftool code and found out how the file number is encoded: in brain dead way: ( at least on my canon 400D)
in file : http://
look for string "# File number information (MakerNotes tag 0x93)"
I hacked rapid-photo-
Here is what I did:
in file metadataphoto.py: Here is my version of the shutter_count method ( bit shuffling to come with a 7 figures number:)
TODO: guard those changes with a check on the camera model number ( see exiftool code) ( maybe those shuffling should be in exvi2)
def shutter_count(self, missing=''):
try:
keys = self.exif_keys
if 'Exif.Nikon3.
v = self['Exif.
elif 'Exif.CanonFi.
v = self['Exif.
v = int(v)
d = (v & 0xffc00) >> 10
d += 0x40
v = d*10000 + ((v & 0x3ff ) << 4 ) + ((v >> 20) & 0x0f);
elif 'Exif.Canon.
v = self['Exif.
elif 'Exif.Canon.
v = self['Exif.
else:
return str(v)
except:
return missing
in file generatenamecon
TODO: create values specific to this use case: DCIM_DIRECTORY_
LIST_SHUTTER_
in file generatename.py in method _get_metadata_
elif self.L1 == SHUTTER_COUNT:
v = self.rpd_
if self.L2 in [ SUBFOLDER_
if self.L2 == SUBFOLDER_
if self.L2 == STORED_SEQ_NUMBER :
else:
if v:
elif self.L1 == OWNER_NAME:
v = self.rpd_
Those patches work for my Canon 400D ( and also for a old Canon A85 compact camera).
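For reference, the bit shuffling described above can be written as a standalone helper (the function name is mine; the layout is the 400D-specific one quoted from the exiftool source, so it will not be correct for other models):

```python
def decode_canon_file_number(v):
    """Decode a Canon 400D raw FileNumber value into a 7-figure number.

    Per the bit layout above: bits 10-19 hold the directory number
    (offset by 0x40); the low 10 bits and bits 20-23 together hold
    the file number.
    """
    v = int(v)
    d = ((v & 0xFFC00) >> 10) + 0x40          # directory number
    return d * 10000 + ((v & 0x3FF) << 4) + ((v >> 20) & 0x0F)
```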
If these changes are more or less in line with your "roadmap", I can put a real patch together.
I don't have a lot of DSLR so it's difficult for me to do regression tests ...
With those changes I can configure rapid-photo-
Hi Etienne,
Thanks for taking the time to work on this. It's the best way to get things done, as I'm currently extremely busy working on my doctoral studies.
Rather than submitting a patch, why don't you use bzr and submit your changes using launchpad. That way, you will be an official contributor to the code. That is a very good outcome for the project and for you!
Thanks again,
Damon
Hello Damon,
I'd be glad to contribute !
I'm gonna learn how to register a branch in launchpad and commit my changes.
( I'll try to make them as robust as possible).
BTW, Do you use a python IDE for developing rapid ?
Regards
Etienne
That's great, thanks. Personally I use Geany to code in python.
I revised the code you submitted to make it fit with how the rest of the code works.
There is an error in this function, which must be fixed:
http://
The generated value is not what I expect. This is what exiftool gives for a sample Canon 5D Mk II RAW file:
$ exiftool -f -FileNumber _MG_1929.CR2
File Number : 100-1929
but the code gives this incorrect value: 1280000
I don't understand Perl, so if you could fix the bug that would helpful, thanks.
Hello, Damon,
My code was guarded with a test for a specific camera model ( 400D) because the decoding is DSLR model dependent.
Unfortunately, I'm not rich enough ;-) to afford a 5D MkII...
Would you be so kind as to send me (or attach to this bug) a sample image, and I'll try to figure out how to do it on 5D MkII images.
(The "ShutterCount" exif tag might work, but I have to test it and dig a bit further into the exiftool source code...)
Regards
Etienne
I didn't realize it was model specific. I assumed it wouldn't be. You can
find some samples 5D Mk II RAW files here:
http://
If it becomes too difficult, one solution that will work (albeit slowly) is
to call exiftool and get its results. I'm going to have to do that for
another feature request, which is to read metadata from MTS video files,
bug #695517. So implementing that will in itself add a dependency on
exiftool.
Hello,
Dealing with the maker notes is a big maintenance burden... so I think calling exiftool is a lot less work.
If you want me to reimplement the feature based on exiftool, please tell me when you have a branch ready with your other feature request. I'll add it in a branch based on that.
Would you be kind enough to give me some advice on how to implement it so it is "compliant" with the rest of the code? That would make the merging easier for you.
Kind regards
Etienne
Hi Etienne,
The primary problem with the merged code last time was that it didn't match the design of the file renaming preferences. Things like stored values have very specific meanings. There are three levels of renaming: category, value, and formatting. A stored value is in the category sequence, and its value is a counter that increments and can be set in the user preferences, and its formatting is the number of digits it should take up.
I have already done the work to get exiftool working with MTS videos, but I'm not finished. The next step is to have exiftool be the default fallback option for all videos, as for some reason Fedora does not package hachoir metadata or kaa metadata. That means Fedora users currently cannot download videos using Rapid Photo Downloader. Using exiftool, they should be able to. I need to better integrate the exiftool code I've written. When I've done that, you can add the function to call the exiftool code for the file number. It should be really easy.
Best,
Damon
Hi Etienne, I just went ahead and implemented the solution using ExifTool because it was easier to write the code rather than describe how it ought to be done! You can see the code in trunk.
I spotted the source file where the actual renaming takes place, but I'm still looking for the location of the code responsible for handling the user interface and saving into the preferences... I'd be glad to prepare a patch if developers agree and point me in the right direction....
|
https://bugs.launchpad.net/rapid/+bug/754531
|
CC-MAIN-2017-26
|
refinedweb
| 1,270
| 71.24
|
Originally published by Chidume Nnamdi at
Since the advent of Nodejs, command-line tools have been built using the JavaScript programming language, the most popular and widely used language in the world. There are many popular CLI tools built using JS:
These are popular CLI tools we have on npmjs.org. NPM is the world's most popular Node package manager. If you are reading this, you must have worked with Nodejs, and there is no way you haven't used the popular npm command with its helluva pile of sub-commands:
You see, Nodejs made it possible for us to write command-line tools in our fav language: JS. Without Nodejs, we would have to write a command-line tool in C/C++ and compile it to an executable. We know the popular commands we use in cmd/bash:
These are executable files (.exe, .app; on Linux systems there is no file extension) in our system path, written in C/C++ and compiled to executable format. If we run the ls command to list files and folders in our current directory:
$ ls
'autorun.inf/'  'System Information/'

The ls file is forked and run, and the results it returns are displayed in cmd/bash/sh.
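On Unix-like shells you can see where such a command actually lives with which; a quick check:

```shell
# The shell resolves `ls` to an executable file somewhere on $PATH,
# e.g. /bin/ls or /usr/bin/ls depending on the system.
which ls
```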
We can create our own program and run it on the command line:
// hello.cpp
#include <iostream>

int main() {
    std::cout << "Hello world";
    return 0;
}
On compilation, we will have the executable file hello.exe. On execution in the cmd-line, it will display "Hello world" in the cmd:
$ hello
Hello world
You see, this is a CLI tool written in C/C++, but with Nodejs we can write the same in JavaScript. Can you imagine that? A language we knew from the beginning of our programming journey as a browser-based language can now be used to create CLI utilities that previously had to be written in C/C++.
So, in this post, we will see how to write a CLI tool in JS.
First, you need to download the Node.js program and install it. Open your browser, navigate to the Node.js website, and on the main page you can download the version of Node.js that corresponds to your platform.
Next, we have to initialize a Node project:
mkdir node-cli
cd node-cli
npm init -y
This app will mimic our hello.cpp program: it will just output "Hello world" in the terminal.
We create an index.js file:
touch index.js
Open it and pour in the below contents:
// index.js
console.log("Hello world");
If we run node . in our terminal, we will see the contents displayed:

$ node .
Hello world

We added . to the node command because the entry point is the index.js file, which is the default entry file for any Node project. If we create a file like hello.js, to run it we must specify its name in the node command: node hello or node hello.js.
I know, a Hello world program isn't ideal to demonstrate a concept, but this isn't about the example; it is about how to make a shell-command-line program in Nodejs.
Our program is called with the node command; we want it to be called like a regular shell command, the way the Angular CLI is called: ng.
It is very easy to achieve. First we add #!/usr/bin/env node at the top of the index.js file:

#!/usr/bin/env node
// index.js
console.log("Hello world");
This makes it possible to execute this project without the node command: the #!/usr/bin/env node line (the shebang) tells the shell which interpreter to hand the file to.
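To see the shebang at work, here is a minimal sketch in a Unix-like shell (the file name demo.js is illustrative; the run step is guarded in case node is not installed):

```shell
# Create a tiny Node script with a shebang as its first line.
printf '#!/usr/bin/env node\nconsole.log("Hello world");\n' > demo.js

# Mark it executable so the shell can run it directly.
chmod +x demo.js

# Now it runs without typing `node` first (if node is on the PATH).
command -v node >/dev/null && ./demo.js
```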
Next, we need to update our package.json file:
// package.json
{
  "name": "node-cli",
  // …
  "bin": "./index.js"
  // …
}
See, we added a bin property with its value pointed at the index.js file. The bin prop specifies command names for a Node project; since we have a single file, the name property will be used as the shell command name.
At this point our app is not globally available: we cannot run node-cli in our shell to run the project. Also, we don't want the CLI name to be node-cli; we want it to be hello, so we can run it like this: hello. To do that we will change the name property in the package.json to hello.
// package.json
{
  "name": "hello",
  // …
  "bin": "./index.js"
  // …
}
To make our project available globally, we create a symlink using the npm link command. We run:

npm link
# or
yarn link

in our project. This command creates a symbolic link between the project directory and the executable command.
On Windows, you will see hello.cmd created in your npm global folder (AppData/Roaming/npm); hello.cmd is the file that will be run when we type hello in cmd/bash/sh in any directory.
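Conceptually, what npm link sets up on Unix-like systems looks like the following simplified sketch (real npm also manages the global prefix and the Windows .cmd shims; all file and directory names here are illustrative):

```shell
# A fake "project" with one executable script.
mkdir -p demo-proj demo-bin
printf '#!/bin/sh\necho "Hello world"\n' > demo-proj/cli.sh
chmod +x demo-proj/cli.sh

# A symlink in a directory on $PATH points back into the project,
# which is essentially what `npm link` does with the global bin dir.
ln -s "$(pwd)/demo-proj/cli.sh" demo-bin/hello

# With demo-bin on the PATH, `hello` now works from anywhere.
PATH="$(pwd)/demo-bin:$PATH" hello
```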
To test it, make sure you are not in the project directory. Open cmd/bash/sh and type hello:

$ hello
Hello world
Boom!!! our Node app works globally in our machine!
For users to be able to install our Node project and use it globally on their machine, we have to push it to NPM.
Before pushing our project, we need to ignore some files or folders. node_modules is the main folder that is usually ignored, because the dependencies in node_modules will be installed when users pull in our project. Since we have not installed any dependencies, we have no node_modules folder yet; but suppose we did. Ignoring it is easy: just create a .npmignore file and add the node_modules folder to it.
// .npmignore
node_modules/
We can then push our project to NPM:
npm publish
yarn publish

+ hello@0.0.1
Users can install our hello project globally:
npm i hello -g
The user can run the hello command in the shell like this:

$ hello
Hello world
A Node project can have many commands. Our hello project has one shell command, hello; we can add many more to it.
We can add commands like:

sayName: says my name "Nnamdi Chidume" and my details like age and career.
tip: gives a tip of the day.
today: prints today's date.
To do that we will edit our package.json:
// package.json
{
  "name": "hello",
  // …
  "bin": "./index.js"
  // …
}

becomes:

// package.json
{
  "name": "hello",
  // …
  "bin": {
    "hello": "./index.js",
    "sayName": "./sayName.js",
    "tip": "./tip.js",
    "today": "./today.js"
  }
  // …
}
The bin property is now an object that maps each command to its corresponding file.
Now, let’s create the files:
touch sayName.js
touch tip.js
touch today.js
Then, we flesh them out:

#!/usr/bin/env node
// sayName.js
console.log("I'm Chidume Nnamdi\n")
console.log("I am a Software Developer")

#!/usr/bin/env node
// tip.js
console.log("Never give up")
console.log("There is light at the end of the tunnel")

#!/usr/bin/env node
// today.js
console.log("Today's date is " + new Date().toLocaleString())
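A side note on the date line: Date.now() returns a plain number (milliseconds since the epoch), so calling toLocaleString() on it formats a number rather than a date; new Date() is what gives a Date object with readable formatting. A quick check:

```javascript
// Date.now() yields a numeric timestamp...
console.log(typeof Date.now()); // "number"

// ...while new Date() yields a Date object whose
// toLocaleString() produces a human-readable date string.
const formatted = new Date().toLocaleString();
console.log(typeof formatted); // "string"
```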
Now our node-cli project folder would look like this:
node-cli
index.js
sayName.js
tip.js
today.js
package.json
.npmignore
Run npm link again to create symlinks for our new commands. You can also publish the updates to NPM to make the new commands available.
Once that is done, our users can use our commands as shell commands:
$ hello
Hello world
$ sayName
I’m Chidume Nnamdi
I am a Software Developer
$ tip
Never give up
There is light at the end of the tunnel
$ today
Today’s date is Fri 02/07/2019
So we have demonstrated how to create a command-line tool in JS using Nodejs. It is very easy: just a couple of tweaks and you have your CLI.
I know I didn’t demonstrate with a real-world example, I only explained and demoed how to configure to get your CLI. With the knowledge, create your real-world example and make it a CLI with the tweaks you learned here.
If you have any questions regarding this, or anything I should add, correct or remove, feel free to comment, email or DM me.
Thanks !!!
|
https://morioh.com/p/2715e7025cf0
|
CC-MAIN-2021-39
|
refinedweb
| 1,414
| 75.1
|
3.7 Nested Resources
Let's say you want to perform operations on bids: create, edit, and so forth. You know that every bid is associated with a particular auction. That means that whenever you do anything to a bid, you're really doing something to an auction/bid pair—or, to look at it another way, an auction/bid nest. Bids are at the bottom of a drill-down hierarchical structure that always passes through an auction.
What you're aiming for here is a URL that looks like
/auctions/3/bids/5
What it does depends on the HTTP verb it comes with, of course. But the semantics of the URL itself are: the resource that can be identified as bid 5, belonging to auction 3.
Why not just go for bids/5 and skip the auction? For a couple of reasons. First, the URL is more informative—longer, it's true, but longer in the service of telling you something about the resource. Second, thanks to the way RESTful routes are engineered in Rails, this kind of URL gives you immediate access to the auction id, via params[:auction_id].
To create nested resource routes, put this in routes.rb:
resources :auctions do
  resources :bids
end
What that tells the mapper is that you want RESTful routes for auction resources; that is, you want auctions_url, edit_auction_url, and all the rest of it. You also want RESTful routes for bids: auction_bids_url, new_auction_bid_url, and so forth.
However, the nested resource command also involves you in making a promise. You're promising that whenever you use the bid named route helpers, you will provide an auction resource in which they can be nested. In your application code, that translates into an argument to the named route method:
link_to "See all bids", auction_bids_path(auction)
When you make that call, you enable the routing system to add the /auctions/3 part before the /bids part. And, on the receiving end—in this case, in the action bids/index, which is where that URL points—you'll find the id of auction in params[:auction_id]. (It's a plural RESTful route, using GET. See Table 3.1 again if you forgot.)
You can nest to any depth. Each level of nesting adds one to the number of arguments you have to supply to the nested routes. This means that for the singular routes (show, edit, destroy), you need at least two arguments:
link_to "Delete this bid", auction_bid_path(auction, bid), :method => :delete
This will enable the routing system to get the information it needs (essentially auction.id and bid.id) in order to generate the route.
If you prefer, you can also make the same call using hash-style method arguments, but most people don't because it's longer code:
auction_bid_path(:auction => auction, :bid => bid)
3.7.1 RESTful Controller Mappings
Something we haven't yet explicitly discussed is how RESTful routes are mapped to a given controller. It was just presented as something that happens automatically, which in fact it does, based on the name of the resource.
Going back to our recurring example, given the following nested route:
resources :auctions do
  resources :bids
end
there are two controllers that come into play, the AuctionsController and the BidsController.
3.7.2 Considerations
Is nesting worth it? For single routes, a nested route usually doesn't tell you anything you wouldn't be able to figure out anyway. After all, a bid belongs to an auction.
That means you can access bid.auction_id just as easily as you can params[:auction_id], assuming you have a bid object already.
Furthermore, the bid object doesn't depend on the nesting. You'll get params[:id] set to 5, and you can dig that record out of the database directly. You don't need to know what auction it belongs to.
Bid.find(params[:id])
A common rationale for judicious use of nested resources, and the one most often issued by David, is the ease with which you can enforce permissions and context-based constraints. Typically, a nested resource should only be accessible in the context of its parent resource, and it's really easy to enforce that in your code based on the way that you load the nested resource using the parent's Active Record association.
auction = Auction.find(params[:auction_id])
bid = auction.bids.find(params[:id]) # prevents auction/bid mismatch
If you want to add a bid to an auction, your nested resource URL would be
The auction is identified in the URL rather than having to clutter your new bid form data with hidden fields or resorting to non-RESTful practices.
3.7.3 Deep Nesting?
Jamis Buck is a very influential figure in the Rails community, almost as much as David himself. In February 2007, via his blog,2 he basically told us that deep nesting was a bad thing, and proposed the following rule of thumb: Resources should never be nested more than one level deep.
That advice is based on experience and concerns about practicality. The helper methods for routes nested more than two levels deep become long and unwieldy. It's easy to make mistakes with them and hard to figure out what's wrong when they don't work as expected.
Assume that in our application example, bids have multiple comments. We could nest comments under bids in the routing like this:
resources :auctions do
  resources :bids do
    resources :comments
  end
end
Instead, Jamis would have us do the following:
resources :auctions do
  resources :bids
end
resources :bids do
  resources :comments
end
resources :comments
Notice that each resource (except auctions) is defined twice, once in the top-level namespace and once in its context. The rationale? When it comes to parent-child scope, you really only need two levels to work with. The resulting URLs are shorter and the helper methods are easier to work with.
auctions_path          # /auctions
auction_path(1)        # /auctions/1
auction_bids_path(1)   # /auctions/1/bids
bid_path(2)            # /bids/2
bid_comments_path(3)   # /bids/3/comments
comment_path(4)        # /comments/4
I personally don't follow Jamis's guideline all the time in my projects, but I have noticed that limiting the depth of your nested resources helps with the maintainability of your codebase in the long run.
3.7.4 Shallow Routes
As of Rails 2.3 resource routes accept a :shallow option that helps to shorten URLs where possible. The goal is to leave off parent collection URL segments where they are not needed. The end result is that the only nested routes generated are for the :index, :create, and :new actions. The rest are kept in their own shallow URL context.
It's easier to illustrate than to explain, so let's define a nested set of resources and set :shallow to true:
resources :auctions, :shallow => true do
  resources :bids do
    resources :comments
  end
end
alternatively coded as follows (if you're block-happy):
resources :auctions do
  shallow do
    resources :bids do
      resources :comments
    end
  end
end
The resulting routes are:
       auctions GET    /auctions(.:format)
                POST   /auctions(.:format)
    new_auction GET    /auctions/new(.:format)
        auction GET    /auctions/:id(.:format)
                PUT    /auctions/:id(.:format)
                DELETE /auctions/:id(.:format)
   edit_auction GET    /auctions/:id/edit(.:format)
   auction_bids GET    /auctions/:auction_id/bids(.:format)
                POST   /auctions/:auction_id/bids(.:format)
new_auction_bid GET    /auctions/:auction_id/bids/new(.:format)
   bid_comments GET    /bids/:bid_id/comments(.:format)
                POST   /bids/:bid_id/comments(.:format)
new_bid_comment GET    /bids/:bid_id/comments/new(.:format)
        comment GET    /comments/:id(.:format)
                PUT    /comments/:id(.:format)
                DELETE /comments/:id(.:format)
   edit_comment GET    /comments/:id/edit(.:format)
If you analyze the routes generated carefully, you'll notice that the nested parts of the URL are only included when they are needed to determine what data to display.
|
http://www.informit.com/articles/article.aspx?p=1671632&seqNum=7
|
CC-MAIN-2015-22
|
refinedweb
| 1,290
| 64.3
|
14 April 2011 07:28 [Source: ICIS news]
By Ross Yeo and Felicia Loo
LONDON/SINGAPORE (ICIS)--European spot prices of methyl tertiary butyl ether (MTBE) surged to a record high this week, on the back of robust global oil futures, tight supply and firm demand, triggering a price rally in
Domestic prices in
In Europe, MTBE prices soared to $1,438-1,460/tonne (€992-1,007/tonne) FOB (free on board).
Global Brent crude futures are hovering near $123/bbl on Thursday amid a civil war in OPEC member
Even the leading
“Supplies are drained in
MTBE CFR (cost & freight) China prices at $1,170-1,250/tonne, already lifted following
“There are hardly any [spot] imports and it’s a difficult situation,” another trader said.
At this juncture, MTBE plant maintenance was scarce in.
Signs of slowing car growth rate to single-digit levels could temper demand, but the impact might not be apparent until a few months down the road, market players said.
Passenger car sales reached 1.35m units in March and 3.84m units in the first quarter, up 6.52% and 9.07% year on year respectively, CAAM data showed.
The exit of stimulation policies, rising fuel prices, restrictions on car buying in first-tier cities such as Beijing and Shanghai, stricter emission rules and the 11 March earthquake and tsunami in northeast Japan slowed down China’s car market expansion in both March and the first quarter, the association said.
Meanwhile, in
In addition to the usual seasonal increase in blending demand during summer, sources identified two unanticipated factors which were contributing to the supply imbalance.
The first was the arbitrage westwards across the Atlantic, which has resulted in high levels of exports and is supported by strong demand from Venezuela, as well as a scheduled turnaround at one of LyondellBasell’s two Channelview, Texas units, removing around 12,000 bbl/day for up to two months from April.
The second factor is the strong MTBE consumption by the German market which followed the unpopular introduction of E10 gasoline (10% ethanol) in February this year.
The market had previously expected E10, which replaced regular RON95 gasoline blends at German pumps, to destroy MTBE demand in favour of ethanol.
Yet, the poor public reception of the high-ethanol blend, largely borne out of concerns over potential vehicle damage, as well as the lower energy content of ethanol, has led to a boost in demand for the only other blend of gasoline usually available – ‘super’ RON98, which contains high levels of ether such as MTBE or ethyl tertiary butyl ether (ETBE).
“I think everyone will have taken the high summer demand into account, but it’s these other two factors which no one expected and are really tightening the market,” said a producer.
Despite the extraordinarily high prices, no significant increase in imports has yet been seen and sources said it was unclear whether or not the high European prices would attract Middle Eastern volumes.
While prompt values are high, the paper market is steeply backwardated, increasing the risk that prices will have decreased by the time shipments arrive, thus exposing importers to significant potential losses. This risk is likely to keep a limit on the volumes imported until the pricing curve levels out somewhat, sources said.
“People talk about [increased imports from the
Further increasing the risks of importing are the high crude oil prices caused by the turmoil in North Africa and the
Although European gasoline pricing has been comparatively stable in the last week or so, levels are still high and contribute significantly to the price of MTBE. Should the political situation in the region stabilise, particularly in
“Of course everything is related back to crude, so the whole geopolitical scene will affect a lot of products, MTBE included,” said a second producer.
($1 = €0.69 / $1 = CNY6.53)
|
http://www.icis.com/Articles/2011/04/14/9452426/record-europe-mtbe-triggers-asia-rally-china-buyers-struggle.html
|
CC-MAIN-2015-14
|
refinedweb
| 650
| 53.75
|
It looks like you are visiting us from . Would you like to be taken to our local website?
Questions?
+1 802 861 2300
info@onlogic.com
Ordering
OnLogic offers consumer and corporate customers, as well as government and educational institutions, easy purchasing through our online store. Online we accept: VISA, Master Card, American Express, Discover Card and PayPal. Email us at info@onlogic.com or call us at +1 802 861 2300.
Shipping

OnLogic does not process or ship orders on weekends or national holidays. Title passes from OnLogic to customer upon OnLogic's delivery of the Products to the carrier.
International Customers
Orders shipped internationally are subject to customs duties, local taxes, and/or brokerage fees. DHL is the only shipping method available to countries other than the U.S. and Canada. The fees are explained below:
- Customs Duties – A tax on the import of goods
- Local Taxes – A tax on the import of goods by the local state or province
- Brokerage Fees – Fees paid to a broker for processing a shipment across the border. Brokerage fees may be included in the shipping costs depending on the shipping method you choose. Any taxes/duties will be billed to the receiver. Shipping costs do not include the cost of any taxes or duties.
Note: Due to the cost of administering returns, all non-US sales are final. For warranty returns, International customers must bear the shipping costs to and from OnLogic and any duties, taxes, and/or brokerage fees associated with the return (RMA).
Canadian Customers
UPS will charge Can may take longer.
Canadian Shipping Estimate
- UPS Express — next day
- UPS Expedited — 2–3 days
- UPS Standard — up to 7 days
DHL and USPS estimates require customers to create an account with OnLogic and be logged in order to assure estimate accuracy. Delivery times are not guaranteed, and we do not split orders if items are not available. Please check to confirm everything in your order is correct and working within 7 days of receipt. For information see our Warranty & Returns policy.
Free Shipping
OnLogic offers Free Economy Shipping within the United States on all online orders of $250 or more..
Military/APO Orders
OnLogic offers shipping to APO/FPO addresses by U.S. Mail (USPS). This option is available when you enter your APO/FPO address as the shipping address during checkout.
|
https://www.onlogic.com/company/support/ordering-and-shipping/
|
CC-MAIN-2020-16
|
refinedweb
| 387
| 55.74
|
Q: Why do I need a null test before I invoke a delegate?
A: If you have an event in a class, you need to add a null test before you call the delegate. Typically, you would write:
if (Click != null) Click(arg1, arg2);
There is actually a possible race condition here – the event can be cleared between the first and second line. You would actually want to write:
ClickHandler handler = Click;
if (handler != null) handler(arg1, arg2);
usually..
[Author: Eric Gunnerson]
Not finished?
I think the confusing part is that with events, you can have one or three or zero handlers, but it seems that someone goes through a lot of extra trouble to make the zero case behave like null. You would not, for example, expect a zero-length array to behave like null.
And can we really depend on the copying (it is by value, right?) to be atomic?
The assignment is simply a copy of the reference to the delegate instance, not a by-value copy. It caches a reference in case another thread (or piece of re-entrant code) nulls out the reference before it invokes it.
Yes, I believe you can count on the assignment operation being atomic. I'm missing half the article, but can only assume that you are talking about doing something like this:

MyDelegate myTempHandler = myHandler;
if (myTempHandler != null) {
    myTempHandler(arg1, arg2);
}
Copying the event before testing and invoking isn’t always good enough, in the multithreaded case. Sometimes you want a client to unregister its event handler, and to be sure that the event handler isn’t called on the source line after the "unregister" call. (Perhaps the client will kill some resource used by the event handler, for instance.) That won’t work with copying before testing and invoking.
If you change the behaviour of the event handler to never be null, how would it break existing code? Old clients would keep their null tests, new client don’t have to add them.
I agree with Luke. It’s strange that you can do a += operation on a null, then it’s not null anymore, and if you do a -= enough times, then you suddenly have a null.
Why don’t you just treat the event like the collection of listeners it is???
Why can’t there be multicast delegates with an empty invocation list?
If multicast delegates with empty invocation lists existed, we could initialize up our member variables with them. Then, since we could prove that the member variables were never null, we wouldn’t have to test for it.
For example:
public class C
{
// Won’t compile, but I sure wish it would.
public EventHandler click = new EventHandler();
public void FireClick()
{
click(this, null);
}
}
Ray, you could do something similar with anonymous delegates. Just initially add an empty method inline as a handler at creation and never remove it. I wouldn’t personally use it in my own code due to the fact that it would be inefficient at runtime since you’d always be invoking the empty method for no reason, but it would make the code everywhere else cleaner and be a nifty trick 😉
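To make Kael's suggestion concrete, here is a sketch of the empty-handler initialization, reusing Ray's class C (C# 2.0 anonymous-delegate syntax; class and method names are from the example above, not a real API):

```csharp
using System;

public class C
{
    // Initializing with an empty anonymous delegate means the event
    // delegate is never null, so FireClick needs no null test
    // (and there is no null-vs-invoke race to worry about).
    public event EventHandler Click = delegate { };

    public void FireClick()
    {
        // Safe to invoke directly: at worst, only the empty
        // delegate runs, at the cost Kael mentions.
        Click(this, EventArgs.Empty);
    }
}
```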
I was all excited about this new idea I just had, but it turns out that Kael beat me by 2 months.
I’m not concerned about the perf hit, until I get some perf data back that says I should be.
|
https://blogs.msdn.microsoft.com/csharpfaq/2004/03/19/why-do-i-need-a-null-test-before-i-invoke-a-delegate/
|
CC-MAIN-2017-30
|
refinedweb
| 580
| 71.24
|