I had my music set up in folder structures (e.g. Rock –> Classic Rock, etc.) on my machine based on my liking, but could not find an easy way to create Zune playlists based on folder structure, so I wrote a quick tool which would do this… I made it available to be downloaded for free at, you might find it interesting…
can you post the source code for this project?
Hi, Dan,
Yes, I will do that tonight when I get home. (I’ve actually changed it slightly to make it a little more efficient.)
–Matt–*
Thanks that will be really cool!
Done!
–Matt–*
How do you add songs to the playlist? I tried drag and drop; other than that I don't know what to do. I browsed the code and I don't see anything that could be used for handling adding songs.
Hi, Dan,
Sorry for any confusion — "could" in my blog posts means "if you take the concepts you’ve learned in these blogs, you could extend it to do even more." Mostly, with these blog posts, I’m trying to introduce the various concepts (or SDKs on occasion) by applying them to a particular problem I’m trying to solve at home. In my case, I generally add songs via WMP (since I need to do this anyway for my home stereo system), and then use this program to rearrange them in an order that’s not offensive to my ears when playing the songs back on the Zune or on Sync. So, that’s as much as I do with this program.
That being said, almost every concept you need to add arbitrary songs to a playlist is in this program and in the three blogs that relate to it — opening a new or existing playlist and adding media items to it — and I discuss those a bit in the other two blogs (links to them are in the above article) which had the original code. The one missing element is simply the creation of the media object representing the file (i.e., explicitly associating it with the file), but that’s just another method in the class namespace, which is (I believe) associated with either the playlist or library object.
Hope this clears things up!
–Matt–*
I just made a playlist in WMP and then renamed it to .zpl, so that worked fine. I could have added my own code to make it work like that, but then I remembered that I am lazy, so I went and did it in WMP. But thank you for the source code. I am sure I can find some use for it eventually.
Yep, the real point of this post was just to provide an update to the earlier two posts, which dealt with customizing playlists, so that you could copy them to Zune playlists at the same time that you reordered the lists. If you don’t care about playlist ordering, then simple renaming or copying is the way to go.
–Matt–*
I downloaded some old-time radio shows from the net. When they downloaded, they were WPL files, so I right-clicked each one and said "Open With" Zune. That apparently changed them to MP3s, because they downloaded to my Documents folder with file extensions of .mp3. So here’s my problem: when I click on the file, it opens with Zune and plays normally in the "Now Playing" area of the Zune software. When I go to save it as a new playlist, it saves it and the title is there, but when I try to sync it to the Zune, it says the file is empty (no megabytes). Can you offer any help on how to get these files to sync with my Zune? I’ve already checked my Zune settings and it should accept an MP3 file… I don’t understand… please help!
Hi, Alicia,
I’d like to help, but I’m not quite understanding the question. WPL files are playlist files, which simply contain references to media files — there’s no chance of a WPL being converted to an MP3 file itself. I just tried this myself, and when I do an "Open with Zune" on a WPL file, all that happens is that Zune launches, but it does nothing — it doesn’t actually know what to do with a WPL file. So, I’m wondering if you had some typos in your question?
Here’s what I would have expected — I would download a set of MP3 files (or maybe WMA files), which was perhaps accompanied by a WPL playlist. I’d change the WPL extension to a ZPL extension. I’d launch the Zune software and make sure that it detected the new files (some people have their Zune software set up for manual detection rather than automatic, for example). Then I’d sync my device — both the MP3’s and the ZPL file would get pushed to the device.
Let me know if you have more info. Given that, I may or may not be able to answer (I’m not on the Zune team, and I don’t know much more than anyone else about it.)
–Matt–*
P.S. I like old time radio shows, too. I’m halfway through a volume of "Fibber McGee and Molly" shows that my wife got me for Christmas — they’re quite fun!
Source: https://blogs.msdn.microsoft.com/vbteam/2008/09/20/building-a-zune-playlist-matt-gertz/
In WPF, you bind to objects returned by web service method calls the same way you bind to any other objects. To demonstrate, we’ll walk through a simple application that consumes the MSDN/TechNet Publishing System (MTPS) Content Service, discussed here. Our application implements a very simple scenario that retrieves the list of languages supported by a given document.
Create a Reference to the Web Service
The first step is to create a reference to the MTPS web service. To create a web reference using Visual Studio:
Call the Web Service and Set the DataContext
We are now ready to call the GetContent web service method and bind to the returned object. All we need to do is call the web service and set the DataContext property of the appropriate window or control to the object returned by the web service method. In this app, we provide a specific contentIdentifier for our simple request. (If you are interested in creating other requests, check out the MTPS Web Services document.)
The following code shows you how to do that:
// 1. Include the web service namespace
using BindtoContentService.com.microsoft.msdn.services;
. . .
// 2. Set up the request object
// To use the MSTP web service, we need to configure and send a request
// In this example, we create a simple request using the ID of the XmlReader.Read method page
getContentRequest request = new getContentRequest();
request.contentIdentifier = "abhtw0f1";
// 3. Create the proxy
ContentService proxy = new ContentService();
// 4. Call the web service method and set the DataContext of the Window
// (GetContent returns an object of type getContentResponse)
this.DataContext = proxy.GetContent(request);
Create the Bindings
Now that the DataContext has been set, we can create the bindings. In the following XAML, we bind the TextBlock to the contentGuid. We want the ItemsControl to show the list of available languages of the requested content, and so we have the ItemsControl bind to and display the locale values of availableVersionsAndLocales. Take a look at this document to see the structure of getContentResponse.
<TextBlock Text="Content Guid:"/>
<TextBlock Text="{Binding Path=contentGuid}" />
<TextBlock Text="Available Locales:"/>
<ItemsControl ItemsSource="{Binding Path=availableVersionsAndLocales}"
DisplayMemberPath="locale"/>
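For context, these bindings might sit in a simple Window layout like the following sketch (the class name and element arrangement are assumptions, not the article's exact markup; the DataContext is set in code-behind as shown earlier):

```xml
<Window x:Class="BindtoContentService.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MTPS Content Languages">
  <!-- DataContext is the getContentResponse object returned by the service -->
  <StackPanel Margin="10">
    <TextBlock Text="Content Guid:"/>
    <TextBlock Text="{Binding Path=contentGuid}" />
    <TextBlock Text="Available Locales:"/>
    <ItemsControl ItemsSource="{Binding Path=availableVersionsAndLocales}"
                  DisplayMemberPath="locale"/>
  </StackPanel>
</Window>
```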
Here’s a screenshot of the app, which you can find in the attachment section of this post:
Enjoy!
Source: http://blogs.msdn.com/wpfsdk/archive/2007/05/16/binding-to-web-services.aspx
#include <stdio.h>
#include <stdarg.h>
int vprintf(const char *format, va_list ap);
int vfprintf(FILE *stream, const char *format, va_list ap);
int vsprintf(char *s, const char *format, va_list ap);
int vsnprintf(char *s, size_t n, const char *format, va_list ap);
int vasprintf(char **ret, const char *format, va_list ap);
The vprintf(), vfprintf(), vsprintf(), vsnprintf(), and vasprintf() functions are the same as printf(), fprintf(), sprintf(), snprintf(), and asprintf(), respectively, except that instead of being called with a variable number of arguments, they are called with an argument list as defined in the stdarg.h header. See printf(3C).
The stdarg.h header defines the type va_list and a set of macros for advancing through a list of arguments whose number and types may vary. The argument ap to the vprintf family of functions is of type va_list. This argument is used with the <stdarg.h> header file macros va_start(), va_arg(), and va_end() (see stdarg(3EXT)). The EXAMPLES section below demonstrates the use of va_start() and va_end() with vprintf().
The macro va_alist() is used as the parameter list in a function definition, as in the function called error() in the example below. The macro va_start(ap, name), where ap is of type va_list and name is the rightmost parameter (just before . . .), must be called before any attempt to traverse and access unnamed arguments is made. The va_end(ap) macro must be invoked when all desired arguments have been accessed. The argument list in ap can be traversed again if va_start() is called again after va_end(). In the example below, the error() arguments (arg1, arg2, …) are passed to vfprintf() in the argument ap.
Refer to printf(3C).
The vprintf() and vfprintf() functions will fail if either the stream is unbuffered or the stream's buffer needed to be flushed, and:
EFBIG
The file is a regular file and an attempt was made to write at or beyond the offset maximum.
The following demonstrates how vfprintf() could be used to write an error routine:
#include <stdio.h>
#include <stdarg.h>
. . .
/*
 * error should be called like
 * error(function_name, format, arg1, ...);
 */
void error(char *function_name, char *format, ...)
{
    va_list ap;
    va_start(ap, format);
    /* print out name of function causing error */
    (void) fprintf(stderr, "ERR in %s: ", function_name);
    /* print out remainder of message */
    (void) vfprintf(stderr, format, ap);
    va_end(ap);
    (void) abort();
}
See attributes(5) for descriptions of the following attributes:
All of these functions can be used safely in multithreaded applications, as long as setlocale(3C) is not being called to change the locale.
See standards(5) for the standards conformance of vprintf(), vfprintf(), vsprintf(), and vsnprintf(). The vasprintf() function is modeled on the one that appears in the FreeBSD, NetBSD, and GNU C libraries.
printf(3C), stdarg(3EXT), attributes(5), standards(5)
The vsnprintf() return value when n = 0 was changed in the Solaris 10 release. The change was based on the SUSv3 specification. The previous behavior was based on the initial SUSv2 specification, where vsnprintf() when n = 0 returns an unspecified value less than 1.
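The SUSv3 behavior described above is useful in practice: calling vsnprintf() with n = 0 (and a null buffer) returns the number of bytes the formatted result would occupy, which lets you size a buffer exactly. A small sketch (format_alloc is a hypothetical helper, not part of the C library):

```c
#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>

/* Format into a freshly allocated buffer, using the SUSv3
 * vsnprintf(NULL, 0, ...) return value to compute the size. */
char *format_alloc(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    int needed = vsnprintf(NULL, 0, fmt, ap); /* n = 0: length needed */
    va_end(ap);
    if (needed < 0)
        return NULL;
    char *buf = malloc((size_t)needed + 1);
    if (buf == NULL)
        return NULL;
    va_start(ap, fmt);  /* re-start the list for the real formatting pass */
    vsnprintf(buf, (size_t)needed + 1, fmt, ap);
    va_end(ap);
    return buf;
}
```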
Source: http://docs.oracle.com/cd/E36784_01/html/E36874/vasprintf-3c.html
Draw circle, rectangle, line etc with Python, Pillow
ImageDraw module of the Python image processing library Pillow (PIL) provides a number of methods for drawing figures such as circle, square, and straight line.
Please refer to the following post for the installation and basic usage of Pillow (PIL).
Flow of drawing figures
Create Draw Object
Prepare an Image object of a background image (an image to draw figures on) and use it to create a Draw object. Don't forget to import Image and ImageDraw.
from PIL import Image, ImageDraw

im = Image.new('RGB', (500, 300), (128, 128, 128))
draw = ImageDraw.Draw(im)
Here, create a solid-color image with Image.new(). The mode, size, and fill color are specified as parameters.
Draw a shape with the drawing method
Call the drawing method on the Draw object to draw a figure. Draw an ellipse, a rectangle, and a straight line as an example. The parameters will be described later.
draw.ellipse((100, 100, 150, 200), fill=(255, 0, 0), outline=(0, 0, 0))
draw.rectangle((200, 100, 300, 200), fill=(0, 192, 192), outline=(255, 255, 255))
draw.line((350, 200, 450, 100), fill=(255, 255, 0), width=10)
im.save('data/dst/pillow_imagedraw.jpg', quality=95)
Drawing method
Common parameters
Although the details differ depending on the method, the following parameters are common.
xy
Set the rectangular area in which to draw the figure.
Specify it in one of the following formats:
((Upper left x coordinate, upper left y coordinate), (lower right x coordinate, lower right y coordinate))
(Upper left x coordinate, upper left y coordinate, lower right x coordinate, lower right y coordinate)
In line(), polygon(), and point(), multiple coordinates are specified instead of two points representing a rectangular area.
(x1, y1, x2, y2, x3, y3...)
((x1, y1), (x2, y2), (x3, y3)...)
line() draws a straight line connecting each point, polygon() draws a polygon in which each point is connected, and point() draws a 1-pixel point at each point.
fill
Set the color to fill the shape.
The specification format differs depending on the mode of the image (the Image object).
RGB: Set each color value (0-255) in the form (R, G, B)
L (grayscale): Set a value (0-255) as an integer
The default is None (no fill).
outline
Set the border color of the figure.
The specification format of the color is the same as for fill above. The default is None (no border).
As of version 4.4.0, there is no option to set the line width (line thickness) for any method other than line().
Method example
See the official document for details.
Ellipse, rectangle
- Ellipse (circle): ellipse(xy, fill, outline)
- Rectangle (square): rectangle(xy, fill, outline)
ellipse() draws an ellipse inscribed in the rectangular area specified by the argument xy. Specifying a square results in a true circle.
The output results are as shown in the example above.
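Since ellipse() takes a bounding box rather than a center and a radius, a small helper makes true circles more convenient to draw. A sketch (draw_circle is a hypothetical helper, not part of Pillow):

```python
from PIL import Image, ImageDraw

def draw_circle(draw, center, radius, **kwargs):
    # Convert center/radius into the (left, top, right, bottom)
    # bounding box that ellipse() expects.
    x, y = center
    draw.ellipse((x - radius, y - radius, x + radius, y + radius), **kwargs)

im = Image.new('RGB', (200, 200), (128, 128, 128))
draw = ImageDraw.Draw(im)
draw_circle(draw, (100, 100), 50, fill=(255, 0, 0), outline=(0, 0, 0))
```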
Line, polygon, point
- Line: line(xy, fill, width)
xy
- Set multiple coordinates of two or more points as ((x1, y1), (x2, y2), (x3, y3)...).
- Lines connecting each point are drawn.
width: Line width (line thickness)
- Note that if you make the line thicker with width, specifying 3 or more points with xy will make the connections look unattractive.
- Polygon: polygon(xy, fill, outline)
xy
- Set multiple coordinates of three or more points as ((x1, y1), (x2, y2), (x3, y3)...).
- A polygon in which each point is connected is drawn.
- Point: point(xy, fill)
xy
- Set multiple coordinates of one or more points as ((x1, y1), (x2, y2), (x3, y3)...).
- A 1-pixel point is drawn at each point.
Examples of lines (line()), a polygon (polygon()), and points (point()) are as follows. Since each point is 1 pixel, the points are hard to see, but they are drawn on the right side.
im = Image.new('RGB', (500, 250), (128, 128, 128))
draw = ImageDraw.Draw(im)
draw.line(((30, 200), (130, 100), (80, 50)), fill=(255, 255, 0))
draw.line(((80, 200), (180, 100), (130, 50)), fill=(255, 255, 0), width=10)
draw.polygon(((200, 200), (300, 100), (250, 50)), fill=(255, 255, 0), outline=(0, 0, 0))
draw.point(((350, 200), (450, 100), (400, 50)), fill=(255, 255, 0))
Arc, chord, pie
An arc, a chord (bow), and a pie slice inscribed in the rectangular area specified by the argument xy are drawn.
- Arc: arc(xy, start, end, fill)
start, end
- Set the angles of the arc in degrees.
- 0 degrees is at the 3 o'clock position, and angles increase clockwise.
- Chord (bow): chord(xy, start, end, fill, outline)
- The start and end points of the arc are connected by a straight line.
- Pie: pieslice(xy, start, end, fill, outline)
- The start and end points of the arc are each connected by a straight line to the center of the circle.
Examples of an arc (arc()), a chord (chord()), and a pie slice (pieslice()) are as follows.
im = Image.new('RGB', (600, 250), (128, 128, 128))
draw = ImageDraw.Draw(im)
draw.arc((25, 50, 175, 200), start=30, end=270, fill=(255, 255, 0))
draw.chord((225, 50, 375, 200), start=30, end=270, fill=(255, 255, 0), outline=(0, 0, 0))
draw.pieslice((425, 50, 575, 200), start=30, end=270, fill=(255, 255, 0), outline=(0, 0, 0))
Draw on image
In the previous examples, figures are drawn on the solid image generated by Image.new(). If an existing image file is read with Image.open(), figures can be drawn on it.
im = Image.open('data/src/lena.jpg')
draw = ImageDraw.Draw(im)
draw.pieslice((15, 50, 140, 175), start=30, end=330, fill=(255, 255, 0))
Source: https://note.nkmk.me/en/python-pillow-imagedraw/
Regex search for \n and hit replace with the replace box empty.
Yes.
Nope. That's not it. I think that's what Ctrl+J is doing anyway, b/c on a 100k JS file, ~3500 lines, both take the same amount of time (over 10 seconds) and peg ST3's CPU at 25% for me (Win7, 3 GHz, 6 GB). It must be re-writing the whole string every time, after every line.
Nope, pretty sure you're gonna have to make yourself a "StringBuilder". That's the fastest way I know of to do it. For instance, the following simple demo code reads in the 100k file, splits it on line separators, re-joins it with empty strings, and writes it out. It runs in about .5 seconds. Running it as a plugin would be even faster, b/c you'd have no device I/O time; you'd just be using the buffer contents. Here's the code (Python 3.3.3):
import os
os.chdir('C:/users/dave')
f = open('C:/NodeJS/Apps/xxx_ServerX/build/xxServer/xxServer.js', 'r')
fil = f.read()
f.close()
#As a plugin, just get the active window buffer contents into a string variable...
filSplit = fil.splitlines() #<--- These 2 lines are what you need.
filJoined = ''.join(filSplit) #<--- ( '' ) is 2 single-quotes, not a double-quote
#then write it back to the buffer
f = open('jointest.txt', 'w')
f.write(filJoined)
f.close()
print("Done.")
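The demo above rebuilds the file in Python; on a Unix-like system, the same join can also be done from a shell for a one-off job (file names here are placeholders):

```shell
# Create a sample multi-line file, then join its lines by
# deleting every newline character.
printf 'line1\nline2\nline3\n' > input.js
tr -d '\n' < input.js > output.js
```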
I did a package search for something that would do this and did not find anything, but I did find this: sublime.wbond.net/packages/StringUtilities. If you're not up on writing plugins (and I'm not, at the moment; it's been too long), then you might be able to just install that utility and then kludge the 2-line "StringBuilder" code into it (by copying one of the other functions in the utility), and have yourself a quick out. (Unless anyone else knows of one.)
All that being said, you do know there are components/libraries like combo-izers and minifiers that already do this kind of thing for you, right? For instance, I use "shifter" to minify JS files... sort of thing.
Hope this helps.
Dave
Source: https://forum.sublimetext.com/t/how-to-join-lines-in-huge-file-very-fast/12086
Uploading Files to Amazon S3 with a Rails API and Javascript Frontend
This guide will walk you through a method to integrate S3 hosting with Rails-as-an-API. I will also talk about how to integrate with the frontend. Note while some of the setup is focused on Heroku, this is applicable for any Rails API backend. There are many short guides out there, but this is intended to bring everything together in a clear manner. I put troubleshooting tips at the end, for some of the errors I ran into.
For this guide, I had a Rails API app in one working directory, and a React app in a different directory. I will assume you already know the basics of making requests between your frontend and backend, and assume that you know how to run them locally. This guide is quite long, and may take you a few hours to follow along with. If you would prefer not to use Medium, the original post is on my blog.
Background
We will be uploading the file straight from the frontend. One advantage of this is that it saves us on large requests. If we uploaded to the backend, then had the backend send it to S3, that will be two instances of a potentially large request. Another advantage is because of Heroku’s setup: Heroku has an “ephemeral filesystem.” Your files may remain on the system briefly, but they will always disappear on a system cycle. You can try to upload files to Heroku then immediately upload them to S3. However, if the filesystem cycles in that time, you will upload an incomplete file. This is less relevant for smaller files, but we will play it safe for the purposes of this guide.
Our backend will serve two roles: it will save metadata about the file, and handle all of the authentication steps that S3 requires. It will never touch the actual files.
The flow will look like this:
- The frontend sends a request to the Rails server for an authorized url to upload to.
- The server (using Active Storage) creates an authorized url for S3, then passes that back to the frontend.
- The frontend uploads the file to S3 using the authorized url.
- The frontend confirms the upload, and makes a request to the backend to create an object that tracks the needed metadata.
Steps 1 and 2 are in diagram 2.1. Steps 3 and 4 are diagrams 2.2 and 2.3, respectively.
Setting up S3
First, we will set up the S3 resources we want. Create two S3 buckets, prod and dev. You can leave everything as the default, but take note of the bucket region. You will need it later.
Next, we will set up Cross-Origin Resource Sharing (CORS). This will allow you to make POST & PUT requests to your bucket. Go into each bucket, then Permissions -> CORS Configuration. For now, we will just use a default config that allows everything. We will restrict it later.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Next, we will create some security credentials to allow our backend to do fancy things with our bucket. Click the dropdown with your account name, and select My Security Credentials. This will take you to AWS IAM.
Once in the Identity and Access Management console, you should go to the access keys section, and create a new access key.
Here, it will create a key for you. It will never show you the secret again, so make sure you save these values in a file on your computer.
Rails API Backend
Again, I assume you know how to create a basic Rails API. I will be attaching my file to a user model, but you can attach it to whatever you want.
Environment Variables
Add two gems to your Gemfile: gem 'aws-sdk-s3' and gem 'dotenv-rails', then bundle install. The first gem is the S3 software development kit. The second gem allows Rails to use a .env file.
The access key and region (from AWS) are needed within Rails. While developing locally, we will pass these values using a .env file. On Heroku, we can set the values using heroku config, which we will explore at the end of this guide. We will not be using a Procfile. Create the .env file at the root of your directory, and be sure to add it to your gitignore. You don't want your AWS account secrets ending up on GitHub. Your .env file should include:
AWS_ACCESS_KEY_ID=YOURACCESSKEY
AWS_SECRET_ACCESS_KEY=sEcReTkEyInSpoNGeBoBCaSe
S3_BUCKET=your-app-dev
AWS_REGION=your-region-1
Storage Setup
Run rails active_storage:install. Active Storage is a library that helps with uploads to various cloud storage services. Running this command will create a migration for a table that will handle the files' metadata. Make sure to rails db:migrate.
Next, we will modify the files that keep track of the Active Storage environment. There should be a config/storage.yml file. We will add an Amazon S3 storage option. Its values come from our .env file.
amazon:
service: S3
access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
region: <%= ENV['AWS_REGION'] %>
bucket: <%= ENV['S3_BUCKET'] %>
Next, go to config/environments, and update your production.rb and development.rb. For both of these, change the Active Storage service to your newly added one:
config.active_storage.service = :amazon
Finally, we need an initializer for the AWS S3 service, to set it up with the access key. Create config/initializers/aws.rb, and insert the following code:
require 'aws-sdk-s3'
Aws.config.update({
region: ENV['AWS_REGION'],
credentials: Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']),
})
S3_BUCKET = Aws::S3::Resource.new.bucket(ENV['S3_BUCKET'])
We are now ready to store files. Next we will talk about the Rails model and controller setup.
Model
For my app, I am uploading user resumes, for the user model. You may be uploading images or other files. Feel free to change the variable names to whatever you like.
In my user.rb model file, we need to attach the file to the model. We will also create a helper method that returns the file's public URL, which will become relevant later.
class User < ApplicationRecord
has_one_attached :resume
def resume_url
if resume.attached?
resume.blob.service_url
end
end
end
Make sure that the model does not have a corresponding column in its table. There should be no resume column in my user's schema.
Direct Upload Controller
Next we will create a controller to handle the authentication with S3 through Active Storage. This controller will expect a POST request, and will return an object that includes a signed URL for the frontend to PUT to. Run rails g controller direct_upload to create this file. Additionally, add a route to routes.rb:
post '/presigned_url', to: 'direct_upload#create'
The contents of the direct_upload_controller.rb file can be found here.
The actual magic is handled by the ActiveStorage::Blob.create_before_direct_upload! function. Everything else just formats the input or output a little bit. Take a look at blob_params; our frontend will be responsible for determining those.
Testing
At this point, it might be useful to verify that the endpoint is working. You can test this functionality with something like curl or Postman. I used Postman.
Run your local server with rails s, then you can test your direct_upload#create endpoint by sending a POST request. There are a few things you will need:
- On a Unix machine, you can get the size of a file using ls -l.
- If you have a different type of file, make sure to change the content_type value.
- S3 also expects a "checksum", so that it can verify that it received an uncorrupted file. This should be the MD5 hash of the file, encoded in base64. You can get this by running openssl md5 -binary filename | base64.
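If you'd rather compute the same checksum from Ruby instead of openssl, a small sketch (file_checksum is a hypothetical helper, not part of the guide's code):

```ruby
require 'digest'

# Base64-encoded MD5 digest of a file's bytes, matching what S3
# expects in the checksum field / Content-MD5 header.
def file_checksum(path)
  Digest::MD5.base64digest(File.binread(path))
end
```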
Your POST request to /presigned_url might look like this:
{
"file": {
"filename": "test_upload",
"byte_size": 67969,
"checksum": "VtVrTvbyW7L2DOsRBsh0UQ==",
"content_type": "application/pdf",
"metadata": {
"message": "active_storage_test"
}
}
}
The response should have a pre-signed URL and an id:
{
"direct_upload": {
"url": "",
"headers": {
"Content-Type": "application/pdf",
"Content-MD5": "VtVrTvbyW7L2DOsRBsh0UQ=="
}
},
"blob_signed_id": "eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBSQT09IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--8a8b5467554825da176aa8bca80cc46c75459131"
}
The response direct_upload.url should have several parameters attached to it. Don't worry too much about them; if something were wrong, you would just get an error.
Your direct upload now has an expiration of 10 minutes. If this looks correct, we can use the direct_upload object to make a PUT request to S3. Use the same URL, and make sure you include the headers. The body of the request will be the file you are looking to upload.
You should get a simple empty response with a code of 200. If you go to the S3 bucket in the AWS console, you should see the folder and the file. Note that you can't actually view the file (you can only view its metadata). If you try to click the "Object URL", it will tell you Access Denied. This is okay! We don't have permission to read the file. Earlier, in my user.rb model, I put a helper function that uses Active Storage to get a public URL. We will take a look at that in a bit.
User Controller
If you recall our flow:
- The frontend sends a request to the server for an authorized url to upload to.
- The server (using Active Storage) creates an authorized url for S3, then passes that back to the frontend. Done.
- The frontend uploads the file to S3 using the authorized url.
- The frontend confirms the upload, and makes a request to the backend to create an object that tracks the needed metadata.
The backend still needs one bit of functionality. It needs to be able to create a new record using the uploaded file. For example, I am using resume files and attaching them to users. For a new user creation, it expects the usual user fields (such as last_name), along with the signed_blob_id we saw earlier. Active Storage only needs this ID to connect the file to your model instance. Here is what my users_controller#create looks like, and I also made a gist:
def create
  resume = params[:pdf]
  params = user_params.except(:pdf)
  user = User.create!(params)
  user.resume.attach(resume) if resume.present? && !!user
  render json: user.as_json(root: false, methods: :resume_url).except('updated_at')
end

private

def user_params
  params.permit(:email, :first_name, :last_name, :pdf)
end
The biggest new thing is the resume.attach call. Also note that we are returning the JSON of the user, and including our created resume_url method. This is what allows us to view the resume.
Your params may look different if your model is different. We can again test this with Postman or curl. Here is a JSON POST request that I would make to the /users endpoint:
{
"first_name": "Test",
"last_name": "er",
"pdf": "eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBLdz09IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--3fe2ec7e27bb9b5678dd9f4c7786032897d9511b"
}
This is much like a normal user creation, except we call attach on the file ID that is passed with the request. The ID is from the response of our first request, the blob_signed_id field. You should get a response that represents the user, but has a resume_url field. You can follow this public URL to see your uploaded file! This URL comes from the blob.service_url call we included in the user.rb model.
If this is all working, your backend is probably all set.
The Javascript Frontend
Remember our overall request flow. If we only consider the requests that the frontend performs, it will look like this:
- Make POST request for signed url.
- Make PUT request to S3 to upload the file.
- Make a POST to /users to create the new user.
We have already tested all of this using curl/Postman. Now it just needs to be implemented on the frontend. I am also going to assume you know how to get a file into Javascript from a computer. <input> is the simplest method, but there are plenty of guides out there.
The only difficult part of this is calculating the checksum of the file. This is a little weird to follow, and I had to guess-and-check my way through a bit of it. To start, we will npm install crypto-js. CryptoJS is a cryptographic library for Javascript.
Then, we will read the file with FileReader before hashing it, using the following code. Here is a link to the corresponding gist.
import CryptoJS from 'crypto-js'
// Note that for larger files, you may want to hash them incrementally.
// Taken from
const md5FromFile = (file) => {
// FileReader is event driven, does not return promise
// Wrap with promise api so we can call w/ async await
//
return new Promise((resolve, reject) => {
const reader = new FileReader()
reader.onload = (fileEvent) => {
let binary = CryptoJS.lib.WordArray.create(fileEvent.target.result)
const md5 = CryptoJS.MD5(binary)
resolve(md5)
}
reader.onerror = () => {
reject('oops, something went wrong with the file reader.')
}
// For some reason, readAsBinaryString(file) does not work correctly,
// so we will handle it as a word array
reader.readAsArrayBuffer(file)
})
}
export const fileChecksum = async(file) => {
const md5 = await md5FromFile(file)
const checksum = md5.toString(CryptoJS.enc.Base64)
return checksum
}
At the end of this, we will have an MD5 hash, encoded in base64 (just like we did above in the terminal). We are almost done! The only things we need now are the actual requests. I will paste the code, but here is a link to a gist of the JS request code.
import { fileChecksum } from 'utils/checksum'
const createPresignedUrl = async(file, byte_size, checksum) => {
let options = {
method: 'POST',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json',
},
body: JSON.stringify({
file: {
filename: file.name,
byte_size: byte_size,
checksum: checksum,
content_type: 'application/pdf',
metadata: {
'message': 'resume for parsing'
}
}
})
}
let res = await fetch(PRESIGNED_URL_API_ENDPOINT, options)
if (res.status !== 200) return res
return await res.json()
}
export const createUser = async(userInfo) => {
const {pdf, email, first_name, last_name} = userInfo
// To upload pdf file to S3, we need to do three steps:
// 1) request a pre-signed PUT request (for S3) from the backend
const checksum = await fileChecksum(pdf)
const presignedFileParams = await createPresignedUrl(pdf, pdf.size, checksum)
// 2) send file to said PUT request (to S3)
const s3PutOptions = {
method: 'PUT',
headers: presignedFileParams.direct_upload.headers,
body: pdf,
}
let awsRes = await fetch(presignedFileParams.direct_upload.url, s3PutOptions)
if (awsRes.status !== 200) return awsRes
// 3) confirm & create user with backend
let usersPostOptions = {
method: 'POST',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify({
first_name: first_name,
last_name: last_name,
pdf: presignedFileParams.blob_signed_id,
})
}
let res = await fetch(USERS_API_ENDPOINT, usersPostOptions)
if (res.status !== 200) return res
return await res.json()
}
Note that you need to provide the two global variables:
USERS_API_ENDPOINT and
PRESIGNED_URL_API_ENDPOINT. Also note that the
content_type is hardcoded to 'application/pdf' in createPresignedUrl; change it to match the file types your application accepts.
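If you want to sanity-check the checksum your Javascript produces, the same base64-encoded MD5 can be computed outside the browser, for example with a few lines of Python (a sketch for verification only; it is not part of the frontend code):

```python
import base64
import hashlib

def file_checksum(path):
    # Base64-encoded MD5 digest -- the same value S3 compares against
    # the checksum that was sent when the upload was presigned.
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).digest()
    return base64.b64encode(digest).decode("ascii")
```

Run it on the same file you select in the browser and compare the two strings; they should match exactly.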
You now have the required Javascript to make your application work. Just attach the
createUser method to your form's submit handler. When it runs, you should see three requests: one to the
presigned_url endpoint, one to S3, and one to your API's user create endpoint. The final one will also return a public URL for the file, so you can view it for a limited time.
Final Steps and Cleanup
S3 Buckets
Make sure your prod app is using a different bucket from your development. This is so you can restrict its CORS policy. It should only accept PUT requests from one source: your production frontend. For example, here is my production CORS policy:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="">
<CORSRule>
<AllowedOrigin></AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>GET</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
You don’t need to enable CORS for the communication between Rails and S3, because CORS only applies to requests made by the browser; Active Storage talks to S3 from the server side.
Heroku Production Settings
You may have to update your Heroku prod environment. After you push your code, don’t forget to
heroku run rails db:migrate. You will also need to make sure your environment variables are correct. You can view them with
heroku config. You can set them by going to the app's settings in the Heroku dashboard. You can also set them with
heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy S3_BUCKET=bucket-for-app AWS_REGION=my-region-1.
Public Viewing of Files
The public URL you receive to view the files is temporary. If you want your files to be permanently publicly viewable, you will need to take a few more steps. That is outside the realm of this guide.
Some Troubleshooting
Here are some errors I ran into while building this guide. It is not comprehensive, but may help you.
Problems with server initialization: make sure the names in your
.env files match the names where you access them.
Error: missing host to link to for the first request. In my case, this meant I had not put
:amazon as my Active Storage source in
development.rb.
StackLevelTooDeep for last request. I had this issue when calling
users_controller#create because I had not removed the "resume" field from my schema. Make sure your database schema does not include the file. That should only be referenced in the model with
has_one_attached.
AWS requests fail after changing CORS: make sure there are no trailing slashes in your URL within the CORS XML.
Debugging your checksum: this is a hard one. If you are getting an error from S3 saying that the computed checksum is not what they expected, this means there is something wrong with your calculation, and therefore something wrong with the Javascript you received from here. If you double check the code you copied from me and can’t find a difference, you may have to figure this out on your own. For Javascript, you can check the MD5 value by calling
.toString() on it with no arguments. On the command line, you can drop the
--binary flag.
Sources and References
Much of this was taken from Arely Viana’s blog post for Applaudo Studios. I linked the code together, and figured out how the frontend would look. A huge shout-out to them!
Here are some other resources I found useful:
- Heroku’s guide for S3 with Rails — this is not for Rails as an API, but it does talk about environment setup
- The code for Arely’s guide — also has some example JSON requests
- Rails Active Storage Overview
- Uploading to S3 with JS — this also uses AWS Lambda, with no backend
Originally published at on September 14, 2020.
https://eking-30347.medium.com/uploading-files-to-amazon-s3-with-a-rails-api-and-javascript-frontend-672a7f90ce05
TL;DR
The
cron utility is a daemon that wakes up every minute to check if it needs to run a job.
The Story of Cron
I found myself wondering recently how
cron works (
cron is a utility for scheduling tasks on Linux. For a good intro on how to use it, see this blog post). Is
cron doing something fancy to schedule these tasks? Or is it just constantly checking if it has something to do? It turns out that over its history, it has done both.
The Wikipedia page on
cron gives a thorough history which I'll summarize. It was originally written by Ken Thompson in the late 70's, and it did a simple check every minute. When mainframes with 100s of users tried using
cron, this check for every user's
cron jobs every minute required too many resources. Keith Williamson wrote a new version (SysV cron) in 1979 which borrowed ideas from a paper on "discrete event simulators" that had a queue. It put every user's jobs on the queue, and then slept until it was time to run the next job. It also woke up every 30 minutes to refresh the queue. This version was able to handle machines with many users with an acceptable amount of resources.
The modern implementation of
cron was written by Paul Vixie in 1987 and went back to checking every minute presumably because resources were no longer as scarce. From Debian's cron man page:
cron then wakes up every minute, examining all stored crontabs, checking each command to see if it should be run in the current minute.
As to what differentiates Vixie Cron from SysV cron, we get a hint from the Red-Hat package manager docs:
Vixie cron adds better security and more powerful configuration options to the standard version of cron.
We can see the 1 minute sleep loop in the vixie-cron source:
while (TRUE) {
# if DEBUGGING
	if (!(DebugFlags & DTEST))
# endif /*DEBUGGING*/
		cron_sleep();

	load_database(&database);

	/* do this iteration */
	cron_tick(&database);

	/* sleep 1 minute */
	TargetTime += 60;
}
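The bookkeeping behind that loop is just modular arithmetic on Unix time: round the current time up to the next minute boundary and sleep the difference. A minimal sketch (the function name is mine, not from the cron source):

```c
#include <assert.h>

/* Seconds to sleep so the next wakeup lands exactly on a
   minute boundary (result is in the range 1..60). */
long seconds_until_next_minute(long now) {
    return 60 - (now % 60);
}
```

If the daemon wakes at second 59, it sleeps 1 second; if it wakes exactly on a boundary, it sleeps a full 60.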
Conclusion
I thought there may be a complicated system underlying
cron. However, as it turns out with most things when you get to the bottom of them, it is pretty straightforward.
https://pcarleton.com/2016/09/13/cron/
When you add the messagebox, the first instance is still running and holding the semaphore (because the messagebox is displayed), so when the second instance tries to create it, it fails as expected.
HANDLE hSem = CreateSemaphore( NULL, 1, 1, "MySemaphore" );
if (hSem != NULL && GetLastError() == ERROR_ALREADY_EXISTS) {
...msg that the semaphore exists...;
}
The reason you should check the value of hSem -- and not just GetLastError() -- is that some other error may occur (such as something related to SECURITY_ATTRIBUTES).
Be sure to see:
How to limit 32-bit applications to one instance in Visual C++
That clever object does it all... just instantiate it as a global in your main module and when it destructs it goes away so that another instance of the program (or any program that uses the same string as the name of the mutex) will then be able to run.
As to the VB dillybob, see:
How To Use SetWaitableTimer With Visual Basic
-- Dan
Mike
If you are accessing an external IE window, you might consider instead using a WebBrowser control. Except in unusual situations, it will give you the same capabilities and it provides feedback on every operation -- such as after you do a Navigate() fn, you will get a call to your OnDocumentComplete() handler.
-=-==-=-=-=-=-=-=-=-=-==-=
In any case, I'm worried about that arbitrary 10-second delay -- Isn't there some way to get notified when the initialization is complete (rather than using the WaitableTimer) ? Won't everything crumble like a house of cards if the initialization takes 11 seconds or 20 seconds?
I also think that you may be complicating things unnecessarily. In general, these things are pretty simple -- You need a flag to tell each process (or thread) to go ahead or wait. If you don't want the process to wait, just don't set the flag :-)
-- Dan
Put another way, the call to GetLastError() should be placed immediately after the API call you are checking -- and then only if the API call return code indicates that the last error value will be valid.
-MAHESH
Two things can cause this.
1) As you guys been discussing premature close of semaphore.
To test this make a DuplicateHandle() to make sure the semaphore lives.
Something like this..
HANDLE hSemaphore = CreateSemaphore( NULL, 1, 1, "MySemaphore" );
if ( GetLastError() == ERROR_ALREADY_EXISTS )
{
// Display a message to the user and exit
}
else
{
HANDLE duplicateHandle = NULL; // local variable just for testing... will cause a handle leak.
DuplicateHandle(GetCurrentProcess(), hSemaphore, GetCurrentProcess(), &duplicateHandle, 0, FALSE, DUPLICATE_SAME_ACCESS);
// Run the program
}
2) Namespace issue - maybe this is not your case.. still..
If you are running terminal server, Windows does care about the namespace.
All objects created from services will be in the global namespace by default.
This means you can have two kernel objects with the same name, one under the global namespace and one under the local namespace.
Thanks to you for the suggestions. I was finally able to resolve the problem on my own. I went back to the module that invoked my application that was having the semaphore problem. As I suspected, it would invoke my module multiple times based on its own input drivers. It turns out the invoking app was using a WinExec API call to start my app, and this caused the problem. From the help documentation I found that WinExec does not return until the invoked app issues a GetMessage or until the invoked app terminates. As it happens, my invoked app does not issue any GetMessage since it does not need to; there are no dialog windows. So in fact when my app was invoked a second or more times, each would only be run once the previous one had completed. As a result it was correct not to get the ERROR_ALREADY_EXISTS return when creating the semaphore. I changed the calling app to use CreateProcess instead of WinExec and that cleared up the problem. Now I do get the ERROR_ALREADY_EXISTS as I expected I should, i.e. my code was correct all along. Thanks again.
https://www.experts-exchange.com/questions/21828865/CreateSemaphore-does-not-return-ERROR-ALREADY-EXISTS.html
Reading vector from launch file does not work in Indigo
Hi to all,
I'm facing an issue while reading a vector parameter from launch file in ROS Indigo. I set the parameter in my launch file in the private namespace of my node as;
<param name="x0" value="[0.0, 0.0]"/>
And I'm trying to read it as described here or in this previous answer:
ros::NodeHandle nh("~");
std::vector<double> x0;
nh.getParam("x0", x0);
but if I try to print the size of the vector it tells me 0. From the roslaunch param documentation it seems that parameters which are not scalar numbers or literal booleans are interpreted as strings.
Why is there such inconsistency between the types that I can read from the parameter server and the parameters that I can set from a launch file? How can I solve this problem?
Thanks to all in advance.
So just to clarify: does this work if you use:
Yes, in this way it works. Thanks, I didn't notice the comment in the previous answer. Still, is there any reason for this inconsistency?
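For completeness, the usual workaround is the <rosparam> tag, which parses its value as YAML instead of a plain string (a sketch; adjust the parameter name and values to your node):

```xml
<rosparam param="x0">[0.0, 0.0]</rosparam>
```

With this, getParam("x0", x0) receives an actual list of doubles rather than the string "[0.0, 0.0]".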
https://answers.ros.org/question/291984/reading-vector-from-launch-file-does-not-work-in-indigo/
CodePlex: Project Hosting for Open Source Software
I'm a new white user and I'm using visual studio test edition. should I reference only white.core.dll in my test project or should I referene both core.dll and White.NUnit.dll? I think that I really need a simple starter walk through. I would really appreciate
your help.
You reference white.core.dll, White.Nunit.dll and nunit.framework.dll
Throndorin
Thank you very much. Another question:
When I try to use CustomUIItem, my compiler doesn't recognize [ControlTypeMapping(CustomUIItemType.Pane)], what else should I reference? where can I find some white test sample code?
[ControlTypeMapping(CustomUIItemType.Pane)]
public class MyDateUIItem : CustomUIItem
{
// Implement these two constructors. The order of parameters should be same.
public MyDateUIItem(AutomationElement automationElement, ActionListener actionListener)
: base(automationElement, actionListener)
{
}
//Empty constructor is mandatory with protected or public access modifier.
protected MyDateUIItem() { }
//Sample method
public virtual void EnterDate(DateTime dateTime)
{
//Base class, i.e. CustomUIItem has property called Container. Use this find the items within this.
//Can also use SearchCriteria for find items
Container.Get<TextBox>("day").Text = dateTime.Day.ToString();
Container.Get<TextBox>("month").Text = dateTime.Month.ToString();
Container.Get<TextBox>("year").Text = dateTime.Year.ToString();
}
}
mmh, I had no problems; it works as expected. You need "using Core.UIItems.Custom;". As you can see, this element is part of the White.Core.dll. This works for me:
using Core.UIItems.Custom;
namespace ManufacturerToolTest.Check
{
[ControlTypeMapping(CustomUIItemType.Pane)]
public class MyDateUIItem : CustomUIItem
{
}
}
which version of white do you use?
Hi, I am also a new white user. I need to know which project type I should start with: a console application or a windows application.
If we choose a windows application, it creates a conflict for the Application class, which is present in Core.Application as well as in System.Windows.Forms.
Please suggest how we should resolve this...
Thanks...
I do need include Core.UIItems.Custom, thanks a lot.
Use the fully qualified names:
private Core.Application whiteApplication;
private System.Windows.Forms.Application windowsApplication;
instead of Application, so you can use both.
Hi, Throndorin, I'm using visual studio test edition which already has a test platform, should I still include White.Nunit.dll and nunit.framework.dll?
@timjiang
I have no experience using this version of Visual Studio, but my first guess is yes, you should. The reason:
I think White.Nunit.dll and nunit.framework.dll are matched to each other, so they should be used together.
I don't think that the test edition comes with white.
And you can use newer versions from white as you want.
Without better ideas I would say the VS Version comes with own solutions for testing.
Maybe someone from the white team has more information.
https://white.codeplex.com/discussions/60964
Some unconditionally reserved words are reserved only in strict code: class, enum, export, extends, import, super I am submitting this classified as "trivial" because these are properly rejected in strict code, which is all I really care about. If someone does care about this case, please raise the severity. If this is closed as a wont-fix, could we also have a tag like DeliberateSpecViolation (as v8 now has) so we can more easily keep track of such cases?
See bug 497869. We can try to reserve harder next time (or the time after that, with a quarterly release schedule) -- it will take more testing and possibly even cross-browser coordination. /be
Chrome and Opera are the only other engines that make these not keywords. IE never unreserved them, and I think WebKit didn't either (or at least in my testing back when they were reserved outside strict mode). I think this mostly just needs some lag time for Mozilla-centric sites (like the ones mentioned in that bug) that seemingly didn't care about cross-browser behavior to get fixed.
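For readers following along, the distinction under discussion is easy to check in a modern engine. A sketch using Node (the helper name is mine; indirect eval keeps the probe in sloppy global code):

```javascript
// Check whether a source string parses (indirect eval runs in global, sloppy scope)
function parses(src) {
  try {
    (0, eval)(src);
    return true;
  } catch (e) {
    if (e instanceof SyntaxError) return false;
    throw e;
  }
}

// 'enum' is a future reserved word in both sloppy and strict code,
// while 'package' is reserved only in strict code.
const enumSloppy = parses("var enum = 1;");                     // false
const packageSloppy = parses("var package = 1;");               // true
const packageStrict = parses("'use strict'; var package = 1;"); // false
```

This mirrors the fixed behavior: class, enum, export, extends, import, and super are rejected unconditionally, while the strict-only set is rejected only under "use strict".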
Has anyone filed a bug on V8? /be
Created attachment 516127 [details] [diff] [review]
Patch, reenable the test

Interestingly it seems the current code had a bug with just the token-type change. I didn't notice it when we initially changed this because I'd written a not-complete test at the time. But since then I landed bug 629187, which beefed up the test quite a bit, adding the new tests that fail without the jsparse.cpp changes in here. Hurray for more-complete tests finding failures! It'd be good to land this early this next cycle so anyone using these names as keywords can adapt, therefore doing this now rather than any later. If we can land when TM/m-c reopen to post-4.0 work that gets us the maximum nightly coverage possible, which can only be a good thing.
Created attachment 516384 [details] [diff] [review] Oops, missed some changes
(In reply to comment #6)
> Interestingly it seems the current code had a bug with just the token-type change

Do you mean the parser enabling keyword-as-name for function names (identifiers after 'function' keywords and the 'function::' namespace)? That was intentional:

revision 3.227
date: 2006/09/07 11:28:30; author: igor.bukanov%gmail.com; state: Exp; lines: +17 -3
Bug 343675: allow to use keywords as function names. r=brendan

(CVS log entry.) Undoing this extension is a separate proposal from fixing this bug, unless I am missing something here that requires removing the extension. We don't have to sort this out here. New bug? I will review a focused patch quickly. /be
Yeah, that was it. I'll see what I can do. I think, with some effort, I can special-case those bits in the test that was detecting this instance of not allowing keywords in this one place. Back in a bit with that change, I hope.
Created attachment 516780 [details] [diff] [review] diff -w without function name bits
Filed bug 638667 for the function-name change.
Comment on attachment 516780 [details] [diff] [review]
diff -w without function name bits

Does this buy us spec conformance wins with sputnik or test262? It's not good for much more, but we may as well get it in and start breaking all the scofflaw JS content out there. /be
Yeah, looks like it's test262 (and based on the test names, Sputnik before it) wins here. Will blog, too, after I land this.
This should be mentioned in the Firefox 5 compatibility docs since it can and will break a number of add-ons. should be updated as well.
Hm... was this in Firefox 5? For some reason, it's flagged in our doc spreadsheet as being for Firefox 7.
Pretty sure this was 5 and changed not at all in 7.
OK, this had been documented already for Firefox 5, but was for some reason on our list for Firefox 7. I've checked the docs, and things are good.
https://bugzilla.mozilla.org/show_bug.cgi?id=637204
Maximise XOR of a given integer with a number from the given range
Reading time: 15 minutes | Coding time: 5 minutes
Given q queries, each of which specifies three integers x, l, r. We have to find an integer from the given range [l, r] inclusive, such that it gives the maximum XOR with x. All values are assumed to be positive. We will show two ways to solve this interesting problem. One is the naive approach, which takes O(R-L) time, and the other is an efficient algorithm that takes only O(log X) time.
Algorithm
To solve this problem we can use the properties of XOR:
Naive Approach
One way to solve this problem is just to take XOR of each element in the range with x and the maximum result will be the answer. But if the range is too big this approach will take a lot of time.
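As a sketch, the naive scan looks like this (the function name is mine):

```cpp
#include <algorithm>
#include <cassert>

// Naive O(r - l) scan: XOR every value in [l, r] with x and keep the best.
int maxXorNaive(int x, int l, int r) {
    int best = 0;
    for (int a = l; a <= r; ++a)
        best = std::max(best, a ^ x);
    return best;
}
```

This is fine for small ranges and is also handy as a reference when checking a faster implementation.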
Efficient Approach
Now, if we notice XOR is maximum when corresponding bits are NOT SAME. This is the idea we are going to use to solve this problem.
We will check each bit of integer x and decide whether to set corresponding bit in our answer or not. So, at every bit we have two choices to make:
- We will want to set the ith bit in our answer if this bit is reset in x. Now, the problem is we can overshoot the given range by setting the ith bit, i.e., our answer would become greater than r. So, we will set this bit if and only if setting it does not make our answer go beyond the given range.
- The other and remaining case is when the ith bit is set in x. So, we will try to reset it in our answer to maximise the XOR. Now, resetting the bit in our answer may make it smaller than the range value l. But there is a way we can reset this bit and still keep our answer in range: we check whether summing up all the remaining lower bits with our answer will make it greater than or equal to l. If yes, we keep this bit reset; otherwise we must set it.
Note: Since we have to Maximise XOR, we will always start from most significant bit (not the sign bit).
Approach is given below:
1. For the ith bit in x, check whether it is set or reset.
2. Depending upon the bit, set or reset the corresponding bit of the answer.
3. Check whether this answer is going beyond the given range or falling short.
4. If yes, flip this bit in the answer.
5. Repeat from step 1 till the last or 0th bit.
Pseudocode
The pseudocode of the given problem is as follows:
1. For i:- 30 to 0:
2. Check the ith bit of x.
3. Try to set this bit in the answer such that it doesn't go beyond the range.
4. Otherwise, try resetting this bit such that the answer either stays in range or will be in range once the rest of the bits are summed up.
Complexity
Time Complexity: O(log X) for integer X
Space Complexity: O(log X) for integer X
Implementation
#include <iostream>
using namespace std;

int main() {
    int q;
    cin >> q;

    bool bits[31];
    while (q--) {
        int x, l, r;
        cin >> x >> l >> r;

        int temp = x;

        // storing bit values in a boolean array
        for (int i = 0; i < 31; i++) {
            if (temp & 1)
                bits[i] = true;
            else
                bits[i] = false;
            temp = temp / 2;
        }

        int ans = 0;
        for (int i = 30; i >= 0; i--) {
            // here (1 << i) is 2^i
            temp = 1 << i;

            // if the ith bit is set but resetting it in the answer
            // would make our answer smaller than l, we must set it
            if (bits[i] && (ans + temp - 1) < l)
                ans += temp;

            // if the ith bit is not set and setting it in the answer
            // will not make our answer go beyond r, set it
            else if (!bits[i] && (ans + temp) <= r)
                ans += temp;
        }

        cout << (ans ^ x) << "\n";
    }
}
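The same greedy loop can be packaged as a standalone function, which makes it easy to compare against the naive scan. This is a sketch; the function name is mine:

```cpp
#include <cassert>

// Greedy bit-by-bit construction of the value in [l, r] that
// maximises (value ^ x); returns the maximum XOR itself.
int maxXorInRange(int x, int l, int r) {
    int ans = 0;
    for (int i = 30; i >= 0; --i) {
        int bit = 1 << i;
        bool setInX = (x & bit) != 0;
        if (setInX && (ans + bit - 1) < l)
            ans += bit;              // forced to set, or we fall below l
        else if (!setInX && (ans + bit) <= r)
            ans += bit;              // safe to set, and it improves the XOR
    }
    return ans ^ x;
}
```

For example, with x = 10 and the range [8, 12], the best choice is 12 (1100 in binary), giving 12 ^ 10 = 6.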
https://iq.opengenus.org/maximise-xor-of-a-given-integer-with-a-number-from-the-given-range/
You can use the Zendesk REST API to make backup copies of all the articles in your knowledge base. The backups can be useful in case you need to check or revert to a previous version of an article.
This tutorial covers the following tasks:
- What you need
- The plan
- Create the Python file
- Create folders for the backups
- Get all the articles in a language
- Paginate through the results
- Write the articles to files
- Create a backup log
- Code complete
- Restoring articles
You can back up a Help Center with only 34 lines of Python code. You can then restore any number of articles with a second, 27-line script.
What you need
You need a text editor and a command-line interface like the command prompt in Windows or the Terminal on the Mac. You'll also need Python 3.3 or later, as well as pip to download and install the requests library. In the command-line examples, the $ represents the prompt. Don't enter it.
The plan
The goal is to back up all the articles in a specified language in your knowledge base. You want to be able to run the script as many times as you need to back up each language in your knowledge base at different times.
Here are the basic tasks the script must carry out to create the backups:
- Download the HTML of the articles from the knowledge base.
- Create an HTML file for each article in a folder on your hard drive.
- Create a backup log for easy reference later.
Backing up the images in the articles is outside the scope of this article. It might be covered in a future tutorial.
Create the Python file
Create a folder named backups where you want to download the backups.
In a text editor, create a file named make_backup.py and save it in your new backups folder.
In the editor, add the following lines to the file.
import requests

credentials = 'your_zendesk_email', 'your_zendesk_password'
zendesk = ''
language = 'some_locale'
You start by importing the requests library, a third-party Python library for making HTTP requests. You should have installed it earlier. See What you need.
The credentials variable specifies your Zendesk Support sign-in email and password. Before running the script, replace the placeholders your_zendesk_email and your_zendesk_password with actual values. Example:
credentials = 'jane_doe@example.com', '3w00tfawn56'
For security reasons, only enter your password when you're ready to run the script. Delete it when you're done.
The zendesk variable identifies your Zendesk Support instance. The language variable specifies the language of the articles you want to back up. Replace the placeholder values with your own. Example:
zendesk = ''
language = 'en-US'
See Language codes for supported languages for valid values for language.
Also, make sure to include 'https://' in your Zendesk Support url.
Create folders for the backups
In this section, you tell the script to automatically create a folder in your backups folder to store the backup. The folder will have the following structure to easily organize multiple backups in multiple languages:
/backups /2015-01-24 /en-US
Import the native os and datetime libraries at the top of the script:
import os
import datetime
Add the following lines after the last line in the script:
date = datetime.date.today()
backup_path = os.path.join(str(date), language)
if not os.path.exists(backup_path):
    os.makedirs(backup_path)
The script gets today's date and uses it along with your language variable to build the new path. When the script runs, the backup_path might be something like 2015-01-24/en-US.
The script then checks to make sure the directory doesn't already exist (in case you ran the script earlier on the same day). If not, it creates the directory.
Your script so far should look like this:

import os
import datetime
import requests

credentials = 'your_zendesk_email', 'your_zendesk_password'
zendesk = ''
language = 'some_locale'

date = datetime.date.today()
backup_path = os.path.join(str(date), language)
if not os.path.exists(backup_path):
    os.makedirs(backup_path)
You can test this code. Make sure to specify a locale for the language variable (the credentials don't matter at this point), navigate to your backups folder with your command line, and run the script from the command line as follows:
$ python3 make_backup.py
A folder is created in the backups folder with the current date and the value of your language variable.
Get all the articles in a language
In this section, you send a request to the Help Center API to get all the articles in the language you specified. You'll use the following endpoint in the Articles API:
GET /api/v2/help_center/{locale}/articles.json
The endpoint is documented in this section of the API docs on developer.zendesk.com.
In the script, create the final endpoint url by adding the following statement after the last line in the script (don't use any line breaks):
endpoint = zendesk + '/api/v2/help_center/{locale}/articles.json'.format(locale=language.lower())
Before you can use the endpoint in a request, you need to prepend your Zendesk Support url to the string and specify a value for the
{locale} placeholder. The statement builds the final url from the Zendesk Support url you specified, the endpoint path in the docs, and the article language you specified. The value of your language variable is inserted (or interpolated) at the
{locale} placeholder in the string.
Because some locales listed in the language codes article have uppercase letters while the API expects lowercase letters, the value of the language variable is converted to lowercase to be on the safe side.
Using the example in this tutorial, the final endpoint url would be as follows:
''
Use the endpoint url to make the HTTP request and save the response from the API.
response = requests.get(endpoint, auth=credentials)
The statement uses the requests object's
get() method with the endpoint variable to make a GET request to the API. The method includes an argument named auth that specifies your basic authentication credentials.
Check the request for errors and exit if any are found:
if response.status_code != 200:
    print('Failed to retrieve articles with error {}'.format(response.status_code))
    exit()

Decode the JSON in the response and assign it to a variable (no indent):

data = response.json()

The Zendesk REST API returns data formatted as JSON. The
json() method from the requests library decodes the data into a Python dictionary. The data dictionary consists of one key named articles. Its value is a list of articles, as indicated by the square brackets. Each item in the list is a dictionary of article properties, as indicated by the curly braces.
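To make the structure concrete, here is roughly what the decoded dictionary looks like (the values are invented for illustration):

```python
# Illustrative shape of the decoded response (values are made up)
data = {
    "articles": [
        {"id": 115001234, "title": "Welcome", "body": "<p>Hello</p>"},
        # ... up to 30 articles per page
    ],
    "next_page": None,
    "count": 1,
}

# Article properties are plain dictionary lookups
first = data["articles"][0]
```

Any property in the API docs for an article can be read the same way, such as first["title"] or first["body"].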
Use your new knowledge of the data structure to check the results so far:
for article in data['articles']:
    print(article['id'])
The snippet iterates through all the articles in your data dictionary and prints the id of each article. This is only temporary code for testing. You could print the article body with
article['body'], but scanning that much HTML in your console could be a pain. We'll delete the print statement after we're done testing.
Your script so far should look as follows:

endpoint = zendesk + '/api/v2/help_center/{locale}/articles.json'.format(locale=language.lower())
response = requests.get(endpoint, auth=credentials)
if response.status_code != 200:
    print('Failed to retrieve articles with error {}'.format(response.status_code))
    exit()
data = response.json()
for article in data['articles']:
    print(article['id'])
Replace all the placeholders with actual values and run the script again from the command line:
$ python3 make_backup.py
You should get a list of up to 30 article ids confirming that the articles were retrieved successfully. You won't see more than 30 articles even if you have more because the API limits the number to prevent bandwidth and memory issues. In the next section, you change the script to paginate through all the results.
Paginate through the results
In this section, you paginate through the article results to see all the articles. The JSON returned by the endpoint may only contain a maximum of 30 records, but it also contains a
next_page property with the endpoint URL of the next page of results, if any. Example:
"next_page": "", ...
If there's no next page, the value is null:
"next_page": null, ...
Your code will check the
next_page property. If not null, it'll make another request using the specified URL. If null, it'll stop. To learn more, see Paginating through lists.
Insert the following line (in bold) after the endpoint variable declaration:
endpoint = zendesk + '/api/v2/help_center/{locale}/articles.json'.format(locale=language.lower())
while endpoint:
Indent all the lines that follow the while statement.
while endpoint:
    response = requests.get(endpoint, auth=credentials)
    if response.status_code != 200:
        print('Failed to retrieve articles with error {}'.format(response.status_code))
        exit()
    data = response.json()
    for article in data['articles']:
        print(article['id'])
Add the following statement as the last line and indent it too:
endpoint = data['next_page']
This sets up a loop to paginate through the results. While the endpoint variable is true -- in other words, while it contains a url -- a request is made. After getting and displaying a page of results, the script assigns the value of the next_page property to the endpoint variable. If the value is still a url, the loop runs again. If the value is null, such as when the API returns the last page of results, the loop stops.
Your modified code should look as follows:
while endpoint:
    response = requests.get(endpoint, auth=credentials)
    if response.status_code != 200:
        print('Failed to retrieve articles with error {}'.format(response.status_code))
        exit()
    data = response.json()
    for article in data['articles']:
        print(article['id'])
    endpoint = data['next_page']
Run the script again from the command line:
$ python3 make_backup.py
You should get a list of all the articles in the language in your knowledge base.
The next step is to make copies of the articles on your computer.
Write the articles to files
In this section, you create HTML files of all the articles in your knowledge base.
The twist here is that the
body attribute of an article only contains the HTML of the body, as its name suggests. The article's title isn't included. The title is specified by another attribute named
title. You'll add the title to the article's HTML before writing the file.
Replace the following test line:
print(article['id'])
with the following lines:
if article['body'] is None:
    continue
title = '<h1>' + article['title'] + '</h1>'
filename = '{id}.html'.format(id=article['id'])
with open(os.path.join(backup_path, filename), mode='w', encoding='utf-8') as f:
    f.write(title + '\n' + article['body'])
print('{id} copied!'.format(id=article['id']))
Make sure to indent them at the same level as the print statement. The lines perform the following tasks:
- Skips any blank articles
- Creates an H1 tag with the article title
- Creates a file name based on the article ID to guarantee unique names
- Creates a file in the folder the script created earlier using the backup_path variable
- Combines the title, a line break, and the article body in one string
- Writes the string to the file
- Prints a message to the console so you can track the progress of the backup operation.
Your modified code should look as follows:
for article in data['articles']:
    if article['body'] is None:
        continue
    title = '<h1>' + article['title'] + '</h1>'
    filename = '{id}.html'.format(id=article['id'])
    with open(os.path.join(backup_path, filename), mode='w', encoding='utf-8') as f:
        f.write(title + '\n' + article['body'])
    print('{id} copied!'.format(id=article['id']))
If the article body is blank, the continue statement on the third line skips the rest of the steps in the for loop and moves on to the next article in the list. The logic prevents the script from copying any empty drafts in your Help Center that might be acting as placeholders for future content. It also prevents the script from breaking when it tries to concatenate a string with a Python 'NoneType' in the snippet's next-to-last line (title + '\n' + article['body']).
Run the script again from the command line:
$ python3 make_backup.py
The script writes all the articles in your knowledge base to your language folder. Open a few files in a text editor to check the HTML.
Create a backup log
In this section, you create a backup log for easier reference later. The log will consist of a csv file with File, Title, and Author ID columns and a row for each article that's backed up.
Import the native csv library at the top of the script:
import csv
Create the following log variable just before the first endpoint variable declaration:
log = []
endpoint = zendesk + '/api/v2/help_center/ ... ...
The variable declares an empty list. After writing each article to file, the script will update the list with information about the article.
Add the following log.append() statement immediately following, and at the same indent level as, the print statement:
print('{id} copied!'.format(id=article['id']))
log.append((filename, article['title'], article['author_id']))
After writing an article, the script appends a data item to the log list. The double parentheses are intentional: you're appending a Python tuple, a kind of list that uses parentheses. The csv library uses tuples to add rows to a spreadsheet. Each row consists of a filename, title, and author id.
Add the following lines at the bottom of the script. The first line should be flush to the margin (no indent and no wrap):
with open(os.path.join(backup_path, '_log.csv'), mode='wt', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(('File', 'Title', 'Author ID'))
    for article in log:
        writer.writerow(article)
After writing all the articles, the script creates a file called _log.csv. The underscore ensures the file appears first in any file browser. The script adds a header row and then a row for each article in the log list.
Code complete
Your completed script should look as follows. A copy of the script is also attached to this article.
Use the command line to navigate to your backups folder and run the script:
$ python3 make_backup.py
The script makes a backup of your knowledge base in a language folder. It also creates a log file that you can use in a spreadsheet application.
Restoring articles
You can restore any backed up article with a second script that reads the content of each file, parses it into an HTML tree to extract the title and body for Help Center, and uses the API to update the article in Help Center.
The script in this section updates existing articles; it doesn't create new ones. To create, it would need to be modified to use a different endpoint, as well as to specify a section and author for the article.
You'll need version 2.4.2 or greater of the requests library. To check your version, run
$ pip show requests at the command line. To upgrade, run
$ pip install requests --upgrade.
If you don't already have Beautiful Soup, you'll need to install it. Beautiful Soup is a Python library for parsing, navigating, searching, and modifying HTML trees. To install Beautiful Soup:
At the command line, enter:
$ pip install beautifulsoup4
The command downloads and installs the latest version of Beautiful Soup.
Install lxml, an HTML parser that works with Beautiful Soup:
$ pip install lxml
Beautiful Soup works with a number of parsers. The lxml parser is one of the fastest.
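To give a taste of the parsing step the restore script performs, here's a minimal sketch. It uses the built-in html.parser for brevity (the attached script may use lxml), and the sample HTML mirrors the title-plus-body layout that make_backup.py writes; treat the variable names as illustrative rather than the script's actual code.

```python
from bs4 import BeautifulSoup

# Sample of what make_backup.py writes: an <h1> title followed by the body HTML
html = '<h1>Article title</h1>\n<p>Article body.</p>'
tree = BeautifulSoup(html, 'html.parser')

# Pull the title back out of the leading <h1> the backup script added
h1 = tree.find('h1')
title = h1.get_text()

# Drop the <h1> so only the body HTML remains for the article's body attribute
h1.decompose()
body = str(tree).strip()
```

The title goes back into the article's title attribute and the remaining HTML into its body attribute when updating the article through the API.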
To restore selected articles:
Copy the following script in a new text file, name it restore_articles.py, and save it in your backups folder with your make_backup.py file.
Replace the placeholder values in the Settings section with your own:
- credentials - Your Zendesk Support sign-in email and password. A security best practice is to enter these only before running the script, and then delete them after. Example:
credentials = 'jtiller@example.com', 'pasSw0rd0325'
- zendesk - Your Zendesk Support instance. Make sure to include 'https://'. Example:
zendesk = ''
- backup_folder - A folder name created by the backup script. Example:
backup_folder = '2017-01-04'
- language - A locale corresponding to a subfolder in your backup folder. Example:
language = 'en-us'
- restore_list - An array of article ids. Example:
restore_list = [200459576, 201995096]
Use the command line to navigate to your backups folder and run the script:
$ python3 restore_articles.py
Merci, Charles. This is fantastic :)
Thanks, i'll give this one a go. I've been using a Ruby script for the last year or so, but this one looks like it is better.
Cheers
Thanks, this is really helpful.
Just in case, did anyone write a restore script? This would be really nice for translating the knowledge base.
I'd been thinking about how to "back up" our production knowledgebase to our sandbox.
Once the backup is complete as per this method, could we restore to our sandbox?
Also, +1 to @Roland's question about a restore script.
This is awesome. Thank you so much
From what I can tell (on this page as well as on the API definition page for Knowledge Base), you cannot pull the "last updated by" value of the KB. Does anybody know a way to do this?
@Adam Goolie Gould
Hey Adam!
"Technically" you could apply your backup from your production knowledge base to your Sandbox. However, the benefit of doing so may not necessarily be worth the added steps. This is due to the Sandbox environment being completely separate from your production instance of Zendesk. What this means is that, you can more easily restore your backup to your production environment (if/when needed) from presumably the same files that you saved on the machine that you performed the initial backup.
So, yes you could do this, I just caution against doing so in effort of implementing some sort of "synced" redundancy as that is not the case with the Sandbox environment.
I hope this information helps!
Cheers,
Fred Thomas | Customer Advocate
@Roland, you'll find scripts and instructions on restoring html files back from localization here:
It's part of a larger article on using the API to automate the first loc handoff.
This is awesome, but this is CRAZY that there is not a way to backup or restore content inside the app. Wrote about that here -
Is there any way, via perhaps the API, to also get the images downloaded as well?
Hi Russur,
There's no image API, but once you've downloaded the articles on your system, a number of Python libraries and techniques can let you read the image URLs in the files and make requests to download them. I like BeautifulSoup for parsing HTML, and Requests to make HTTP requests. You can do a Google search for other options.
As for me, I'd write a script that opened each file and used BeautifulSoup to get the image urls:
Then I'd grab the src attribute in each img tag and use it to make a request for the image file from the server using the Requests library:
Note: I'm checking to make sure the first 4 characters start with http so it's a valid request url.
At this point, this image is in memory on my system. Next, I'd grab the filename from the src attribute and write it to file:
One thing to be careful about: Most web servers only allow browsers to download images. So I'd set a header so my request looked like it's coming from a browser:
Hope this helps.
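Pulling those pieces together, a rough sketch might look like the following. The backup folder path, the User-Agent string, and the flat file layout are assumptions for illustration, not part of the official backup script.

```python
import os

import requests
from bs4 import BeautifulSoup


def download_images(backup_path):
    """Scan each backed-up HTML file and download its remotely hosted images."""
    # Pretend to be a browser so servers that block non-browser clients serve the image
    headers = {'User-Agent': 'Mozilla/5.0'}
    for filename in os.listdir(backup_path):
        if not filename.endswith('.html'):
            continue
        with open(os.path.join(backup_path, filename), encoding='utf-8') as f:
            tree = BeautifulSoup(f.read(), 'html.parser')
        for img in tree.find_all('img'):
            src = img.get('src', '')
            if src[:4] != 'http':      # only follow absolute urls
                continue
            response = requests.get(src, headers=headers, stream=True)
            if response.status_code != 200:
                continue
            image_name = src.split('/')[-1]
            with open(os.path.join(backup_path, image_name), mode='wb') as image_file:
                for chunk in response.iter_content(chunk_size=8192):
                    image_file.write(chunk)
```

You'd call it with a language folder created by the backup script, for example download_images('2017-01-04/en-us').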
Also, is there any way to get the Section names, that contain the html files?
Thanks
That did great! Here is my hacked code that I added. (This goes above the line:
endpoint = data['next_page']
# begin included code to search and pull out images
tree = BeautifulSoup(article['body'], "html.parser")
images = tree.find_all('img')
for image in images:
    src = image['src']
    if src[:4] != 'http':
        continue
    response = session.get(src, stream=True)
    file_name = src.split('/')[-1]
    image_dir = src.split('/')[-2]
    file_name = str(article['id']) + '_' + image_dir + '_' + file_name
    with open(os.path.join(backup_path, file_name), mode='wb') as f:
        for chunk in response.iter_content():
            f.write(chunk)
# End of included code
I also added the following towards the top:

from bs4 import BeautifulSoup
This will work to get the graphic as well as the directory name that Zendesk created for the image. I will probably update this to get the Section ID Name, and maybe recreate the directory structure. Thanks!
Is there a way to get the Title of the Section, that the article is contained in? I was trying to get the Section API call to work, but having no luck?
Hi Russur,
You could sideload the sections with your articles by adding ?include=sections to the API call, as in the following example:
endpoint = zendesk + '/api/v2/help_center/{locale}/articles.json?include=sections'.format(locale=language.lower())
Then you can associate the section_id in each article record with a section record, which will contain the section title.
For more info on how sideloading works, see
For a tutorial that covers sideloading along the way and a technique to associate records, see Getting large data sets, especially the section on sideloading.
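To make the association concrete, here's a small sketch with sample records standing in for the API response; the name field on section records is my reading of the Help Center API, so verify it against the API docs.

```python
# Trimmed sample of what the API returns with ?include=sections
data = {
    'articles': [
        {'id': 1, 'title': 'Resetting passwords', 'section_id': 100},
        {'id': 2, 'title': 'Billing FAQ', 'section_id': 200},
    ],
    'sections': [
        {'id': 100, 'name': 'Account management'},
        {'id': 200, 'name': 'Billing'},
    ],
}

# Build a lookup from section id to section title out of the sideloaded records
section_titles = {section['id']: section['name'] for section in data['sections']}

for article in data['articles']:
    section = section_titles.get(article['section_id'], 'Unknown section')
    print('{title} ({section})'.format(title=article['title'], section=section))
```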
Every time I try to run the make_backup.py python3 script, I keep getting the following:
Failed to retrieve articles with error 401
Hi David, a 401 points to a problem with the authentication credentials. Can you double-check the Zendesk email and password you entered on line 7 under "Code complete" above?
The other thing to check is to see if your Zendesk is configured to allow passwords for API requests. In the admin interface, click the Admin button (gear icon) in the lower-left, then Channels > API. At the bottom of the page there should be a checkbox to enable password access.
The enable password access checkbox is checked. My credentials are fine as I can get into Zendesk and manage articles. API is not liking something. We use gmail accounts for authentication into zendesk. Could this be the problem?
That could be the problem. Because you're authenticated with a Google password, your Zendesk profile might not have a Zendesk password. If you're an admin, you should be able to add one yourself. See Resetting user passwords. Use the Set option instead of the Reset one.
That did the trick. Everything seems to be working fine now. Thanks.
OK, I got all articles exported fine. The articles contains all of the images and video attachments on them. I am now playing with the idea having to restore a deleted article using curl. I am using the following syntax:
curl{id}/articles.json \
-d '{"article": {"title": "How to take pictures in low light", "body": "Use a tripod", "locale": "en-us" }}' \
-v -u {email_address}:{password} -X POST -H "Content-Type: application/json"
I can see {id} is the section id to which the article belong to.
title is the actual title of the article being imported.
body - do not know what that is or how to obtain that information.
locale - language source
password - needed for authentication.
I successfully created the article, but do not see the body of the article.
Hi David, I'm not sure I understand the question. The `body` attribute specifies the content of the article. It's probably going to be one long JSON string, so with curl it's probably easier to import it from a file. See Move JSON data to a file in our curl article.
When I run the make_backup.py program, I get a bunch of html files, which are the articles. I open one of the html files and I see the content, including any attachments. How do I import these html file(s) when I need to? Is there a way via a curl command that will allow it? I have all the information I need, like the title, the section id the article belongs to, the actual html file to be imported, the article id, and its position. So there is a scripted way to get the articles out, but is there a scripted way to get them back in?
Ah, I see. You'll need a script to parse the content of each HTML file, convert it to JSON, and post the data to HC. cURL is probably not the most efficient tool for this if you have more than a handful of articles.
In Python, you can use the BeautifulSoup library to parse the content. One technique is described in Add the article translations, which is part of a larger tutorial on publishing localized articles on Help Center.
I would be using the curl command on a per-document basis, unless you know of an easier way of doing this. After I run the make_backup.py python script, I have a log file containing a lot of information like the title, the section id the article belongs to, the html filename of the article, and its position. I also have all of the articles in html format. I open these html files and see the article, its images, and video attachments. Are you saying the article (html file) has to be converted to json in order to be uploaded to HC? The script above downloaded the articles perfectly. Is there a similar script, or curl command, to allow me to upload a document that has been deleted? I would have to locate the html file that represents the deleted article based on the title. Once the html file has been identified as the document to be uploaded, I would think a curl or python script would do the trick.
Hi David,
The backup script doesn't actually download the HTML files. It downloads JSON data, then decodes and writes the data to files (lines 27 to 33 in the "Code complete" sample above).
The API uses JSON as its data exchange format. The process of uploading articles to Help Center is the reverse of downloading the articles. Each article has to be converted to JSON and then sent to the API in that format. After the API receives the data, the JSON is decoded and published.
In the cURL example you gave above, the "body" element is JSON (as is the entire "-d" line):
For more on encoding and decoding JSON, see the article Working with JSON.
Hope this helps.
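To give the upload direction a concrete shape, here's a hedged sketch: it wraps an article's backed-up HTML in JSON and PUTs it to the article's translation endpoint. The endpoint path follows my reading of the Help Center API, and the helper names, subdomain, and credentials are placeholders, not an official restore script.

```python
import json

import requests

ZENDESK = 'https://your_subdomain.zendesk.com'   # placeholder subdomain


def build_translation_payload(html):
    """Wrap the article HTML in the JSON shape the translations endpoint expects."""
    return json.dumps({'translation': {'body': html}})


def update_article(article_id, html, credentials, locale='en-us'):
    """PUT the JSON-encoded body back to Help Center for an existing article."""
    endpoint = '{}/api/v2/help_center/articles/{}/translations/{}.json'.format(
        ZENDESK, article_id, locale)
    return requests.put(endpoint,
                        data=build_translation_payload(html),
                        headers={'Content-Type': 'application/json'},
                        auth=credentials)
```

For example, update_article(200459576, open('200459576.html').read(), ('me@example.com', 'password')) would push one backed-up file.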
The code for backing up all of the Zendesk articles is great. Is Zendesk working on a python script to import the html files, convert them to json, and then publish them? I have found that you can open the html file in a text editor and copy all of the tags that make up the body (content). Create a new article in Zendesk, choose the section where the article is to be placed, give it the same title as before, click the <|> source code button, and paste what you copied from the html file earlier; the article shows up fine with images and any embedded videos. Easy. Just wanted to see if Zendesk was going to create a python script to import the backed up files generated from the Zendesk make_backup.py script. Thanks.
Thanks for this! I was able to backup the majority of articles.
Apart from the articles which are publicly available, we also have some sections which are only available for logged in users. These articles don't seem to be included in the backup. Is there any way to do this?
Hi Jasper,
The articles returned by the API depend on the user role of the person making the API request. The API returns only the articles that the requesting agent, end user, or anonymous user can normally view in Help Center when using the web UI. To back up a Help Center with restricted content, you should ideally have the user role of Help Center manager to get all the content.
Administrators are Help Center managers by default. You can add Help Center managers by giving agents Help Center manager privileges. See Understanding Help Center roles and setting permissions.
Let me know if that's not the problem.
Charles
Hi Charles,
I already have the role of Administrator, so that's probably not the problem.
Is there anything else I can do to solve this?
#include "LPD8806.h"#include "SPI.h"// Example to control LPD8806-based RGB LED Modules in a strip/*****************************************************************************/// Choose which 2 pins you will use for output.// Can be any valid output pins.int dataPin = 2; int clockPin = 3; // Set the first variable to the NUMBER of pixels. 32 = 32 pixels in a row// The LED strips are 32 LEDs per meter but you can extend/cut the stripLPD8806 strip = LPD8806(32, dataPin, clockPin);// you can also use hardware SPI, for ultra fast writes by leaving out the// data and clock pin arguments. This will 'fix' the pins to the following:// on Arduino 168/328 thats data = 11, and clock = pin 13// on Megas thats data = 51, and clock = 52 //LPD8806 strip = LPD8806(32);void setup() { // Start up the LED strip strip.begin(); // Update the strip, to start they are all 'off' strip.show();}void loop() { // fill the entire strip with... colorWipe(strip.Color(127,0,0), 10); // red colorWipe(strip.Color(0, 127,0), 10); // green colorWipe(strip.Color(0,0,127), 10); // blue}\// fill the dots one after the other with said color// good for testing purposesvoid colorWipe(uint32_t c, uint8_t wait) { int i; for (i=0; i < strip.numPixels(); i++) { strip.setPixelColor(i, c); strip.show(); delay(wait); }}
hi!
What sql librries are for c++? what is the most popular?
If you are using Visual C++, there are a lot of libraries available.
Oracle is probably the most popular.
If you're using Visual Studio, you would add a reference to System.Data.OracleClient.
...then (in your code) add
using namespace System::Data::OracleClient;
If you want to support any sql-compliant database then use ODBC, which is the oldest and most common way to access them. There are a few free c++ odbc wrapper classes, just use google and you will easily find them. Also you might want to read an odbc tutorial, which again can easily be found with google.
thanks for the answers ...
|
- (DONE, see record)
- (DONE There is an errorHook and a resultHook now)
- (DONE in 1.5)
- line numbers (be able to turn on / off)
- be able to specify some .jar files that will be included into the classpath (DONE in 1.6)
- enable standard-de-facto ctrl+z for undo and ctrl+y for redo
- tabs
- code formatting
Debug features:
- step-by-step execution of statements
- improve variables insight for clarity
5 Comments
Daniel Serodio
I think the ideal scripting console is IPython (interactive Python), I think we should imitate it as much as possible.
rdeman
It would be so great if the Groovy Console could run 1 single session, just like the beanshell console. It really works as an interpreter. You can keep all your variables during the same session and don't have to pres "run" to re-execute the whole script from scratch each time.
Dave Fuller
I think it would be best if the shell could take a script name as an argument (i.e., "grails shell <scriptname>") and have the shell execute the script and exit, dumping output to stdout. This would be an easy way to perform administrative tasks on the datastore within the context of the application. It would also allow these tasks to be run in an unattended manner.
Bradley Slavik
I cannot find where to place a bug report for GroovyConsole. This is minor, but I was going through the Barclay and Savage book, and one of the questions (Chapter 2, 3b) asked if a$ was a valid Groovy identifier. I thought not, but GroovyConsole lets it in.
Here try it:
def a$ = 7
println (a$ + 4)
Sorry. I looked around for maybe 30 minutes and could not find where to register GroovyConsole bugs. Maybe that is because it is almost perfect?
Paul King
Hi Bradley, you enter bugs for GroovyConsole in the standard Groovy Jira:
Just select the component type to be Groovy Console.
What is the bug by the way?
|
Introduction
There are two ways to read data from a SQL database in to GraphLab Create:
DBAPI2 support is a new feature and currently released as beta, but we strongly encourage you to try it first. The ease of getting started with DBAPI2 far surpasses using ODBC.
DBAPI2 Integration
DBAPI2 is a standard written to encourage database providers to expose a common interface for executing SQL queries when making Python modules for their database. Common usage of a DBAPI2-compliant module from Python looks something like this:

import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
c.execute("SELECT * FROM stocks")
results = c.fetchall()
(example adapted from here)
SFrame offers a DBAPI2 integration that enables you to read and write SQL data in a similar, concise fashion. Using the connection object in the previous example, here
is how you would read the data as an SFrame using the
from_sql method:
import graphlab as gl
stocks_sf = gl.SFrame.from_sql(conn, "SELECT * FROM stocks")
If you would like to then write this table to the database, that's easy too,
using the
to_sql method.
to_sql simply attempts to append to an already
existing table, so if you intend to write the data to a new table in your
database, then you must use the "CREATE TABLE" syntax, including the type
syntax supported by your database. Here's an example of creating a new table
and then appending more data to the table.
import datetime as dt
c = conn.cursor()
c.execute('''CREATE TABLE more_stocks
             (date text, trans text, symbol text, qty real, price real)''')
conn.commit()
stocks_sf.to_sql(conn, "more_stocks")
# Append another row
another_row = gl.SFrame({'date': [dt.datetime(2006, 3, 28)],
                         'trans': ['BUY'],
                         'symbol': ['IBM'],
                         'qty': [1000],
                         'price': [45.00]})
another_row.to_sql(conn, "more_stocks")
That is all there is to know to get started using SFrames with Python DBAPI2
modules! For more details you can consult the API documentation of
from_sql
and
to_sql.
Currently, we have tested our DBAPI2 support with these modules:
This means that our DBAPI2 support may or may not work on other modules claiming to be DBAPI2-compliant. We will be adding more modules to this list as driven by what our users are interested in, so if you are interested in other modules, please try them out and let us know! If there is an issue with using one, please file an issue on our GitHub page and include the error output you received and/or some small code sample that exhibits the error. You can even submit a pull request if you are able to fix the issue.
If your database does not support a DBAPI2 python module, but does support an ODBC driver, keep reading.
ODBC Integration
ODBC stands for "Open Database Connectivity". It is an old standard (first version was released in 1992) that provides a language-agnostic interface for programs to access data in SQL databases. There are a few extra steps to set it up and extra concepts to learn before you start using it, but it remains one of the most universal ways to communicate with SQL databases. The ODBC connector included in SFrame only supports Linux and OS X. Windows is not supported at this time.
ODBC Overview
ODBC provides maximum portability by requiring the database vendor to write a driver that implements a common SQL-based interface. One or more of these drivers are managed by a system-wide ODBC driver manager. This means that in order to use ODBC, you must first install an ODBC driver manager, and then find your database's ODBC driver, download it, and install it into the driver manager. The database itself need not be installed on your computer; it can be installed on a remote machine. It is very important to make sure your ODBC driver works with your database before trying GraphLab Create's ODBC functions to make sure you are debugging the correct problem. The next section will help you do this.
Setting Up An ODBC Environment
The only ODBC driver manager we officially support is unixODBC. You are welcome to try others if you really want to, but we do not guarantee that this will work. If you are so bold, let us know what happened!
Execute this command to install unixODBC on Ubuntu:
sudo apt-get install unixodbc
this on CentOS 6:
sudo yum install unixODBC.x86_64
and this on OS X (if you use Homebrew):
brew install unixodbc
Once you have this installed, try executing this command:
odbcinst -j

The output shows where unixODBC keeps its configuration files. To register your database's ODBC driver with the driver manager, add an entry for it to the DRIVERS file (odbcinst.ini) listed in that output. For example:

[SQLite]
Description=SQLite ODBC Driver
; Replace with your own path
Driver=/path/to/lib/libsqliteodbc.so
Setup=/path/to/lib/libsqliteodbc.so
UsageCount=1

[myodbc]
Description = mySQL ODBC driver
Driver = /path/to/lib/libmyodbc.so
Setup = /path/to/lib/libodbcmyS.so
Debug = 0
CommLog = 1
UsageCount = 1
Setting Up A Data Source
ODBC has the concept of a "data source" (or DSN for "data source name"), which corresponds to a specific database. For example, if you have mySQL installed on your system, you'll need to create a data source to point to a specific database within that system. To do this, you must add an entry to the file that is responsible for either "SYSTEM DATA SOURCES" or "USER DATA SOURCES" from the output of your "odbcinst -j" command. Here is an example of how you could set up a SQLite DSN, adapted from here:
[sqlite_dsn_name]
Description=My SQLite test database
; corresponds to above driver installation entry
Driver=SQLite
Database=/home/johndoe/databases/mytest.db
; optional lock timeout in milliseconds
Timeout=2000
and an example for mySQL, adapted from here:
[mysql_dsn_name]
Description=myodbc
; Assumes your driver is installed with the name "myodbc"
Driver=myodbc
; Name of the database you want to connect to within mySQL
Database=test
Server=localhost
Port=3306
It's a great idea to test that all of this works before unleashing GraphLab Create on your database. UnixODBC comes with a command line utility called isql that will access your database through ODBC. This is not a very full-featured command line tool, so we only recommend using it for testing if your ODBC setup works. Invoke it like this to access our example mySQL DSN:
isql mysql_dsn_name username password
If you are able to do something simple to your database, then feel free to move on to GraphLab Create's ODBC functions. Just so you don't have to use your brain, here's what your isql output should roughly look like when you do something simple:
$ isql mysql_dsn_name myusername mypassword
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> CREATE TABLE foo (a INTEGER, b INTEGER)
SQLRowCount returns 1
SQL> INSERT INTO foo VALUES(1, 2)
SQLRowCount returns 1
SQL> SELECT * FROM foo
+-----------+-----------+
| a         | b         |
+-----------+-----------+
| 1         | 2         |
+-----------+-----------+
SQLRowCount returns 0
1 rows fetched
If you aren't able to do something like the above, make sure to read the documentation of your specific ODBC driver to see if you missed any part of setup. Since there are so many drivers, we can't possibly test them all and document their many intricacies.
Example: Step-by-Step Instructions for MySQL on OSX
1. Install the ODBC Driver Manager, unixodbc
brew install unixodbc
Note: If you do not have Homebrew installed on OSX, see installation instructions here.
2. Confirm ODBC Driver Manager Installation and Configuration Settings
odbcinst -j
Sample output:
rajat@fourier ~> odbcinst -j
unixODBC 2.3.2
DRIVERS............: /usr/local/Cellar/unixodbc/2.3.2_1/etc/odbcinst.ini
SYSTEM DATA SOURCES: /usr/local/Cellar/unixodbc/2.3.2_1/etc/odbc.ini
FILE DATA SOURCES..: /usr/local/Cellar/unixodbc/2.3.2_1/etc/ODBCDataSources
USER DATA SOURCES..: /Users/rajat/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
This command will also let you know where the .ini files for the drivers and data sources need to be created.
From this, the Drivers go here:
/usr/local/Cellar/unixodbc/2.3.2_1/etc/odbcinst.ini
and User Data Sources (DSN definitions for databases) go here:
/Users/rajat/.odbc.ini.
3. Install MySQL ODBC Driver for Mac
brew install mysql-connector-odbc
This will install in
/usr/local/Cellar/mysql-connector-odbc
4. Find Installed MySQL Driver
We need to find the .so file for the actual driver (so it can be registered with the ODBC Driver Manager), for this installation it is here:
rajat@fourier ~> ll /usr/local/Cellar/mysql-connector-odbc/5.3.2_1/lib
total 14152
-r--r--r--  1 rajat  admin  3623032 Dec 18 12:14 libmyodbc5a.so
-r--r--r--  1 rajat  admin  3619008 Dec 18 12:14 libmyodbc5w.so
We want
libmyodbc5w.so so we can support Unicode.
Now that we know this, we need to register this driver with the ODBC Driver Manager by manually creating the .ini file for the driver in the location we learned earlier from
odbcinst -j.
5. Register driver with ODBC Driver Manager
Manually create the entry for the driver in the DRIVERS .ini mentioned from
odbcinst -j, here is what it should look like:
rajat@fourier ~> cat /usr/local/Cellar/unixodbc/2.3.2_1/etc/odbcinst.ini
[myodbc]
Description = MySQL ODBC Driver
Driver      = /usr/local/Cellar/mysql-connector-odbc/5.3.2_1/lib/libmyodbc5w.so
Setup       = /usr/local/Cellar/mysql-connector-odbc/5.3.2_1/lib/libmyodbc5w.so
Debug       = 0
CommLog     = 1
UsageCount  = 1
6. Create the Database definition as a DSN
Now we need to create the DSN definition in the USER DATA SOURCES location returned by odbcinst -j; from this output we need to edit /Users/rajat/.odbc.ini. Notice that the Driver field below refers to the name of the section added in the previous step (myodbc).
Remember to update the Database, Server, and Port fields appropriately for your machine.
[mysqltest]
Description=myodbc
Driver=myodbc
Database=test
Server=localhost
Port=3306
7. Done! Test from GraphLab Create
import graphlab
conn = graphlab.connect_odbc('DSN=mysqltest;UID=root;PWD=foo')
print conn.dbms_name  # prints: MySQL
Other Resources
Here are some posts that we found to be helpful when testing specific database drivers. Don't follow them blindly, but take them in context. This is just to save you a bit of googling if you're stuck setting up your ODBC environment. When in doubt, always rely on the driver's official documentation.
- SAP HANA DB
- PostgreSQL
- Microsoft SQL Server
- SQLite
- Various drivers
- Help with forming connection strings
Using ODBC Within GraphLab Create
Adding a DSN makes forming your connection string much easier. An ODBC connection string is similar to the database connection strings you may be familiar with, but slightly different. For our running MySQL example, this would be the connection string:
'DSN=mysql_dsn_name;UID=myusername;PWD=mypassword'
Therefore, to connect to this database through GraphLab Create, you would execute this in Python:
import graphlab as gl
db = gl.connect_odbc('DSN=mysql_dsn_name;UID=myusername;PWD=mypassword')
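Since an ODBC connection string is just a semicolon-delimited list of key=value pairs, a tiny helper can assemble one for you. Note that this helper is our own illustration, not part of GraphLab Create:

```python
def odbc_conn_str(**params):
    # Join key=value pairs with semicolons: DSN, UID, PWD, etc.
    return ';'.join('%s=%s' % (k, v) for k, v in params.items())

print(odbc_conn_str(DSN='mysql_dsn_name', UID='myusername', PWD='mypassword'))
# DSN=mysql_dsn_name;UID=myusername;PWD=mypassword
```

This keeps credentials out of hand-built format strings and makes it easy to add optional parameters later.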
As long as you did not receive an error message in that last step, you can read the result of any SQL query like so:
sf = gl.SFrame.from_odbc(db, "SELECT * FROM foo")
Now feel free to use your SFrame as you please! If you would like to write an SFrame to a table in your database, we support creating a new table, and appending to an existing table. Both can be achieved through the same function call:
sf.to_odbc(db, 'a_table_name')
If the table name is found to exist in your database, to_odbc will attempt to append each row to the table it finds. There is nothing sophisticated about this, as to_odbc does not do any sort of type checking or column matching in this case.

If the table name is not found, to_odbc will use a heuristic to pick the best type specific to your database for each column of the SFrame and create the table.
If you find yourself needing to execute arbitrary SQL commands to prepare your environment for a query (and which may not return results), you can call execute_query on your database connection object:
db.execute_query("SET SCHEMA foo_schema")
Notes
We do not support writing all types that are possible to hold in an SFrame, namely list, dict, or image types. This is because there is no clean mapping to an ODBC type.
We support reading all ODBC types except time intervals. Reading SQL time intervals may work for certain drivers, but your mileage may vary so we are not officially supporting it at this time. Also, SFrames do not support timestamps that use fractions of seconds, so the fraction portion of a timestamp will be ignored when reading.
I am generating HTML from a fairly complex transform made of many modules. These modules declare a number of namespaces that apply to different function packages, module-specific named templates and modes, etc., but *not* to either the input elements or result elements (other than possible an XHTML namespace). I have checked and all of these namespaces are accounted for in the exclude-result-prefixes declarations for all the modules involved.
In my transform I'm calling this template, which then applies the following template:
<xsl:template match="...">
  <xsl:apply-templates select="..."/>
</xsl:template>
<xsl:template match="...">
  <meta name="brand" content="XXXX"/>
  <xsl:value-of select="..."/>
</xsl:template>
The resulting element has an unwanted namespace declaration:
<meta xmlns:...="..." name="brand" content="XXXX"/>
The namespace in question is associated with functions and nothing else. The namespace is only declared on <xsl:stylesheet> elements and the prefix is not used on any start tag.
I can't think of anything that would cause this namespace, and only this namespace (out of the several that are declared on each stylesheet), to be output in the HTML.
What could cause this sort of rogue namespace declaration?
This is the fourth article in our Azure series. So far we have looked at deploying a simple Node.js application to the Azure platform, using Pino and Event Hubs to collect logs, and using Stream Analytics to gain real-time business intelligence.
In this post we are taking advantage of the new Web App for Containers service offered by Azure to deploy a containerized web application to the Azure cloud. We also take a look at deploying our own container registry on Azure and using webhooks to link up a basic continuous deployment pipeline. The figure below shows the deployment architecture we are aiming for.
A containerized deployment architecture using Microsoft Azure Web App for Containers.
Requirements
Note: We go into more detail about getting started with Azure in the first post in this series. If you have worked through that post, you will have already laid a lot of the groundwork for what follows. Otherwise, you will just need to follow these instructions.
To start with, you need a Microsoft account to access Azure. Your company administrator can create a login for you, but if you just want to try out Azure you can sign up as an individual for a free trial at azure.microsoft.com/en-us/free/.
With an account, you can set up, view and configure Azure products either via the web-based portal or via Azure CLI. We use the Azure CLI for this exercise. CLI installation instructions are available for macOS, Linux and Windows.
The Azure portal allows you to interact with, and create, Azure resources through a graphical user interface (GUI).
Finally, we need an application to deploy. In the first post of this series, we discussed a sample application architecture using Hapi and Typescript, and we continue to use this as our canary application. You can drop in any Node.js or other application you like. Grab a copy of the sample application using git by running the following command:
git clone git@github.com:nearform/azure-typescript-hapi.git
A Proof-Of-Concept Deployment
Before we do a deep dive, let's walk through a proof-of-concept deployment of a basic NGINX web server Docker image from Docker Hub. This lets us get used to the setup, whilst isolating any potential problems before trying a more complex setup.
Log in to the Azure CLI using the following command, substituting your own login details:
az login -u <username> -p <password>
Setting up a deployment
Before you deploy an application, there are various organisational and administrative constructs to create. Compared with some other cloud providers this does feel a bit like additional overhead, but there is a case to be made for making explicit setup choices. Undoubtedly the structure is beneficial for more complex deployments. Fortunately, if you have followed through previous blog posts, or already work with Azure you already have most of these in place.
1. Create a Deployment User
This is separate to your Microsoft account and can be shared with members of your team, CI bots and so on. Check if you already have one registered using the CLI with the following command:
az webapp deployment user show
Otherwise, create one by running:
az webapp deployment user set --user-name <username> --password <password>
2. Create a Resource Group
Create (or choose, if you already have one) a resource group. A resource group is a logical grouping of resources associated with a geographical region. You can find more information on the significance of resource groups here. Choose a name and location for your resource group (in this example we use West Europe).
List existing resource groups with the following command:
az group list
If there is not one already created, or you want to create a new one, run the following command:
az group create --name <resource-group-name> --location "West Europe"
Tip: You can also register your resource group (as well as other settings encountered later) as the default resource group in the CLI using:
az configure --defaults group=<resource-group-name>
This saves you having to specify the group for lots of CLI commands. In our examples below we do not assume any defaults are set and we explicitly specify all settings.
3. Create a Service Plan for your Resource Group
Create (or re-use) a service plan associated with this resource group. Service plans define the size and features, and therefore cost, of the server farm provisioned. In this example we use a Standard level plan (--sku S1). Create a service plan by running the following command:
az appservice plan create --name <service-plan-name> --resource-group <resource-group-name> --sku S1 --is-linux
Note: Here we use the is-linux flag. We must have a Linux service plan to use containers and so we have to use at least a Basic level service plan.
Deploy a Containerized Application
Now that all of the infrastructure is in place, let's go ahead and deploy our application. Here we are going to use a simple hello world Docker image from Kitematic (kitematic/hello-world-nginx) which uses NGINX to serve a static site.
Run the following command to deploy your application:
az webapp create --resource-group <resource-group-name> --plan <service-plan-name> --name <app-name> --deployment-container-image-name kitematic/hello-world-nginx
Now run the browse command and the CLI opens a browser window showing our freshly minted application.
az webapp browse --resource-group <resource-group-name> --name <app-name>
You should see something like the following:
Tip: If you do not see the kitematic hello world page, try checking the logs for your webapp. Use the following command to view logs in your terminal:
az webapp log tail --name <app-name> --resource-group <resource-group-name>
Congratulations! We have successfully deployed a containerized app to Azure Web Apps.
Whilst this demonstrates a proof of concept, it is likely that you will have at least some container images that you do not want to host publicly on Docker Hub. Fortunately, Azure integrates with any Docker container registry. This allows you to choose where you store your container images, for example in a private cloud or on a dedicated CI/CD server. In the next section we take a look at setting up our own private container registry using Azure Container Registry.
Creating a registry with Azure Container Registry
It is very quick to set up a private Docker registry with Azure Container Registry. We use the service level Managed_Basic as we want to use the webhook support this provides. To create a registry, choose a name and a resource group to associate it with. You can use the same resource group that we used in the previous section. A registry requires a storage account, but one will be created when you create a registry if you do not already have one.
Run the following command to create a registry:
az acr create --name <container-registry-name> --resource-group <resource-group-name> --sku Managed_Basic
Azure automatically creates an admin account in a disabled state for your repository. Run the following command to enable it:
az acr update -n <container-registry-name> --admin-enabled true
You can now access registry credentials with the command:
az acr credential show -n <container-registry-name>
Now that your registry is set up, check its contents with the following command:
az acr repository list -n <container-registry-name>
At this stage you see an empty list, as we have not yet put any images into our registry. Our next task is to create a Docker image, and push it up to our registry, ready to deploy.
Setting up a Containerized Application
We use the same Node.js app that we used in the first post in this series, but here we will be using the docker branch. Grab a copy of the repository and check out the docker branch by running:

git clone git@github.com:nearform/azure-typescript-hapi.git
cd azure-typescript-hapi
git checkout docker
The main difference versus previous posts is that we have introduced a Dockerfile. We won't go into too much detail about how Docker works here, as it is out of scope. If you want to use your own Node.js application, and do not currently have a Dockerfile set up, here is a great how-to guide from the Node.js foundation. To make images you will need to have Docker running (you can find installation instructions here).
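For readers without a Dockerfile to hand, here is roughly what one for a Node.js app like this looks like. The base image, port and start command below are assumptions for illustration; the repository's own Dockerfile and the guide linked above are authoritative:

```dockerfile
# Build on an official Node.js base image
FROM node:8-alpine
WORKDIR /usr/src/app
# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install
# Copy the application source and expose the app's port
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```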
1. Log in to the Container Repository
First you will need to log in to the container registry you set up in the previous section. The Azure CLI provides a convenience command for logging into Azure Container Registry registries for you. Simply run:
az acr login -n <container-registry-name>
2. Make an Image of the Application
Make an image of the application to store in our repository. To do this, cd to the application directory and run the command:
docker build -t <image-name> .
This builds an image of the current directory using the application's Dockerfile, and tags it with the image name provided. You can check that your image has built successfully by running the following command:
docker run -it --rm -p 3000:3000 <image-name>
You should see your application running on localhost:3000. In the terminal window use Ctrl-C to terminate your application.
Note: If you are not using the sample application, you will need to change the port number (3000) to whichever port your application exposes.
3. Tag the Image
Tag this newly created image to associate it with our new remote repository on Azure. Use the following command:
docker tag <image-name> <container-registry-name>.azurecr.io/<image-name>
Note: Azure Container Registry supports multi-level namespaces. We could equally tag our image with <container-registry-name>.azurecr.io/my/new/image. This allows more complex hierarchical organisation of images.
4. Push the Image to the Remote Repository
Run the following command to push the image to the remote repository (this may take a while on slower connections):
docker push <container-registry-name>.azurecr.io/<image-name>
Check it is stored in our registry by running the command:
az acr repository list -n <container-registry-name>
Deploying Our Application
At this stage we have created a containerized app, and successfully uploaded it to our newly created container registry. The last thing to do is to deploy our app to our Azure Web App instance (which we created earlier). To do this we must change a few of the settings for our Web App instance to let it know where to find our new container registry.
Note: From here we will switch to using the abbreviated options -g for resource group and -n for app name for brevity.
1. Change Container Registry Settings
Change the container registry settings for the Web App instance with the following command:
az webapp config container set -g <resource-group-name> -n <app-name> \
  --docker-registry-server-url https://<container-registry-name>.azurecr.io \
  --docker-custom-image-name <container-registry-name>.azurecr.io/<image-name>
Note: Here Azure automatically includes your registry credentials because we are using an Azure registry. If you are using a non-Azure private registry, you will also need to set your credentials with the --docker-registry-server-user and --docker-registry-server-password options.
Tip: If you create a new web app at this stage, you will need to create it with dummy container data (via the --deployment-container-image-name or --runtime options) before you set up a custom container registry. This seems like an oversight in the CLI design. We raised this with Microsoft and they told us that they're currently working on improving the CLI in this area, as well as better documentation around this feature, so watch this space!
2. Assign a Port
We tell our Web App instance which port our application exposes, so that this port is exposed to external traffic by Docker. For our sample app, this is port 3000. Run the following command:
az webapp config appsettings set -g <resource-group-name> -n <app-name> --settings WEBSITES_PORT=3000
Now with the settings changed, restart your application using the command:
az webapp restart -g <resource-group-name> -n <app-name>
Then open the application in your browser again:

az webapp browse -g <resource-group-name> -n <app-name>
Et Voilà, your newly deployed container is available in your browser.
Tip: If your browser is showing an error message at this point, try running your Docker image on your local machine. Pull it from your repository with the command:
docker run -it --rm -p 3000:3000 <container-registry-name>.azurecr.io/<image-name>
You can also inspect logs for your application with the command:
az webapp log tail --name <app-name> --resource-group <resource-group-name>
To recap, we carried out the following tasks:
- Set up a private container repository with Azure Container Registry,
- Pushed our own container images to it,
- Used those images to deploy our own Azure Web App.
So far so good! However, most projects these days look to deploy on an increasingly frequent basis, and it would be great to be able to automate this. Thankfully Azure supports various continuous deployment options. In the next section we will look at one in particular: using webhooks with our Azure Container Repository to automate new deployments of our application.
Continuous Deployment with Azure Container Registry
To set up continuous deployment from our registry, we simply need to set up a push webhook to our Web App instance to let it know a new image is available. The first thing we do is enable the continuous deployment setting for our Web App instance with the command:
az webapp deployment container config --enable-cd true -g <resource-group-name> -n <app-name>
The return value from the above command gives a setting with the name CI_CD_URL, which contains a long URI looking something like:

https://$<app-name>:ciHitjfZQ3CyDmTYTraLQKLeeca1H5KwhxukHauG2Ts5yPotP0JTq4EAFJNN@<app-name>.scm.azurewebsites.net/docker/hook
This is the inbound URI for webhooks and we need it to set up the webhook from our registry. You can always re-fetch this URI later with the command:
az webapp deployment container show-cd-url -g <resource-group-name> -n <app-name>
With this URI go ahead and create a webhook from your registry using the following command.
Tip: Wrap your URI in single quotes as they tend to include a $ character.
az acr webhook create -n <webhook-name> -r <container-registry-name> --uri <cd-uri> --actions push
Note: If you try and create more webhooks than your plan allows, you might see an unhelpful error message along the lines of 'unicode' object has no attribute 'get'. Running the command again with the --debug flag gives more information on the underlying problem.
The above command sets up a push action webhook, which fires when you push new images to your registry. You can read more on Docker webhooks here. Now that you have configured a webhook, any changes you push up to your registry are automatically released to your web app; we have a basic CD pipeline in place. Go ahead and test this by making some changes to your original Node.js app, and then rebuilding and pushing the image using the commands:
docker build -t <image-name> .
docker tag <image-name> <container-registry-name>.azurecr.io/<image-name>
docker push <container-registry-name>.azurecr.io/<image-name>
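Those three commands can be wrapped in a small release script. The registry and image names below are hypothetical, and the script defaults to a dry run that only echoes the docker commands; set DOCKER=docker to execute them for real:

```shell
#!/bin/sh
# Rebuild, retag and push an image; the ACR webhook then redeploys the web app.
# DOCKER defaults to 'echo' so this is a dry run; set DOCKER=docker to execute.
DOCKER="${DOCKER:-echo}"
REGISTRY="myregistry"             # hypothetical container registry name
IMAGE="azure-typescript-hapi"     # hypothetical image name

$DOCKER build -t "$IMAGE" .
$DOCKER tag "$IMAGE" "$REGISTRY.azurecr.io/$IMAGE"
$DOCKER push "$REGISTRY.azurecr.io/$IMAGE"
```

Keeping the fully qualified tag in one place avoids the easy mistake of pushing to a tag the webhook is not watching.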
Your updated app is deployed automatically.
Tip: If you don't see changes deploying automatically, you can check your webhook is firing, and inspect request and response objects, in the Azure portal.
Cleanup
We have reached the end of our example. With any luck you will now have a containerized application of your choice deploying automatically to Azure. In order to tear down the infrastructure we have used for this example, we can simply delete the resource group that contains it. This will delete all resources associated with that group, and delete any service plans that now have no linked resources. Delete a resource group by running the following command:
az group delete --name <resource-group-name>
Note: If you are using an existing resource group, you can just delete the specific resources we have created via the Azure portal.
Summary
We have successfully deployed a containerized application using Azure Web Apps for Containers, and set up a simple continuous deployment pipeline for that application. From this platform we can go on to make use of the other built-in features of Azure Web Apps, such as on-demand vertical and horizontal scaling, and performance testing. However these go beyond the scope of this post.
We have covered the basics of the Azure CLI commands for interacting with web application containers, and container registries. Full reference documentation is available here. Although we have chosen to use the CLI for this example, we could equally have set up this architecture using the GUI available at the Azure portal.
Supporting custom container deployments is a big step forward for the Azure Web App product. For many projects it represents the sweet spot between a typical platform-as-a-service offering and wrestling with full blown container orchestration.
As the service has only recently achieved general availability, there are a few rough edges with the documentation and CLI. However our interactions with the Microsoft team make it clear they're working hard to squash bugs and refine an already top-notch developer experience with Web Apps for Containers.
Finally it’s fantastic to see Microsoft’s continued commitment to open source software in general and Docker in particular.
Response to June 2012 article:
The article is coming up on its first birthday, but it’s on the web, and I found it by mistake while searching for an unrelated topic.
The argument
The argument of the author, a teacher of Java, was that the Django urlconf system is too verbose and unwieldy when dozens, or even hundreds of urls exist in a single app.
I can deal with [“large and unwieldy”] through good practices, but who wants to write a line in the urls.py file each time a page is added? Certainly not me.
The solution, he states, is to make one master url pattern to rule them all, where the hypothetical “.dj” extension stands in place of “.html” for the sake of demonstration:
urlpatterns = patterns('',
    url(r'^account/(?P<path>.*)\.dj(?P<urlparams>/.*)?$', 'account.views.route_request'),
)
And a generic view function to rule them all:
import sys

from django.http import HttpResponse, HttpResponseRedirect, Http404
from django.shortcuts import render_to_response

def route_request(request, path, urlparams):
    parameters = urlparams and urlparams[1:].split('/') or []
    funcname = 'process_request__%s' % path
    try:
        function = getattr(sys.modules[__name__], funcname)
    except AttributeError:
        return render_to_response('account/%s.dj' % path, {'parameters': parameters})
    return function(request, parameters)
This way, the effort of designing urls can be shipped off to a single top-level view, which dynamically looks up an existing module function in the pattern "process_request__urlpathname". If the function lookup fails, it looks for a template named "account/urlpathname.dj" and renders it.
In the case of the “process_request__%s” secondary view functions, these all exist in the views.py module.
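Stripped of the Django machinery, the dispatch scheme reduces to a getattr lookup against the current module. A minimal standalone sketch (function and path names here are hypothetical):

```python
import sys

def process_request__profile(request, parameters):
    # stand-in for one of the author's secondary view functions
    return 'profile page'

def route_request(path):
    # look up a module-level function named after the client-supplied path
    funcname = 'process_request__%s' % path
    return getattr(sys.modules[__name__], funcname, None)

# a matching name resolves to the function; anything else falls through
# (in the author's version, to the template-rendering fallback)
print(route_request('profile') is process_request__profile)  # True
print(route_request('no_such_page'))                         # None
```

Note that any module-level callable carrying the prefix becomes a public endpoint, whether or not it was ever meant to be one.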
The problems
Philosophy
From The Zen of Python:
Explicit is better than implicit
A proposal for a master view that executes arbitrary functions from an app is effectively contrary to the entire design philosophy. In order for a developer to know what his app can serve to a client, he has to tediously sift through two separate file locations:
- an unspecified number of sub-folders of template files
- the views.py module
Django’s url system is designed with a couple of very specific features:
- urlpatterns explicitly allow access to executable code.
- urlpatterns can be mounted via the provided include() utility, allowing fancy features such as url namespacing and a plural number of instances of a reusable third-party app.
- urls have explicit names which can be reversed by the django.core.urlresolvers.reverse function, allowing developers to refer to urls by name, not by ephemeral hardcoded paths.
- The developer doesn’t automatically publish every function in their views.py namespace. This includes imported names, ranging from utilities to Django base classes. Clients could gain direct execution access to arbitrary Django framework callables, which will certainly raise 500 errors. Building in safety mechanisms to prevent such executions is completely counterproductive, a symptom of the sickness just released into wild.
The author’s function views would ultimately look a lot like normal Django function views (although even at the time of his writing, class-based views were already available in production-ready Django releases.) Class-based views solve a number of common problems, providing common workflows for form submission, object listing, creation, modification, deletion, etc. Class-based views offer a potentially simplified scaffolding for almost every type of view.
The fallback “automatic view” templates are basically views in their own right, which complicates matters further. This provides the developer with a convoluted way to avoid writing a one-line function (or a one-line urlpatterns entry grâce à class-based generics), but that’s the end of benefits.
App design
The foundation of the author’s argument is that writing urls is tedious and painful. Done incorrectly, I’m sure it is. But then you’ve got nobody to blame but yourself.
Done correctly, views do simple straight-forward tasks that are easy to discern. Placing them in the app’s urls should be a trivial matter. Best of all, you get to assign a name to the view that can be referenced in templates and other apps without needing to know anything about url structure or template names. The site’s literal url regex could be completely redesigned to be more beautiful, and the url names themselves don’t have to change. Not a single template file needs to suffer a giant find-and-replace.
Url names are really quite awesome. You can assign the same name to multiple views, where an argument list disambiguates which specific view should be selected, almost reminiscent of the way compiled languages provide function overloading.
Since the author’s proxy-dispatch view swallows up the entire url path by liberally matching everything (and even tries to match GET parameters, the sure sign of a PHP-influenced workflow), url names become impossible. You’ve just given up one of the most beautiful features of a framework that operates at a higher level than the server’s ruddy file system.
Especially if someone ends up with an unholy number of urls in one app, I’m not sure what advantages a person thinks they’re gaining by throwing away url names. Now you have no choice but to hard code filesystem-like url paths into every template and call for a redirect.
I suspect that the author hasn’t had the pleasure of breaking up apps into smaller units of work. One thing I’ve learned, and after several years of Django development I keep on learning, that smaller apps ease the mind. You don’t want to go crazy, but creating a handful of smaller apps that rely on one another is a good way to keep code modular and flexible. No app should have 50 urls right in a line in its urlpatterns. Something’s wrong in that case.
Some of the redundancy can even be removed by nesting include() calls right in the urlconf:
urlpatterns = patterns('',
    url(r'^account/', include(patterns('',
        url(r'^$', my_profile),
        url(r'^password-reset/$', reset_password),
        url(r'^(?P<username>\w+)/$', include(patterns('',
            url(r'^$', view_profile),
            url(r'^message/$', message_user),
            # ... and on
        ))),
    ))),
)
Of course, The Zen of Python also politely states that Flat is better than nested, so if the nesting gets too crazy, you can assign multiple module-level url patterns and graft them into place at the end.
Security
Although the author makes concessions to security by prefixing function lookups with the rather verbose "process_request__" prefix, security hasn’t magically found the front door and left you alone. A really bizarre amount of effort would need to go into normalizing client url requests. Even if we’re talking about the fallback template “flatpages” (there was an app for that already, although I never liked it much), the users can literally request any template in your entire project, whether it was meant to be served as a direct-to-response style view or not.
In the author’s example, he uses a “.dj” suffix for his magic fallbacks, but implies that he would rather use “.html” in production:
In my site, I simply use .html because every page in my site is dynamic; both color-coding editors and users like seeing .html in the filename.
Neverminding for a moment that Django template files themselves previous to being dynamically rendered already use the .html suffix, and diametrically opposing the practice of fake url suffixes, the Django tutorials petition the users:

(r'^polls/latest\.html$', 'polls.views.index'),
But, don’t do that. It’s silly.
Just imagine what stupidity a client can get away with, requesting templates that weren’t meant to be accessed directly:
/account/includes/_sidebar.html
/account/../../../../../???
/account/admin_view.dj
/account/password_reset.html
The point is: who on earth even knows what the system would do in each of these cases? And I include the original developer in that question. Suddenly no template is safe, especially if “.html” is used in the url pattern. Because the purpose of direct-to-template fallback views was to omit the step of assigning a view function or explicit url, there’s no opportunity for the system to verify that the requesting user is even logged in.
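The traversal case in particular is easy to demonstrate with nothing but the standard library: naive joining of a client-supplied path walks straight out of the template directory. The paths below are illustrative, not from the author's project:

```python
import os.path

TEMPLATE_ROOT = '/srv/project/templates'

def resolve(path):
    # naive joining, as an automatic template-fallback view would do
    return os.path.normpath(os.path.join(TEMPLATE_ROOT, path))

print(resolve('account/profile.html'))
# /srv/project/templates/account/profile.html
print(resolve('../../../etc/passwd'))
# /etc/passwd -- outside the template directory entirely
```

A real fallback view would also have to defend against this, which is yet more machinery the explicit urlconf never needs.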
The author’s advice: templates that shouldn’t be directly accessed by an anonymous client should have a separate file suffix, such as “.hdj” for a “hidden” django template. Great. The layers of complexity are starting to lay it on thick here.
With the normal Django urlpatterns system, it’s abundantly clear what each of those url requests would do: nothing. Not unless the urlpatterns say so.
That’s security.
Don’t make this more like Java
Instead of looking in a views.py file, my controller looks for view functions in a views/ directory. For example, the URL /account/test.dj in my system goes to /account/views/test.py and looks for a standard method called process_request() within the test.py module. This allows me to split my view functions across many files rather than a huge views.py file.
Not only does the url router actually complicate the url-resolving process for the human mind, it just turned every view into its own module with one function per file. Nice try, Java, but I see what you’re doing there.
Nevermind that every view is probably using the same imports, or that a nicely designed app that requires a lot of view functions would use a views/ directory with just a few sub-modules such as “views/management.py”, “views/messaging.py”, etc. One measly function per view is a bit much, isn’t it?
After all, if a view is hulkingly large enough, maybe it does too much? Maybe the complicated logic it contains should be extracted to a cleaner set of utility functions available to the view?
A point emphasized in every version of Two Scoops of Django: Best Practices that I’ve seen is to keep really heavy logic out of the view functions, whether you use function-based or class-based views. The connection from view function to template file should be thin. If the view is handling complex deletion mechanics for a model, maybe the deletion logic belongs on the model class itself?
I suspect the author was a big fan of code in views, because he seems to be a big fan of code in template files, too, having replaced the Django template renderer with the Mako one. There are just way too many places for code to live in a project like that. It's the sign of someone who believes they've already found enlightenment, and they'll die trying to unmake core mechanisms. They're free to do so, but man. I'd hate to live and breathe in a modified workflow like that. It sort of screams "one man team" to me. No team of individuals wants a bunch of non-standard homebrew mechanisms living in their Django.
It just causes more problems than it solves. No, really. It does.
I’ve been there
I've been there. I've made these mistakes. I've written url helpers that automatically assigned function names as url names, to simplify my url burden. I gave the helper an automatic class-based handler that automatically called MyView.as_view() for me. I couldn't be bothered with calling .as_view() on every class. I gave the helper an automatic template name finder, so that the function name translated directly to an .html file that I could find. Then I devised some crackpot method of translating a double underscore in function names to directory separators, turning the view function name account__profile into the template account/profile.html.
Then someone looked at my code and kind of grunted and sighed a bit. Finally they said, “Dude I don’t know what you’ve done to Django, but I don’t know what’s going on here.” I explained the mechanism, defending the simplicity it introduced.
Eventually I lost that debate. Not because somebody else beat me up for believing differently than they, but because I recalled an important lesson I’ve learned time and time again:
There was this time I had a friendly competition with a buddy of mine to write a database driver for some software we were writing at work. I spent all night developing my awesome strategy and class hierarchies and interfaces and abstract base classes. By midnight, I had code that just wouldn’t function anymore. The compiler was screaming at me about unknown generic types K and L and all kinds of unfixable problems. Nothing I did, including the wonky work-arounds on the Internet, ultimately fixed the problem.
I went to work the next day and moaned the whole afternoon about how dumb the language was, while watching my friend run tests with his driver on some example data. I knew that his code was less beautiful than the code I had written, even though mine was broken beyond help, but it didn’t matter, because he turned and said, half joking yet sincere:
I just try to not write code that won’t work.
It might sound rigid and show too much affinity for tradition, but if you misunderstand your tools and subvert their strengths just to find some PHP or Java philosophical comforts, you’re going to have a bad time.
Thank you.
http://blog.timvalenta.com/2013/01/why-django-doesnt-need-better-urls-py/
OK, first: thanks for helping.
I'm new to programming in general and I decided I'd start with C. I found a pretty sweet tutorial and I'm plugging along but I have a question.
I see that at the end of an "if" block the tutorial has the statement "return 0;".
I have written a program below just to kind of get my feet wet, and I omitted the "return" line intentionally. It compiles and runs just fine, so my question is: what does the return line do, do I really need it, and how do I use it and its output later on for better coding?
Thanks again
#include <stdio.h>

int x, y, z;

main()
{
    /* get some data here */
    printf("\nEnter a number: ");
    scanf("%d", &x);
    printf("\n Now enter another number: ");
    scanf("%d", &y);
    printf("\nYou entered %d, and %d\n", x, y);

    /* test values here */
    if (x == y)
        printf("%d is equal to %d\n", x, y);
    else if (x > y)
        printf("%d is greater than %d\n", x, y);
    else
        printf("%d is less than %d\n", x, y);
}
http://cboard.cprogramming.com/c-programming/106859-new-c-return-question.html
Creating a more advanced property editor
Note: There is an updated version of this blog post for EPiServer 7.5 here.
In this fourth and last blog post in my series on how to extend the user interface in EPiServer 7, we will take a look at how to build a more advanced editorial widget. We will use two of the built-in widgets in Dijit to create a select-like UI component that presents possible alternatives to the editor as they type, which will give us the following editor widget:
There are two widgets in Dijit that are very similar to each other:
- FilteringSelect forces the user to use one of the suggested values.
- ComboBox lets the user type whatever she wants and merely gives suggestions. Perfect for tags, for instance.
It's possible to bind the widgets either to a fixed list of values or to a store that searches for alternatives as the editor types. We'll go for the latter in this sample.
Implementing the server parts
First, we add a property to our page type and mark it with an UIHint attribute that points to a custom editor identifier.
[ContentType(GUID = "F8D47655-7B50-4319-8646-3369BA9AF05E")]
public class MyPage : SitePageData
{
[UIHint("author")]
public virtual string ResponsibleAuthor { get; set; }
}
Then we add the editor descriptor that is responsible for assigning the widget used for editing. Since we apply the EditorDescriptorRegistration attribute, all strings that are marked with a UIHint of “author” will use this configuration.
using EPiServer.Shell.ObjectEditing.EditorDescriptors;
namespace EPiServer.Templates.Alloy.Business.EditorDescriptors
{
[EditorDescriptorRegistration(TargetType = typeof(string), UIHint = "author")]
public class EditorSelectionEditorDescriptor : EditorDescriptor
{
public EditorSelectionEditorDescriptor()
{
ClientEditingClass = "alloy/editors/AuthorSelection";
}
}
}
Before we head over to the client parts let’s add the store that is responsible for giving the results to the client:
using System;
using System.Collections.Generic;
using System.Linq;
using EPiServer.Shell.Services.Rest;
namespace EPiServer.Templates.Alloy.Rest
{
[RestStore("author")]
public class AuthorStore : RestControllerBase
{
private List<string> _editors = new List<string>{
"Adrian", "Ann", "Anna", "Anne", "Linus", "Per",
"Joel", "Shahram", "Ted", "Patrick", "Erica", "Konstantin", "Abraham", "Tiger"
};
public RestResult Get(string name)
{
IEnumerable<string> matches;
if (String.IsNullOrEmpty(name) || String.Equals(name, "*", StringComparison.OrdinalIgnoreCase))
{
matches = _editors;
}
else
{
//Remove * in the end of name
name = name.Substring(0, name.Length - 1);
matches = _editors.Where(e => e.StartsWith(name, StringComparison.OrdinalIgnoreCase));
}
return Rest(matches
.OrderBy(m => m)
.Take(10)
.Select(m => new {Name = m, Id = m}));
}
}
}
We inherit from the class EPiServer.Shell.Services.Rest.RestControllerBase. Now we can implement the REST methods that we want. In this case we only implement GET, which is used for fetching data, but we could also implement POST, PUT and DELETE. EPiServer has also added SORT and MOVE to the list of accepted methods (these are not part of the REST specification but rather the WebDAV specification). To register a service end point for our store we add the attribute RestStore("author"). We will add some client-side logic to resolve the URL for this store.
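To make the store's query contract concrete, here is a plain-JavaScript sketch (not EPiServer or Dojo code; the function name is illustrative) of the same wildcard handling the C# Get method performs: the client sends the typed text with a trailing "*", and the store strips it and does a case-insensitive prefix match:

```javascript
// Illustrative re-implementation of the server-side filtering, in plain JS.
const editors = [
  "Adrian", "Ann", "Anna", "Anne", "Linus", "Per",
  "Joel", "Shahram", "Ted", "Patrick", "Erica", "Konstantin"
];

function queryAuthors(name) {
  let matches;
  if (!name || name === "*") {
    matches = editors;
  } else {
    // Strip the trailing "*" that the select widget appends, then prefix-match.
    const prefix = name.slice(0, -1).toLowerCase();
    matches = editors.filter(e => e.toLowerCase().startsWith(prefix));
  }
  return matches
    .slice()          // don't mutate the source list
    .sort()
    .slice(0, 10)     // cap the suggestion list at ten entries
    .map(m => ({ Name: m, Id: m }));
}

console.log(queryAuthors("An*").map(r => r.Name)); // [ 'Ann', 'Anna', 'Anne' ]
```

Note that, like the C# version, a query that doesn't end in "*" would lose its last character here; the sketch assumes the widget always appends the wildcard.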
Implementing the client
Setting up the client-side store that works against the server-side REST store feels logical to do in a module initializer, so let's make sure that we have a client module initializer registered in our module.config file:
<?xml version="1.0" encoding="utf-8"?>
<module>
<assemblies>
<!-- This adds the Alloy template assembly to the "default module" -->
<add assembly="EPiServer.Templates.Alloy" />
</assemblies>
<dojoModules>
<!-- Add a mapping from alloy to ~/ClientResources/Scripts to the dojo loader configuration -->
<add name="alloy" path="Scripts" />
</dojoModules>
<clientModule initializer="alloy.ModuleInitializer"></clientModule>
</module>
And we add a file named “ModuleInitializer” in the ClientResources/Scripts folder:
define([
// Dojo
"dojo",
"dojo/_base/declare",
//CMS
"epi/_Module",
"epi/dependency",
"epi/routes"
], function (
// Dojo
dojo,
declare,
//CMS
_Module,
dependency,
routes
) {
return declare("alloy.ModuleInitializer", [_Module], {
// summary: Module initializer for the default module.
initialize: function () {
this.inherited(arguments);
var registry = this.resolveDependency("epi.storeregistry");
//Register the store
registry.create("alloy.customquery", this._getRestPath("author"));
},
_getRestPath: function (name) {
return routes.getRestPath({ moduleArea: "app", storeName: name });
}
});
});
In the initializer we call resolveDependency to get the store registry from the client-side IoC container. The store registry has a method to create and register a store, which takes an identifier for the store as a string as well as a URL to the store. In our case we resolve the URL to the store using the name of the shell module and the registered name of the store, “author”.
Note: In this case we are using the “built in” shell module in the site root which is simply named “App”.
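Conceptually, the registry is just a name-to-store lookup shared across the UI. A generic sketch of the pattern (plain JavaScript; this is not the epi.storeregistry implementation, and the URL is hypothetical):

```javascript
// Minimal store-registry pattern: create() registers a store under a key,
// get() retrieves it later. Hypothetical sketch, not EPiServer's API.
function createRegistry() {
  const stores = {};
  return {
    create(name, url) {
      const store = { name, url }; // a real registry would build a REST-backed store here
      stores[name] = store;
      return store;
    },
    get(name) {
      if (!(name in stores)) {
        throw new Error("No store registered as: " + name);
      }
      return stores[name];
    }
  };
}

const registry = createRegistry();
registry.create("alloy.customquery", "/modules/app/Stores/author/"); // hypothetical URL
console.log(registry.get("alloy.customquery").url);
```

The point of registering in the module initializer is exactly this: the store is created once, under a well-known name, and any widget can later fetch it by that name instead of knowing the URL.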
So, let's go ahead and create the actual editor widget:
define([
"dojo/_base/connect",
"dojo/_base/declare",
"dijit/_CssStateMixin",
"dijit/_Widget",
"dijit/_TemplatedMixin",
"dijit/_WidgetsInTemplateMixin",
"dijit/form/FilteringSelect",
"epi/dependency",
"epi/epi",
"epi/shell/widget/_ValueRequiredMixin",
//We are calling the require module class to ensure that the App module has been set up
"epi/RequireModule!App"
],
function (
connect,
declare,
_CssStateMixin,
_Widget,
_TemplatedMixin,
_WidgetsInTemplateMixin,
FilteringSelect,
dependency,
epi,
_ValueRequiredMixin,
appModule
) {
return declare("alloy.editors.AuthorSelection", [_Widget, _TemplatedMixin, _WidgetsInTemplateMixin, _CssStateMixin, _ValueRequiredMixin], {
templateString: "<div class=\"dijitInline\">\
<div data-dojo-attach-point=\"stateNode, tooltipNode\">\
<div data-dojo-attach-point=\"inputWidget\" data-dojo-type=\"dijit.form.FilteringSelect\" style=\"width: 300px\"></div>\
</div>\
</div>",
intermediateChanges: false,
value: null,
store: null,
onChange: function (value) {
// Event that tells EPiServer when the widget's value has changed.
},
postCreate: function () {
// call base implementation
this.inherited(arguments);
// Init textarea and bind event
this.inputWidget.set("intermediateChanges", this.intermediateChanges);
var registry = dependency.resolve("epi.storeregistry");
this.store = this.store || registry.get("alloy.customquery");
this.inputWidget.set("store", this.store);
this.connect(this.inputWidget, "onChange", this._onInputWidgetChanged);
},
isValid: function () {
// summary:
// Check if widget's value is valid.
// protected, override
return this.inputWidget.isValid();
},
// Setter for value property
_setValueAttr: function (value) {
this.inputWidget.set("value", value);
this._set("value", value);
},
_setReadOnlyAttr: function (value) {
this._set("readOnly", value);
this.inputWidget.set("readOnly", value);
},
// Event handler for the changed event of the input widget
_onInputWidgetChanged: function (value) {
this._updateValue(value);
},
_updateValue: function (value) {
if (this._started && epi.areEqual(this.value, value)) {
return;
}
this._set("value", value);
this.onChange(value);
}
});
});
In short, what the widget does is set up a dijit.form.FilteringSelect as an inner widget, feed it with a store from the store registry, and listen to its change event. If we take a closer look at some parts of the widget, we can see the code needed to connect the store defined in our initialization module to the inner widget:
var registry = dependency.resolve("epi.storeregistry");
this.store = this.store || registry.get("alloy.customquery");
this.inputWidget.set("store", this.store);
Requiring a module
When EPiServer starts a view, not all modules and components are loaded. When the view is loaded, before we add a component (or gadget), we make sure that the shell module it is located in has been started. In EPiServer 7.1 this can be done by calling the "epi/RequireModule" class with your module name, in this example:
"epi/RequireModule!App"
And we are done! When editing a page of the given type, we can see how the editor changes suggestions as we type, and entering an invalid value gives us a validation error:
Doing a simple search and replace in Author.js, changing FilteringSelect to ComboBox, enables the editor to enter any value she likes:
Summary
This ends the series on how to extend the user interface of EPiServer 7. We have looked at how to create components for the UI that can either be plugged in automatically or added by the user. We have created components using either Web Forms or Dojo/Dijit. Using jQuery-style gadgets à la EPiServer 6 still works pretty much the same way, with the difference that these also work in the EPiServer 7 CMS edit view.
We have also looked at how you can add attributes to your models to control both presentation and editing, and at creating an editor using Dojo/Dijit. There are a few examples of this in the new Alloy template package, and there is more information about how to extend the user interface in the User Interface section of the EPiServer Framework SDK.
Extending the User Interface of EPiServer 7
Plugging in a Dojo based component
Creating a content search component
Creating a more advanced property editor
Hi Linus,
This is a good article; I created a property editor using the above code. I am getting a JavaScript error as below, pointing to requiremodule.js:
SyntaxError: function statement requires a name
load: function (/*String*/id, /*function*/require, /*function*/load) {
Am I missing something, or what could be the problem?
Best regards
Kiran
@Kiran: Make sure that you have added a module name when you are requiring the require module in your widget header:
"alloy/requiremodule!App" ("App" is the module required in this case. If you are using the Alloy templates as a base, it should probably be "Alloy" instead. Check your module.config file in the site root for the correct module name.)
Hi Linus, Thanks for the quick response.
in the module.config
I am sorry I did not understand properly.
I have done the following steps
1. Created a new Rest store class: [RestStore("ClearCache")]
2. Created property and descriptor called "ClearDisciplineCache". My client class in this case is "diamondleague.editors.ClearDisciplineCache"
3. Created a script called "ModuleInitializer.js", copied your code and renamed the widget name to "diamondleague.ModuleInitializer" Also "alloy.customquery" to "diamondleague.customquery"
4. Added the line
5. Created the actual widget, copied the code and changed "alloy/requiremodule!App" to "diamondleague/requiremodule!App", "alloy.editors.AuthorSelection" to "diamondleague.editors.ClearDisciplineCache" and "alloy.customquery" to "diamondleague.customquery"
6. Then added requiremodule.js and copied your code to this file.
After all these steps, when I go to the CMS edit mode, I get the JavaScript error mentioned above.
I am not sure what I am now missing.
Thank you for the support
Kiran
Tip: Often the client resources and config files get cached. I was double-checking everything and it still didn't work; I had to do an iisreset to force all the files to be loaded again (especially true if you change JavaScript files: make sure they're not cached in the browser).
Hi Linus
I have followed this example and got the property to work with the RestStore. But one thing I can't get my mind around is why the value I selected in the property isn't saved when I publish the page. I can see that the value is being stored in the database, but the property is not being pre-populated with that value when I revisit the page in edit mode.
Is it the FilteringSelect that cleans the value when it loads? Or do I have to get the value myself and populate the property manually?
Thanks in advance!
/Fred
I don't know if it's 7.1 related (new dojo version?), but I ended up getting very strange values when I followed this example step by step. Instead of getting the names I got some weird numeric values. When I instead played around with Memory stores, I noticed that the property to use for the value is "id", not "value". So I changed that in my REST store and then got the values I wanted. That could be a tip for anyone who's stuck.
@Tomas Eld: I love you! Solved my problem.
This really doesn't work. I've followed this step by step, but as soon as I include the EditorDescriptor class in my project, I cannot navigate to "Forms editing" anymore. No error message and no exception in Firebug.
I have tried this too. Complete waste of time. This does not work in EPiServer 7.1. The symptoms are the same as Calle's. I'm trying to migrate an old and exotic custom property with autocomplete from an ancient 4.61 EPiServer installation to the latest EPiServer. This is the hardest nut to crack.
After quite a few troublesome hours I finally got it to work! What I did was place RequireModule.js in the ClientResources/Scripts folder, and in my editor widget I changed dijit.form.FilteringSelect to dijit/form/FilteringSelect.
In hindsight I think that moving RequireModule.js to the ClientResources/Scripts folder would have been enough, because when I change my editor back to dijit.form.FilteringSelect, it still works. (EPiServer 7.1 and Win 7 64-bit)
I have updated the code to use the built-in RequireModule class in EPiServer 7.1 (epi/RequireModule). In EPiServer 7.5 it will be possible to do this with configuration in the module.config file. I also updated the class name syntax to use the "/" pattern to separate namespaces and classes.
I'm still struggling with this one. Sometimes the forms editing mode hangs, as described by @Calle Bjernekull on Friday(!) the 13th of September 2013. By emptying all kinds of caches I can get forms editing to work again, but then, every time I do a "hard" refresh, I get the same error as described by @Fred Lundén: the property is not being pre-populated with its value when I revisit the page in edit mode. In view mode the value seems to be fetched just fine.
Is everybody else happy with auto-suggest properties in EPiServer CMS 7? I'm hours into trying to fix this, and I thought it would be a simple one, having this blog post in mind.
Some hard facts from my testing/development environment:
Testing mainly in Chrome, and some in IE.
EPiServer 7.1 site running on IIS Express, using EPiServer assemblies with version numbers ending in .24.
We are running the site off these NuGet packages: "EPiServer.CMS.Core.7.0.586.24" and "EPiServer.Framework.7.0.859.24".
In ~/modulesbin we have assemblies matching version number 2.0.86.0 (Packaging, Packaging.UI, Shell.UI) and 2.0.79.0 (EPiServer.Cms.Shell.UI.dll)
I have published an updated version of this editor targeted at EPiServer 7.5, which can be found here:
Hi Linus, I have two dojo widgets in my project, but only one module.config file. Within that module.config file I have:
But only the first initializer method is ever executed...is there a way of supporting multiple widgets in the same module.config file? Or should I have multiple files?
@Higgsy: As far as I can see from the code, only one clientModule element is allowed (or at least supported) in a module.config file. You can have as many widgets as you'd like, but if you need several modules, these should be separated into separate structures.
@Linus - thanks, this is now working.
This code still does not work for me. It just gets stuck loading the edit view and nothing happens. Firebug gives the error message:
"A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete.
Script:"
My module.config is identical to the one in the example code; I just changed the assembly name to my project name.
The EditorDescriptors class is in the same file as my page type. This file uses the namespace where the RestStore class is.
ModuleInitializer is saved as ClientResources/Scripts/moduleinitializer.js and the widget is in the same folder (app.js). All the files are identical to the example; just the namespaces are modified.
I have EPiServer 7.1 and the EPiServer.dll version is 7.0.586.16.
Is there any way to use jQuery autocomplete here? Massive amounts of dojo files (= slow), no comprehensible error messages. This Dojo tuning feels just frustrating.
@Matti: Have you seen the updated, simplified blog post ()?
Regarding jQuery, you should be able to use it. There is even an older jQuery version (1.3, I think) loaded in the user interface that you can potentially use, depending on what you need from jQuery. Otherwise you can always load a separate version of jQuery using the noConflict flag.
Thank you for your prompt answer. I haven't tried that simplified solution yet. A better solution for our project would be using jQuery, because autocomplete using several words and web methods are already done for the keyword search. But how can I transfer this functionality to custom page property editing? How can I add a jQuery library reference to the edit-side head tag? I must also add some jQuery script in the body tag. Should I override the CreateEditControls method to add the script, or is there a better alternative?
Got a problem with javascript errors; read more here:
Any clue why?
https://world.optimizely.com/blogs/Linus-Ekstrom/Dates/2012/11/Creating-a-more-advanced-property-editor/
ATTENDEES (39/35)
- Active Data Exchange: Shane Sesta (alternate)
- Akamai Technologies: Mark Nottingham (principal)
- Allaire: Glen Daniels (principal)
- AT&T: Mark Jones (principal)
- Bowstreet: Alex Ceponkus (alternate)
- Canon: Jean-Jacques Moreau (principal)
- Canon: Herve Ruellan (alternate)
- DataChannel: Brian Eisenberg (principal)
- Engenia Software: Jeffrey Kay (principal)
- Ericsson Research Canada: Nilo Mitra (principal)
- Fujitsu Software Corporation
- IDOOX: Jacek Kopecky (principal)
- Intel: Randy Hall (principal)
- Interwoven: Mark Hale (principal)
- IONA Technologies: Eric Newcomer (alternate)
- Jamcracker: David Orchard (principal)
- Library of Congress: Ray Denenberg (principal)
- Lotus Development: Noah Mendelsohn (principal)
- Matsushita Electric: Ryuji Inoue (principal)
- Microsoft Corporation: Henrik Nielsen (principal)
- Mitre: Marwan Sabbouh (principal)
- Mitre: Paul Denning (alternate)
- Netscape: Ray Whitmer (alternate)
- Novell: Scott Isaacson (principal)
- Oracle: David Clay (principal)
- Philips Research: Yasser alSafadi (principal)
- Rogue Wave: Patrick Thompson (alternate)
- SAP AG: Volker Wiechers (principal)
- Sun Microsystems: Marc Hadley (principal)
- Tibco: Frank DeRose (principal)
- Unisys: Lynne Thompson (principal)
- Vitria Technology Inc.: Waqar Sadiq (principal)
- W3C: Yves Lafon (team contact)
- W3C: Hugo Haas (alt team contact)
- WebMethods: Randy Waldrop (principal)

AUTOMATICALLY EXCUSED
- Active Data Exchange: Richard Martin (principal)
- Allaire: Simeon Simeonov (alternate)
- AT&T: Michah Lerner (alternate)
- Bowstreet: James Tauber (principal)
- Engenia Software: Eric Jenkins (alternate)
- Fujitsu Software Corporation: Masahiko Narita (alternate)
- IBM: Fransisco Cubera (alternate)
- IDOOX: Miroslav Simek (alternate)
- Interwoven: Ron Daniel (alternate)
- IONA Technologies: Oisin Hurley (principal)
- Library of Congress: Rich Greenfield (alternate)
- Microsoft Corporation: Paul Cotton (alternate)
- Netscape: Vidur Apparao (principal)
- Oracle: Jim Trezzo (alternate)
- Philips Research: Amr Yassin (alternate)
- Rogue Wave: Murali Janakiraman (principal)
- SAP AG: Gerd Hoelzing (alternate)
- Sun Microsystems: Mark Baker (alternate)
- Tradia: Erin Hoffman (alternate)
- Tradia: George Scott (principal)
- Unisys: Nick Smilonich (alternate)
- Vitria Technology Inc.: Richard Koo (alternate)

REGRETS
- Commerce One: Murray Maloney (alternate)
- Commerce One: David Burdett (principal)
- Compaq: Yin-Leng Husband (principal)
- DevelopMentor: Don Box (alternate)
- DevelopMentor: Martin Gudgin (principal)

ABSENT WITHOUT EXPLANATION
- Cisco: Krishna Sankar (principal)
- Compaq: Kevin Perkins (alternate)
- DaimlerChrysler R. & Tech: Andreas Riegg (alternate)
- DaimlerChrysler R. & Tech: Mario Jeckle (principal)
- Data Research Associates: Mark Needleman (principal)
- Epicentric: Dean Moses (alternate)
- Epicentric: Bjoern Heckel (principal)
- Informix Software: Charles Campbell (principal)
- Informix Software: Soumitro Tagore (alternate)
- OMG: Henry Lowe (principal)
- Progress Software: Peter Lecuyer (alternate)
- Software AG: Dietmar Gaertner (alternate)
- Software AG: Michael Champion (principal)
- Xerox: Tom Breuel (primary)
- XMLSolutions: Kevin Mitchell (principal)
- XMLSolutions: John Evdemon (alternate)
See also meeting details (Member only).
See the timeline.
Discussion of I18N - Contact David Clay if interested in discussing further
12 Usage Scenarios - Discussion of how we handle
Henrik - spend 3 hours, what we get thru is what we get thru
Chair proposed spending a maximum amount of time on each one, Group agrees
We have 7 groups of scenarios:
Glen suggested going thru each one, checkpoint 1 hr. Agreed
Glen: whole message signed in DS9
Henrik: suggested clarification, DS9 may be sufficient
David Clay: DS9 doesn't talk about encrypting header, in 15 there are other things: there is positive ack. Suggested talking about encrypting headers
Mark Jones: DS10 should be in here too. Has same mix of issues
Chair: 6 is accepted. If we accept 10 then we dispose of 9 (dup). The question is whether we want completeness of others.
Jeff Kay: DS15 has notion of return receipt. seems to be above/beyond DS6/10 (which cover non-repudiation).
Chair: we've covered receipt, signing, and encryption somewhere. We don't have to do usage scenarios for high-level abstract piece. We're not trying to do design work here.
Chair Proposal: Dispose of DS9 by saying it's a dup. Proposal to accept DS10 and to accept DS15.
Henrik: 9 is covered by 5 and 15
Glen: 5 is different
Jeff: ack implies successful data transfer and acceptance of higher-level agreements
Marwan: We should accept both
Henrik/Glen: we may want to rewrite scenarios to address different aspects
Frank: 15 covers other things, tpa, confidentiality
Jean-Jacques -
Dick Brooks: ebXML backs off on 15
Amended Proposal: Accept DS 10, remove DS9 (is a dup), discard DS15.
Frank: Accepting DS10 implies supporting encryption and signing are supported via different agents. Frank would like to restrict 10. Important things are signing and encryption, not ready to attack other things.
Dick: Clarification: wanted granular signature capability so you can change en-route header
Chair: Accept 10 as it is, do good faith effort to support all aspects of 10 and do mapping at end-of-day to see if we can/cannot support.
Decision (group consensus): Proposal passes to accept DS10 as S10, remove DS9 as duplicate and discard DS15.
Frank/Glen asked for clarification of DS:
Dick Brooks: ebXML has backed off on intermediary issues. When you have protocol that's intermediary aware, it gets very complicated. ebXML is becoming Point-to-point protocol. Intermediaries are black-box. Dick suggested that we may want to drop this (like DS15). In ebXML's case, they believe they have provided a point-to-point protocol in which recursion could be used to support intermediaries
Henrik: thinks point-to-point and intermediaries are two separate issues
Frank: items that aren't addressed by other S's (non-repudiation)
Dick: non-repudiation that intermediary received it and sent it along
Mark J: assigning a path and proving the path is different than just basic intermediary support in the protocol
Jeffrey Kay: we should keep this because this scenario would cover routing scenarios between namespaces.
Proposal: Promote DS11 to S
Chair suggests we accept DS11
Request for clarification on wording in text. regarding pointer to "DS11"
Henrik: "an intermediary forwards the message to the ultimate receiver. "
Proposal: remove real-life examples (last two sentences) of S11 (the real-life examples)
Decision (group consensus): DS11 is promoted to S11 with last two sentences removed.
Action Item 1: Henrik edited text dropping last 2 sentences
Mark Jones: described sending picture from digital camera to wireless network. Key is that data is embedded within SOAP message
Proposal by Mark N. Because it is worded more like a requirement than a usage scenario, proposal to rewrite DS19 as a usage scenario.
Proposal: Rewrite DS 19 as a usage scenario
Action Item 2: Mark Jones to Rewrite this as DS
Reworded:.
Proposal: Accept above text as S19
Decision (group consensus): DS19 is accepted as S19 with the following wording:
."
Discussion
Proposal (Glen Daniels): Collapse DS20 and DS23
Proposal: Reword DS17 as a specific scenario
Noah: Issue is correlation of multiple requests
Mark J: LDAP directory requests which are non-blocking
David Clay: separate (asynch rpc request w/polling for one or more responses, asynch rpc with notification, rpc request/response.
Proposal David Clay: Take 3 categories listed above and make one case
asynch request with notification
Glen: Use DS17, remove the notion of a correlation id to say that one must be able to glue a set of messages together. Should be able to perform correlation.
Proposal: Reword DS 17
Intent is that an application should supply its own correlation mechanism
Jeff K: Issue involves a sender/receiver not necessarily communicating on the same connection
Mark J: the scenario currently speaks to apps being able to insert app defined functionality
Action Item 3: Jeff Kay to propose new wording for DS17 (in background)
This item was left pending
Glen withdraws proposal to merge DS20 & DS23
Proposal (Glen): Accept DS20 and DS23 as two scenarios
Discussion on DS20
Vidur: reporting the state of a device sounds like subscribing to an event stream (Frank's comment).
Proposal (Vidur/Frank): remove entire "or" clause of DS20, DS23 stands as is
Amended Proposal: Accept DS20, after removing the or clause in its entirety, and accept DS23 as Scenarios
Discussion of taking notification out of DS23.
Amended Proposal: Accept DS20, after removing the or clause in its entirety, and accept DS23 as Scenarios
Decision (group consensus): DS20 becomes S20 without or clause, DS23 accepted as is
Suggestion: delete second paragraph.
Vidur: delete
David Clay: include
John I: question regarding granularity. Is this elemental granularity or block level/header level. Does this mean the receiver must inspect in the case of mustUnderstand.
This is a scenario which describes incremental transmission.
John I: this will affect reliable messaging.
Glen: this will be needed by apps
Proposal: Accept DS 21 as S
Decision (group consensus). DS21 is promoted to S21.
Reworded by Jeff Kay
A sender sends a message to a recipient. The sender does not wish to maintain a connection to the recipient after transport of the message because the message requires a significant time to process (this could be because the transport protocol timeout is shorter than the message processing time or just because the sender is impatient). The recipient acknowledges the request but does not return a response to the request. The recipient processes the message. If the sender can be or wishes to be proactively notified, the recipient proactively returns the response to the sender. If the sender cannot be proactively notified, the response is held until the sender initiates a request for the response from the recipient, at which point it is returned. The response may return either the anticipated response or a failure notification. The sender acks the receipt of the response...
Vidur: If the sender cannot be proactively.....sender initiates a request for the result from the recipient.
Mark: Recipient acknowledges request -> recipient may acknowledge request.
Glen: remove acknowledgement
A sender sends a message to a recipient. The sender does not wish to maintain a connection to the recipient after transport of the message because the message requires a significant time to process (this could be because the transport protocol timeout is shorter than the message processing time, because the return protocol is different, or just because the sender is impatient). The recipient does not return an immediate response to the request. The recipient processes the message. If the sender can be or wishes to be proactively notified, the recipient proactively returns the response to the sender. If the sender cannot be proactively notified, the response is held until the sender initiates a request for the response from the recipient, at which point it is returned. The response may return either the anticipated response or a failure notification.
Action Item 4: Jeff K to revise text again as 17a & b based on input.
Item left pending
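As an illustration only (the element names and correlation attribute below are invented for discussion, not part of SOAP or any draft), the polling branch of this scenario might look like:

```xml
<!-- 1. Sender submits a long-running request and disconnects. -->
<Envelope>
  <Body>
    <ProcessOrder requestID="42">...</ProcessOrder>
  </Body>
</Envelope>

<!-- 2. The sender cannot be notified proactively, so it later polls
        for the held response using the same correlation ID. -->
<Envelope>
  <Body>
    <GetResponse requestID="42"/>
  </Body>
</Envelope>

<!-- 3. The recipient returns the anticipated response, or a failure
        notification if processing did not succeed. -->
<Envelope>
  <Body>
    <OrderResult requestID="42">...</OrderResult>
  </Body>
</Envelope>
```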
Proposal: Drop DS14 and Accept DS810 As S
Decision (group consensus): DS14 is removed. DS 810 becomes S810
Mark N: This is a use case for an XML Protocol module.
Frank D: We need caching scenario
David Clay: neither represents adequately. DS809 doesn't cover enough.
Chair: suggest that what 809 covers is important so it should be accepted
Proposal: Promote DS809 to S809, DS24 remains DS
Amendment: to add intermediaries to support caching header (good until x)
Discussion
BizCo updates the online price catalog every morning at 8am. Therefore, when remote clients access their XP inventory service, clients and intermediaries may cache the results of any price queries until 8am the next day.
Decision (group consensus): DS809 is promoted to S809 with the addition of text discussing intermediary support for caching header, DS24 remains DS24. Proposal passes with above addition for intermediaries
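The BizCo text above might translate into a header block along these lines; the header name, namespace, and attribute are hypothetical, invented purely to make the scenario concrete:

```xml
<Envelope>
  <Header>
    <!-- Hypothetical caching header: clients and intermediaries may
         cache the enclosed results until 8am the next day. -->
    <c:Cacheable xmlns:c="http://example.org/xp/cache"
                 goodUntil="2001-03-02T08:00:00"/>
  </Header>
  <Body>
    <!-- results of a price query against BizCo's inventory service -->
  </Body>
</Envelope>
```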
Presentation by John Ibbotson, Dick Brooks, and Henrik Frystyk Nielsen
ebXML WG to address convergence of SOAP in TRP specs
See John Ibbotson presentation on ebXML Convergence with SOAP and Attachments.
See Dick Brooks's SOAP/ebXML convergence open issues presentation.
See Henrik's presentation of SOAP with Attachments.
Technical points have not all been resolved
Open issues remain
Proposal from Chair on how to proceed: Instruct AMG in XML Protocol WG to draft a way of describing things such as how items travel with a message. The abstract model includes a description to come up with a vocabulary for the components of a message and the relationships between them. Some items travel with the message, but others do not necessarily travel with it.
Suggestion from David Clay: XP must consider delivering a MIME binding
Henrik: Need envelope model and data representation model
Chair: First order of business is envelope model
Noah: Need to discuss links (things that travel with vs. things that just get pointed to for informational purposes only).
Chair Observation: We're in design space. This area (needs crisp definition) should be worked through AMG. Need model of attachments, of data that goes with or does not go with messages, and would like vocabulary of this within abstract model for future discussions.
Ray D: abstract model (what does this mean). Think we need to define which type of model we're talking about.
Proposal: Instruct AMG to develop a model for attachments and data that may or may not travel with messages. Also, the AMG should suggest a vocabulary for this model.
Decision (group consensus): AMG to develop a model for attachments and data that may or may not travel with messages. Also, the AMG should suggest a vocabulary for this model.
See slides.
This is an abstract processing model.
Stuart provided an overview of the abstract model characteristics:
One-way, 2-way request/response and Intermediary operation
Operations currently covered by AM include:
Request/Response Discussion
AMG is Trying to model operational semantics
Henrik: looks very much like an API. Need to explain to people how they can go off and write an XML Protocol module
Clay: Need more work in targeting and routing
Message Path and Targeting
There was discussion that included:
At this point a fire alarm sounded resulting in a building evacuation. The meeting reconvened briefly for formal adjournment.
See slides.
David F.: We have a 301a Conformance Requirement. We need a small group of people to go off and make a proposal.
There were no volunteers. David strongly suggested that "you have signed up for the WG, you have committed to do the work, we need volunteers." Hugo H and David C volunteered.
Action Item: Conformance subgroup to come back with a plan to present and discuss in the teleconf in 2 weeks, 3/14/01.
We will take the next 25 minutes continuing the discussion on the abstract model.
David F presented a slide that organized the areas for discussion around the AM. The text is as follows:
Proceeding with Abstract Model (AM)
-----------------------------------
1. Clarify nature of model
   - Describe SOAP, and Requirements
   - Integrate Glossary
   - Clarify implications with respect to implementation
     e.g. request-response and one-way transports
2. Sections to complete
   - Path and targeting section
   - Arbitrary attachments section
   - Describe SOAP with Attachments
3. Organisation
   - Small(er) group, WG to solicit proposals from AMG.
   - Publish AM as WD? Schedule?
David F: Personal preferences are to keep the AMG as a smaller group so that it is focused and nimble and can get some work done.
David C.: Are there template restrictions/guidelines for this type of document? David F.: Not really. There are 5 separate WDs from XML Query. We are at liberty to publish the AM in whatever format we want.
Henrik: Are these to be done in the AMG or now today in this meeting? David F: In the AMG, but let's have the WG discuss and give input into the AMG and decide on how to proceed for each of the 3 areas today.
Mark J.: Seems like we are pushing the more controversial issues into AMG (1-way/2-way, attachments, etc). Is this the right approach? What is the philosophy of our WG approach?
David F: The AM is the catalyst. They put the proposal on the table for the WG to discuss and reach resolution. Notice, we discussed but did not decide yesterday. Easier for AMG to crystallize the issues and proposals.
Stuart: The mailing list has all the issues. The subgroup's role is just to capture and synthesize. For example, Ray is not formally in the AMG but he is participating and is acknowledged for his contribution.
David F: The subgroup can come back with options for solutions.
David C: What about the issues lists? Should we fold into the AMG? Answer: Let's do that in parallel. For example, the SOAP discussion yesterday about request/response.
Is the AMG a bridge between the requirements and the spec? Yes, but it is a 2-way bridge.
Noah: Something on my mind. We need to worry about feature creep. Experience in Schema has shown that it is too expensive to work on them only to drop later. Let's make sure the AMG sees the design in the whole. As critical features come along, we could have them break out a subgroup to work on those. We need to limit the scope of the AMG.
John I.: The AMG and the rest of the WG can decide what is important.
Noah: AMG with the WG manages an issue list about what is in and what is questionable.
Henrik: Could use some phone conf time to set the criteria for evaluating the deliverables from the AMG. It needs a charter.
David C: Charter could address the current issues: Security, What is a module? How is one specified and how is one plugged in?
Mark J: As the AM gets fleshed out, we should take the same approach we took for the requirements spec. Do we wait to the end for all or approve major sections and decisions along the way. Ans: We need to take the iterative approach.
Lynne T: We have the spec and the AM. The parallel schedule almost shows higher priority to the spec. We have a conflict in making the AM public but slowing it down from input from the outside.
Glen: Lots of people are here to make sure we just don't rubber stamp SOAP 1.1. The AMG is a potential foundation for understanding people's issues.
Stuart: The AMG is not about designing the XML protocol. It is more about shaping the structure for design.
Noah: 2 quick responses. AM is a great thing for the reasons discussed. But be a good watch dog for feature creep. If we are doing the AM for any other reason than making the spec a better spec, then we are doing it for the wrong reasons. It should not be done second. It captures some of the reasoning behind the spec.
Ray D: My primary goal for the AM. When we have a spec, we need a way to sell to the community. The AM will be the tool I use for that. We should not put it on the back burner.
Noah: Arguing against letting it go on the slower schedule.
Glen: Can we get back to the AM rather than have a meta-discussion about the AM
David F: We heard reasons for and against the goodness of the AM. We have heard that there seems to be some agreement about the need for the AM. It helps with rationale, scope, selling later on, etc. From here? I am not in favor of a charter, but we should have some statement about the direction. Come back with a codification.
John I: We can validate the model by using the usage scenarios.
Henrik: I was not suggesting something so formal as a charter. One major issue is to reconcile the glossary. We need a common set of terms for our discussions and specs. The purpose of the AM is to give us terms and a framework for our design. For example, it can give structure to what a module is and how it gets specified.
Dick: My personal view. Gap analysis relative to SOAP and our requirements and our scenarios to identify how much additional work.
David F: We just went through an exercise of mapping requirements against SOAP.
Mark N: My view of the AM has changed. The scope of the AM is slippery. The AMG should be editors, not designers.
Yves: AM could be used to fill the gap between requirements and SOAP.
Henrik: AMG should not be working on protocol design issues.
Glen: Should be working on higher level issues.
Marwan: Abstract, what is it? Should be written in a way so that it is easily understandable.
David F: We are overtime. How do we get closure? We still have different points of view. What are its purposes? What are its objectives?
Glen: Should we summarize and take an opinion poll?
David F: We could burn lots of time quickly at the face to face. I would like a smaller group of people to formalize the issues around the AMG. I will summarize during the break - come and discuss with me if you want. Take a 20 minute break.
David F: Describe the AM. A Document that describes the model. The Spec will be one realization with corresponding syntax of that model. There could be other syntaxes. There is a certain amount of rigor applied to it. How to get there? From our charter, we say we map against SOAP. We will have a certain set of issues. We will answer those issues. We can use the Schema WG as an example process. A list of issues exist with the AM today, such as the Glossary work. I think that we can ask the AMG to come up with the finite list (the to do list). Now, as we go through the Issues list keep in mind that it might go back into the abstract model to work out and help come up with a solution. We are going to be adding things into it over time. If there are inconsistencies today, we will have to resolve those, but we can't just step into the space of "abstract model" and never leave - it will turn into an endless debate.
Scott: The action item is for the AMG to define a finite list of to-dos with input from the mailing lists.
David F: Yes. The Schema WG had smaller teams to answer certain questions and then came back to a larger body.
Action Item: AMG to create a To Do list and to solicit input from the mailing list. This will be a work list to help show where we are and how far we need to go.
We spent some time figuring out what to do (putting the issues up on the display, figuring out the process, etc). Are we going to answer and solve these issues or just assign owners? The current list is just open, closed, and unassigned. A. Clarify. B. Determine if it is an issue. C. Assign an owner to leave an audit trail. The owner will craft a response.
Some confusion about RPC vs correlation ID vs transaction ID. The issue is meant to be correlation ID, not enterprise transaction manager ID. The requirement is 200. What does "enable support" mean in the requirement? Do it, or allow it to be done? Somewhere in between? Define the header or allow it to be defined? Do we have to know what the issue-creator had in mind? Seems that SOAP is half-way there in defining this correlation ID. SOAP RPC yes. Other request/response no. SOAP says leave it up to the binding but it must be there. What is the SOAP binding role? Can look at it two ways. Requirements from SOAP envelope down to the underlying protocol and requirements from the underlying protocol up to the SOAP envelope. For example, in the HTTP binding you have request/response. In other words, SOAP does not really have r/r built in, but because of the binding to HTTP, it causes SOAP to be used in the r/r mode. We know that we will have something under XML Protocol, but then we have to accept the features of that transport. The requirement says we MUST do it. Do we still leave it to bindings? We must get in the business of correlation ID because we have even called out RPC as a common application, even in SOAP. However, it does not say you must correlate responses with requests. Everyone agrees it is a fine line and it is easy to fall to either side. Needs an owner and an issue. Sounds like the 1-way and 2-way discussion. These need to be cleaned up and probably broken up. Henrik volunteered, but recognized that he is probably not impartial. Some agreement that we need another owner. Jeff Kay is the owner. We want some preliminary report by the next teleconf (3/7).
Action Item: Jeff Kay to have a preliminary report by the next teleconf 3/7.
On future teleconfs we will review issues.
Don't feel like it really precludes intermediaries. What is the role of the underlying protocol? Same issue with ebXML. ebXML used the SOAP action header. Is there an issue with multiple transports? For example TCP on one side and UDP on the other, or multi-cast on one side and uni-cast on the other. Writing the bridges is easier if the semantics are in the content, that is, consistent access to the headers and such rather than rummaging around in the transport details. One interpretation is that SOAP says that headers and bodies are the same with one difference, actors. The body says "I'm the body and something else told you how to get me here." Headers are addressed, bodies are not. The issue is understanding, not arguing a solution. We seem to be breaking the layering rules, but that is OK so we can more easily support multiple protocols. If someone thinks it is not an issue, please describe how it works. Example: HTTP to SMTP. Where does an intermediary get the final address from the hop address? The SOAP solution is to provide a way for new headers to be invented. These headers can have all sorts of routing information. XML Protocol needs to focus on how new headers can be defined, not on actually defining them. We need to make sure that we can come up with an envelope that can support many different types of semantic headers. Are intermediaries higher or lower? Depends. We need a balance between defining the minimal and then also staying ahead of what people will be defining as new headers. We could have some sort of best practice or normative guidelines document. Again, ebXML uses SOAP action as targets, but that always uses MIME. In XML Protocol we should either 1) move towards always MIME or 2) allow some sort of target URI at a layer above the binding. The SOAP HTTP binding has it. SOAP SMTP does not have it. Is the issue clear? Seems to be. Is the URI really a route and a protocol binding, or a destination independent of binding?
Suggested clarification for the issue: The target URI is not represented in the envelope in any normative way. Some debate as to whether the clarification actually helps. Noah will deliver a response to the issue by email before the 3/7 teleconf.
Action Item: Noah will deliver a response to the issue by email prior to the 3/7 teleconf for discussion.
SOAP does not require the method name to be a URI. SOAP requires a namespace-qualified name. Are our requirements wrong? Is this a case where it should go to the AMG? Send back to the issuer and state why this is an issue.
Action Item: Ray Whitmer will send back to the issuer.
The issue should be in the form of a statement rather than a question. Then the statement should state why it is an issue. Does anyone feel that this is an issue? We could treat this as a bigger issue: RPC is just a module, and modules need a standard way to extend status and error messages and faults. RPC might be a good module to have in the spec and it could serve as a template for other modules. What about other modules that need the RPC module? OK, we already recognize the need for composable modules. Seems like we want to do that. Henrik argues against having RPC as a module. What is a module? Have we defined it yet? It seems like a block inside a message. The RPC thing sure seems like a module. Brings up the issue that Eric P. started: What is this body thing? The header and body are really the same - why have two? We have blocks, and blocks go in either the Header (intermediaries) or the Body (final destination). One view is the symmetry of headers and bodies. Headers have actors. Body has 1andFinal as default. If I wanted to tell my cache manager, do I use a header? Are all headers RPCs? We need to be more crisp. One feeling is that the separation between headers and bodies is artificial. SOAP currently says method name and parameters go in the "BODY". No need for this restriction. Would like to see RPC as a module of XP. Back to the issue: are the errors just numbers, a small set, extensions, etc.? Decide if RPC is a module first and that will then inform us as to how to solve this specific issue. Propose moving the question to some other small group than the AMG. We do that not because the AMG is overloaded, but because we want to have multi-processing going on to help us make progress. Propose we close this and open a subgroup. No, we need to leave this open until we have an answer. Need volunteers.
Small group: Ray D, Ray W, Marwan, Henrik, Mark J, Volker W. Ray W will own the issue. Need a response by 3/7. Name: RPC subgroup.
Action Item: RPC Subgroup to have a response by 3/7 teleconf.
Schedule says to move to Glossary, but Martin is the editor and he is not here and we have more issues, so we need to continue with issues. We need to think about schedules. W3C requires a "heartbeat" publication at least every 3 months. Candidates are: AM, Update requirements, Spec, etc. Proposed agenda change: 4:00 - 4:40 Issues. 4:40 - 5:00 schedule and scheduling another F2f.
Are we still looking at the IETF work? yes. Close the issue.
Action Item: David F will be owner and draft a response by 3/7 teleconf.
Make it a glossary/editorial issue.
Now, SOAP has a data model. It is a graph with simple data types, structs, multi-structs, and arrays. There should be a difference between a data model and then an encoding. Better to split them out. In the current spec, the difference is subtle, but there. Yes, carefully adopt the terminology, and then move forward with the distinction. Should we move to pending or leave open with a note? Need to leave open, but there is a concern that we are leaving too many things open for future extensibility. This threatens interoperability. Implicitly we are essentially prioritizing these issues. We are leaving open, not to keep open indefinitely, but to wait until more items get resolved. Still see too much in extensibility. Note that many things are putting interoperability at risk, such as RPC as a module rather than as core. Does module imply optional in people's minds? Yes and no. Some no. What does mandatory mean? "If RPC then RPC module" is different than "all messages are RPC". Other things we don't have for RPC: language binding, interface definition, conventions, etc. Is there a difference between "THE" XMLP RPC module and "A" XMLP RPC module? Relates to conformance. There is a big difference between language bindings (SOAP does not do) and on-the-wire protocol definitions (SOAP does do). Are people using SOAP with or without the SOAP encoding? SOAP only mandates the envelope. If we have different bindings on the client side and both are SOAP compliant, can I get a different result? Should we leave open, close, or introduce a new item? Can we assign an owner? Dave Ezell will own. Have something by 3/7.
Action Item: David Ezell will be owner and draft a response by 3/7 teleconf.
David F put up the schedule slide again. Need to publish by mid March. Obvious candidates: Req, AM, Spec, or Glossary if separate. The spec is technically publishable, but some don't think it would be a good idea. We could make it a better proposal by taking the spec and rolling the issue list (a snapshot of the issues) into it to show the world that there are still many issues. AM is a potential - we all seem to agree that it will be published at some time. Propose reconciling the glossary and publishing as soon as possible. Taking it out of the requirements means putting it into a document that will be published at the same time. Taking it out of a WD doesn't necessarily remove it from the public view. The heartbeat publications were set up when W3C WGs were less public, but we are very public. So, publications seem to imply more agreement rather than public face. Hence, we need to be careful about what we publish. What is the suggestion with the glossary? Do we all think that it needs to move to the AM? No clear answer and no clear WG feeling emerging. Reconciliation should happen independent of how it gets published. What is the upfront time to get it published now that it is already a WD? A few days only. DF recognizes some consensus in doing the work in the Requirements Doc and republishing it for our heartbeat requirement. Verbal vote for yes. The WG decided we will publish a new Req Doc with an updated glossary. Henrik has already updated the editor's copy on the W3C web site. By 3/7 all review. By 3/14 we need the final document. Henrik points out that there are currently no outstanding issues against it. Martin owns the glossary. Stuart will meet with Gudgin to resolve the conflicts.
Action Item: Report from Stuart and Martin on glossary work by 3/7.
So we are not ready to publish the AM and the Spec, but what is our criteria for knowing we are ready to publish?
Admin note: We are at time (5:00), propose 10 minutes. All OK with that.
Should we go off the issues list? Might not be the right metric. We could prioritize the issues and then say publish when all higher priorities are done? David F will break up the issues into two priority groups. Some do not want to publish the docs with issue lists; some think that it is OK. Some are concerned about the risk of the world thinking we are just rubber stamping SOAP, so we need to be very careful in how we couch it. We could just publish the issues list. We could organize the issues into buckets and save a very small number in the top bucket and frame with questions to the public. From the point of view of appearances, we already have SOAP as a public document. It would give the wrong impression to publish with only boiler plate changes. So, agreement that we will not publish the Spec until there are more changes. For the AM, we will proceed on ideas for publishing once we have the issues list.
When is the next F2F?
Charter scheduled one for June. We need one 3 months out in Sep, but for the US attendees, the first Week in Sep is Labor Day, Monday the 3rd. An option is the 10th or 11th. June is a 3 day. Should we make the Sep a 3 day? Probably yes, reserve the right to scale back if needed. 1st F2F east coast US. 2nd F2F west coast US. 3rd east coast US. 4th will be Europe. Seems like the likely candidates are west coast US or Asia.
Take a vote, multiple votes allowed, question "Which locations would I prefer?"
So, west coast 10, 11, 12 of Sept. Wait, what about Monday meeting cutting into weekends. We took a preference vote for Mon, Tue, Wed preference vs Tue Wed Thu preference.
Decided to schedule the meeting for the tue/wed/thu.
Several volunteers for hosting on the west coast:
Action Item: Host volunteers will have 2 weeks to investigate and come back with a firm proposal.
Devel::Profiler - a Perl profiler compatible with dprofpp
To profile a Perl script, run it with a command-line like:
$ perl -MDevel::Profiler script.pl
Or add a line using Devel::Profiler anywhere in your script:
use Devel::Profiler;
Use the script as usual and perform the operations you want to profile. Then run dprofpp to analyze the generated file (called tmon.out):

$ dprofpp

See the dprofpp man page for details on examining the output.
For Apache/mod_perl profiling see the Devel::Profiler::Apache module included with Devel::Profiler. If Devel::DProf works for you then there is no reason to use this module.
I created this module because I desperately needed a profiler to optimize a large Apache/mod_perl application. Devel::DProf, however, insisted on seg-faulting on every request. I spent many days trying to fix Devel::DProf, but eventually I had to admit that I wasn't going to be able to do it. Devel::DProf's virtuoso creator, Ilya Zakharevich, was unable to spend the time to fix it. Game over.
My next stop brought me to Devel::AutoProfiler by Greg London. This module is a pure-Perl profiler. Reading the code convinced me that it was possible to write a profiler without going down the route that led to Devel::DProf's extremely difficult code.
Devel::AutoProfiler is a good module but I found several problems. First, Devel::AutoProfiler doesn't output data in the format used by dprofpp. I like dprofpp - it has every option I want and the tmon.out format it supports is well designed. In contrast, Devel::AutoProfiler stores its profiling data in memory and then dumps it to STDOUT all in one go. As a result, Devel::AutoProfiler is, potentially, a heavy user of memory. Finally, Devel::AutoProfiler has some (seemingly) arbitrary limitations; for example, it won't profile subroutines that begin with "__".
Thus, Devel::Profiler was born - an attempt to create a dprofpp-compatible profiler that avoids Devel::DProf's most debilitating bugs.
The simplest way to use Devel::Profiler is to add it on the command-line before a script to profile:
perl -MDevel::Profiler script.pl
However, if you want to modify the way Devel::Profiler works you'll need to add a line to your script. This allows you to specify options that control Devel::Profiler's behavior. For example, this sets the internal buffer size to 1024 bytes.
use Devel::Profiler buffer_size => 1024;
The available options are listed in the OPTIONS section below.
The available options are:
This option controls the name of the output file. By default this is "tmon.out" and it will be placed in the current directory. If you change this option then you'll have to specify it on the command-line to dprofpp. For example, if you use this line to invoke Devel::Profiler:

use Devel::Profiler output_file => "profiler.out";

Then you'll need to invoke dprofpp like this:

dprofpp profiler.out
Devel::Profiler uses an internal buffer to avoid writing to the disk before and after every subroutine call, which would greatly slow down your program. The default buffer_size is 64k which should be large enough for most uses. If you set this value to 0 then Devel::Profiler will write data to disk as soon as it is available.
Devel::Profiler can skip profiling subroutines in a configurable list of packages. The default list is:
[qw(UNIVERSAL Time::HiRes B Carp Exporter Cwd Config CORE DynaLoader XSLoader AutoLoader)]
You can specify your own array-ref of packages to avoid using this option. Note that by using this option you're overwriting the list, not adding to it. As a result you'll generally want to include many of the packages listed above in your list.
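For example, to keep the default exclusions while also skipping a hypothetical My::Debug package (a sketch; My::Debug is an invented name), you would replace the list wholesale:

```perl
# Sketch: the default bad_pkgs list plus one extra package to skip.
use Devel::Profiler
    bad_pkgs => [qw(UNIVERSAL Time::HiRes B Carp Exporter Cwd Config
                    CORE DynaLoader XSLoader AutoLoader My::Debug)];
```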
In addition, the DB package is always skipped since trying to instrument the subroutines in DB will crash Perl.

Finally, Devel::Profiler never profiles pragmatic modules, which it detects by their being entirely lower-case. Examples of pragmatic modules you've probably heard of are "strict", "warnings", etc.
This option allows you to handle package selection more flexibly by letting you supply a callback that controls which packages are profiled. When the callback returns true the package will be profiled; when it returns false it will not. A false return will also inhibit profiling of child packages, so be sure to allow 'main'!
For example, to never profile packages in the Apache namespace you would write:
use Devel::Profiler package_filter => sub { my $pkg = shift; return 0 if $pkg =~ /^Apache/; return 1; };
The callback is considered after consulting bad_pkgs, so you will still need to modify bad_pkgs if you intend to profile a default member of that list.
If you pass an array-ref to package_filter you can specify a list of filters. These will be consulted in-order with the first to return 0 causing the package to be discarded, like a short-circuiting "and" operator.
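For instance, two filters consulted in order might be combined like this (a sketch; the second package name is invented):

```perl
# Sketch: filters are consulted in order; the first to return 0 wins.
use Devel::Profiler
    package_filter => [
        sub { my $pkg = shift; return 0 if $pkg =~ /^Apache/; return 1; },
        sub { my $pkg = shift; return 0 if $pkg eq 'My::Noisy'; return 1; },
    ];
```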
You can specify an array-ref containing a list of subs not to profile. There are no items in this list by default. Be sure to specify the fully-qualified name - i.e. "Time::HiRes::time" not just "time".
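For example, to exclude the subroutine mentioned above from profiling (a sketch mirroring the option style used throughout this document):

```perl
# Sketch: fully-qualified subroutine names only.
use Devel::Profiler bad_subs => ['Time::HiRes::time'];
```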
The sub_filter option allows you to specify one or more callbacks to be used to decide whether to profile a subroutine or not. The callbacks will receive two parameters - the package name and the subroutine name.
For example, to avoid wrapping all upper-case subroutines:
use Devel::Profiler sub_filter => sub { my ($pkg, $sub) = @_; return 0 if $sub =~ /^[A-Z_]+$/; return 1; };
By default Devel::Profiler will override Perl's builtin caller(). The overridden caller() will ignore the frames generated by Devel::Profiler and keep code that depends on caller() working under the profiler. Set this option to 0 to inhibit this behavior. Be aware that this is likely to break many modules, particularly ones that implement their own exporting.
This variable sets the number of ticks-per-second in the timing routines. By default it is set to 1000, which should be good enough to capture the accuracy of most times() implementations without spamming the output file with timestamps. Setting this too low will reduce the accuracy of your data. In general you should not need to change this setting.
This profiler has a number of inherent weaknesses that should be acknowledged. Here they are:

- It is based on the times() function and as a result it won't work on systems that don't have times().
My todo list - feel free to send me patches for any of these!
I know of no bugs aside from the caveats listed above. If you find one, please file a bug report at:
Alternately you can email me directly at sam@tregar.com. Please include the version of the module and a complete test case that demonstrates the bug.
I learned a great deal from the original Perl profiler, Devel::DProf by Ilya Zakharevich. It provided the design for the output format as well as introducing me to many useful techniques.
Devel::AutoProfiler by Greg London proved to me that a pure-Perl profiler was possible and that it need not rely on the buggy DB facilities. Without seeing this module I probably would have given up on the project entirely.
In addition, the following people have contributed bug reports, feature suggestions and/or code patches:
Automated Perl Test Account, Andreas Marcel Riechert, Simon Rosenthal, Jasper Zhao
Thanks!
This program is free software; you can redistribute it and/or modify it under the same terms as Perl 5 itself.
Sam Tregar <sam@tregar.com>
Devel::DProf, Devel::AutoProfiler
As you have seen from the web service consumer using SOAP, the data being passed back and forth is structured XML inside the body of the HTTP package. In particular, it has the following format:
<Envelope>
  <Header>
    < . . . >
  </Header>
  <Body>
    < . . . >
  </Body>
</Envelope>
We have seen how to pass data from the client to the web service through the Body of the Envelope. In this section, we show how to use the Header of the Envelope.
Through this optional header node, developers can pass information that does not relate to any particular web method, or information that is common to all web methods. Of course, you could pass such information in a method call itself, such as an InitiateService(param) method with a usage rule that it must be called first, but that is not an elegant solution; it is cleaner to add the param to the header. On the other hand, if all web methods of your service require a common parameter, wouldn't it be nice if you could just set up the header once and not have to worry about this parameter for each of the web methods? Examples of header information are a security token, a payment token, priority, and so on; there is no reason to pass these pieces of common information via every web method.
Once you construct your web service to use a SOAP header, the WSDL will instruct the client that a header node is in place so that a web service client knows how to set up the header information before making a call to the web service methods. The following example should provide some clarification:
<%@ WebService Language="C#" Class="TestHeader.PayWS" %>

namespace TestHeader
{
  using System;
  using System.Web;
  using System.Web.Services;
  using System.Web.Services.Protocols;

  public class Payment : SoapHeader
  {
    public string CreditCardNumber;
  }

  [WebService(Namespace="...")]
  public class PayWS : WebService
  {
    public Payment clientPayment;

    [WebMethod, SoapHeader("clientPayment")]
    public string Task1( )
    {
      return string.Format("Task1 performed. " +
                           "Charging $25,000.00 on credit card: {0}",
                           clientPayment.CreditCardNumber);
    }

    [WebMethod, SoapHeader("clientPayment")]
    public string Task2( )
    {
      return string.Format("Task2 performed. " +
                           "Charging $4,500.00 on credit card: {0}",
                           clientPayment.CreditCardNumber);
    }
  }
}
In this example, we create a web service with two methods, Task1 and Task2. Both of these methods use the Payment object in the soap header.
The SOAP request and response format for Task1 follows:
POST /SOAPHeader/PayWS.asmx HTTP/1.1
Host: localhost
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "..."

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="..."
               xmlns:xsd="..."
               xmlns:soap="...">
  <soap:Header>
    <Payment xmlns="...">
      <CreditCardNumber>string</CreditCardNumber>
    </Payment>
  </soap:Header>
  <soap:Body>
    <Task1 xmlns="..." />
  </soap:Body>
</soap:Envelope>

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: length

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="..."
               xmlns:xsd="..."
               xmlns:soap="...">
  <soap:Body>
    <Task1Response xmlns="...">
      <Task1Result>string</Task1Result>
    </Task1Response>
  </soap:Body>
</soap:Envelope>
This is different from the SOAP envelope we saw for PubsWS, where there was no SOAP header. In this case, the header is an object of type Payment, and this Payment class has a string property called CreditCardNumber.
Our example expects the clients of this web service to pass in the Payment object. The following is the client code. Of course, you will have to generate the proxy class for the PayWS web service using wsdl.exe and compile this proxy along with the client code:
public class ClientWS
{
  public static void Main( )
  {
    // Create a proxy.
    PayWS oProxy = new PayWS( );

    // Create the payment header.
    Payment pmt = new Payment( );
    pmt.CreditCardNumber = "1234567890123456";

    // Attach the payment header to the proxy.
    oProxy.PaymentValue = pmt;

    // Call Task1.
    System.Console.WriteLine(oProxy.Task1( ));

    // Call Task2.
    System.Console.WriteLine(oProxy.Task2( ));
  }
}
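For reference, the proxy generation and compilation mentioned above might look like this on the command line. The output paths and the service URL here are assumptions for illustration, not taken from the original text; both tools ship with the .NET Framework SDK:

```
wsdl.exe /language:CS /out:PayWS.cs http://localhost/SOAPHeader/PayWS.asmx?WSDL
csc /out:ClientWS.exe ClientWS.cs PayWS.cs
```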
The output of the client:
Task1 performed. Charging $25,000.00 on credit card: 1234567890123456 Task2 performed. Charging $4,500.00 on credit card: 1234567890123456
The above is a trivial use of a SOAP header. In reality, SOAP headers are usually not as simple as our Payment class.
Microsoft, IBM, and a number of other companies have been working on the Global XML web services Architecture (GXA) that defines a framework on how to build standardized web services incorporating security, reliability, referral, routing, transaction, and so on. A number of GXA specifications use SOAP Header as the means for this infrastructure.
This book is divided into seven chapters, each of which is briefly described here:
Contains a series of introductory hacks, including an overview of what an XML document should look like, how to display an XML document in a browser, how to style an XML document with CSS, and how to use command-line Java applications to process XML.
Teaches you how to edit XML with a variety of editors, including Vim, Emacs, <oXygen/>, and Microsoft Office 2003 applications. Among other things, shows you how to convert a plain text file to XML with xmlspy, translate CSV to XML, and convert HTML to XHTML with HTML Tidy.
Explores many ways that you can use XSLT and other tools to transform XML into CSV, transform an iTunes library (plist) file into HTML, transform XML documents with grep and sed, and generate SVG with XSLT.
Helps you get acquainted with namespaces and RDDL, and describes how to use common XML vocabularies and frameworks such as XHTML, DocBook, RDDL, and RDF in the form of FOAF.
Covers the creation of valid XML using DTDs, XML Schema, RELAX NG, and Schematron. It also explains how to generate schemas from instances, how to generate instances from schemas, and how to convert a schema from one schema language to another.
Teaches you how to subscribe to RSS feeds with news readers; create RSS 0.91, RSS 1.0, RSS 2.0, and Atom documents; and generate RSS from Google queries and with Movable Type templates.
Shows you how to perform XML tasks in an Ant pipeline, how to use Cocoon, and how to process XML documents using DOM, SAX, Genx, and the facilities of C#'s System.Xml namespace, among others.
OpenCV has a modular structure; the package includes several shared or static libraries. The available modules include:
- core - a compact module defining basic data structures, including the dense multi-dimensional array Mat.
The further chapters of the document describe the functionality of each module. But first, make sure to get familiar with the common API concepts used throughout the library.
cv Namespace
All the OpenCV classes and functions are placed into the
cv namespace. Therefore, to access this functionality from your code, use the
cv:: specifier or
using namespace cv; directive:
#include "opencv2/core/core.hpp" ... cv::Mat H = cv::findHomography(points1, points2, CV_RANSAC, 5); ...
or:

#include "opencv2/core/core.hpp"
using namespace cv;
...
Mat H = findHomography(points1, points2, CV_RANSAC, 5);
...

OpenCV also provides the Ptr<> template class, a reference-counting smart pointer similar to std::shared_ptr from C++ TR1. So, instead of using plain pointers:

T* ptr = new T(...);

you can use:

Ptr<T> ptr = new T(...);

That is, Ptr<T> ptr encapsulates a pointer to a T instance and a reference counter associated with the pointer. Here is a complete sample that uses these facilities, capturing frames from a camera and displaying edge-filtered output:

#include "cv.h"
#include "highgui.h"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    Mat frame, edges;
    namedWindow("edges", 1);
    for (;;)
    {
        cap >> frame;
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
Multi-channel array types are specified with the following constants and macros:
- CV_8UC1 ... CV_64FC4 constants (for a number of channels from 1 to 4)
- CV_8UC(n) ... CV_64FC(n) or CV_MAKETYPE(CV_8U, n) ... CV_MAKETYPE(CV_64F, n) macros when the number of channels is more than 4 or unknown at compilation time
Note: errors in OpenCV are reported as exceptions of type cv::Exception, which can be handled with the standard C++ mechanism:

try
{
    ... // call OpenCV
}
catch (cv::Exception& e)
{
    const char* err_msg = e.what();
    std::cout << "exception caught: " << err_msg << std::endl;
}
Introduction
Helm charts are one of the best practices for building efficient clusters in Kubernetes. It is a form of packaging that uses a collection of Kubernetes resources. Helm charts use those resources to define an application.
Helm charts use a template approach to deploy applications. Templates give structure to projects and are suitable for any type of application.
This article provides step-by-step instructions to create and deploy a Helm chart.
Prerequisites
- Access to a CLI
- Minikube cluster installed and configured. (For assistance, follow our guides How to Install Minikube on Ubuntu and How to Install Minikube on CentOS.)
- Helm installed and configured.
Note: To confirm Helm installed properly, run which helm in the terminal. The output should return a path to Helm.
Create Helm Chart
Creating a Helm chart involves creating the chart itself, configuring the image pull policy, and specifying additional details in the values.yaml file.
Step 1: Create a New Helm Chart
1. To create a new Helm chart, use:
helm create <chart name>
For example:
helm create phoenixnap
2. Using the ls command, list the chart structure:
ls <chart name>
The Helm chart directory contains:
- charts directory – Used for adding dependent charts. Empty by default.
- templates directory – Configuration files that deploy in the cluster.
- Chart.yaml file – Outline of the Helm chart structure.
- values.yaml file – Formatting information for configuring the chart.
Step 2: Configure Helm Chart Image Pull Policy
1. Open the values.yaml file in a text editor. Locate the image values:
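In a chart generated by helm create, the image section typically looks like the following (the nginx repository is the default placeholder in recent Helm 3 versions; your chart may differ):

```yaml
image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: ""
```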
There are three possible values for the pullPolicy:
- IfNotPresent – Pulls the image only if it is not already present in the cluster.
- Always – Pulls the image on every restart or deployment.
- Never – Never pulls the image; the image must already be present on the node.
2. Change the image pullPolicy from IfNotPresent to Always:
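After the edit, the image section of values.yaml would read (the nginx repository shown is the helm create default, an assumption here):

```yaml
image:
  repository: nginx
  pullPolicy: Always
  tag: ""
```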
Step 3: Helm Chart Name Override
To override the chart name in the values.yaml file, add values to the nameOverride and fullnameOverride:
For example:
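A sketch of the override values (nameOverride here is illustrative; the fullnameOverride matches the release name used later in the tutorial):

```yaml
nameOverride: "phoenix-app"
fullnameOverride: "phoenix-chart"
```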
Overriding the Helm chart name ensures configuration files also change.
Step 4: Specify Service Account Name
The service account name for the Helm chart is generated when you run the cluster. However, it is good practice to set it manually.
The service account name makes sure the application is directly associated with a controlled user in the chart.
1. Locate the serviceAccount value in the values.yaml file:
2. Specify the name of the service account:
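For example, the serviceAccount section of values.yaml might end up looking like this (the name value is an assumption based on the chart name used in this tutorial):

```yaml
serviceAccount:
  create: true
  annotations: {}
  name: "phoenixnap"
```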
Step 5: Change Networking Service Type
The recommended networking service type for Minikube is NodePort.
1. To change the networking service type, locate the service value:
2. Change the type from ClusterIP to NodePort:
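With the change applied, the service section of values.yaml would read (port 80 is the helm create default):

```yaml
service:
  type: NodePort
  port: 80
```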
Deploy Helm Chart
After configuring the values.yaml file, check the status of your Minikube cluster and deploy the application using Helm commands.
Step 1: Check minikube Status
If Minikube isn’t running, the install Helm chart step returns an error.
1. Check Minikube status with:
minikube status
The status shows up as Running.
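A healthy cluster reports output similar to:

```
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```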
2. If the status shows Stopped, run:
minikube start
The output shows Done and the status changes to Running.
Step 2: Install the Helm Chart
Install the Helm chart using the helm install command:
helm install <full name override> <chart name>/ --values <chart name>/values.yaml
For example:
helm install phoenix-chart phoenixnap/ --values phoenixnap/values.yaml
The helm install command deploys the app. The next steps are printed in the NOTES section of the output.
Step 3: Export the Pod Node Port and IP Address
1. Copy the two export commands from the helm install output.
2. Run the commands to get the Pod node port and IP address:
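For a NodePort service, the export commands printed in the NOTES section look something like the following (the release name and namespace are assumptions matching this tutorial; running them requires the live cluster):

```shell
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services phoenix-chart)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
```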
Step 4: View the Deployed Application
1. Copy and paste the echo command and run it in the terminal to print the IP address and port:
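The echo command combines the exported node IP and port into a URL (the variable names shown are those emitted by helm create's NOTES template, an assumption here):

```shell
echo http://$NODE_IP:$NODE_PORT
```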
2. Copy the link and paste it into your browser, or press CTRL+click to view the deployed application:
Note: Learn how to delete a Helm deployment and namespace to get rid of unwanted or multiple copies of Helm deployments.
Conclusion
After following the outlined step-by-step instructions, you have a Helm chart created, set up, and deployed on a web server. Helm charts simplify application deployment on a Kubernetes cluster.
Now that you have created a Helm chart, learn How to Pull And Push Helm Charts.
Add Helm chart repositories to create more complex applications, learn how to use environment variables with Helm, or learn about other Kubernetes tools next.
Changelog
Revision as of 05:30, 29 September 2021
The midas git repository uses tags to denote specific releases. Tags are of the format midas-YEAR-MONTH-SUFFIX, e.g. midas-2019-06-b. This page details the major changes in each release, and highlights how to update client code to adapt to any changes that are not backwards-compatible.
2020-12
Release
midas-2020-12-a
Relevant elog entry - 2089 (midas-2020-12-a)
Improvements
- New ODB variable /Experiment/Enable sound can be used to globally prevent mhttpd from playing sounds.
- Lazylogger now supports writing data over SFTP.
- modbvalue elements on custom pages now support an onload() callback as well as onchange(). Most elements now also support a data-validate callback.
- Custom pages can now tie a select drop-down box to an ODB value using modbselect.
- Ability to choose whether the code or the current ODB values take precedence for the "Common" settings of an equipment when starting a frontend. See elog thread 2014 for more details, and the "Upgrade guide" below for instructions.
- Minor improvements to mdump program - support for 64-bit data types and ability to load larger events if needed.
- Minor improvements to History plots and Buffers webpage.
- Various bug fixes
Upgrade guide
Updating midas
cd $MIDASSYS
git pull
git checkout midas-2020-12-a
git submodule update
cd build
make install
Updating experiment frontends
Any frontends that are written in the MFE framework need to have an extra variable declared. This dictates whether the "Common" settings for an equipment get overwritten by the defaults specified in the equipment listing part of the code when the frontend is started. The old behaviour was that ODB values are never overwritten by the values in the code.
To keep the old behaviour, add this line to your frontend code (somewhere near the equipment listing, in the global scope):
BOOL equipment_common_overwrite = false;
If you do not add a line like this, your compiler will complain that the variable is undefined.
2020-08
Release
midas-2020-08-a
Relevant elog entry - 1987 (midas-2020-08-a)
Improvements
- C++ ODB interface (think of it like a "magic" std::map that syncs changes with the ODB)
- Image history for logging webcam images and displaying timelapses
- Much improved history plots
- Sequencer page is now javascript-based and more responsive
- UTF-8 clean ODB (complains if any TID_STRING is invalid UTF-8)
- mhttpd updated to mongoose 6.16 with much improved multithreading
- mhttpd updated to use MBEDTLS in preference to problematic OpenSSL
- MidasConfig.cmake contributed by Mathieu Guigue
- Various bug fixes
Upgrade guide
cd $MIDASSYS
git pull
git checkout midas-2020-08-a
git submodule update
cd build
make install
An ODB key has been deprecated / replaced, and you will be warned about this when you restart mlogger:
/Logger/Message file has been replaced by /Logger/Message dir and /Logger/Message file date format. If your "Message file" used to just say "midas.log", then you can just delete the "Message file" key and leave the new entries blank. If you had a more customised value, then read the linked documentation to see how to use the new keys.
2020-03
Release
midas-2020-03-a
Relevant elog entry - 1854 (midas-2020-03-a)
Improvements
- Python library wrapping the C++ library, with framework for writing clients and frontends.
- New ODB tree /Webserver that gathers all mhttpd settings in one place, including the new ability for mhttpd to act as a proxy to other web servers
- New javascript-based sequencer page (via the "NewSequencer" link in the mhttpd menu)
- Various bug fixes
Upgrade guide
cd $MIDASSYS
git pull
git checkout midas-2020-03-a
cd build
make install
Then, restart mhttpd and configure the new /Webserver ODB directory to your liking.
2019-09
Releases
midas-2019-09-a through
midas-2019-09-i
Relevant elog entries - 1706 (midas-2019-09), 1747 (midas-2019-09-e), 1749 (midas-2019-09-g) and 1750 (midas-2019-09-i)
Improvements
- New javascript-based history plots (old image-based plots are available via the "OldHistory" link in the mhttpd menu)
- Reduced memory and CPU usage of mhttpd web pages
- Latest version of mxml
- Support for MySQL 8+, which removed the my_global.h header
- Various bug fixes
Bug fixes
- 187 - drivers/bus/rs232.cxx does not compile after transition to C++
- 186 - drivers/bus/tcpip.cxx does not compile after transition to C++
- 130 - Mysql::Prepare() doesn't recover from MySQL connection going away
- 188 - Shouldn't memset structs containing std::string
Upgrade guide
If updating from a version before 2019-06, take special note of the upgrade instructions for that version - you will have to update your client code to adapt to C++.
# Grab the new code
cd $MIDASSYS
git checkout develop
git pull
git checkout midas-2019-09-a
git pull

# Finally, update mxml by updating the submodules
git submodule update --recursive
cd build
cmake ..
make
make install
2019-06
Releases
midas-2019-06-a,
midas-2019-06-b.
Relevant elog entries - 1564 (midas-2019-06 with cmake and c++) and 1526 (How to convert C midas frontends to C++).
Improvements
- Migration from C to C++. You will have to make changes to your clients (see upgrade guide below).
- Ability to compile using cmake/cmake3 as well as make. To use cmake, you can either build manually with mkdir build; cd build; cmake ..; make; make install, or use the handy shortcut make cmake3. Note that the location where libraries and executables are built has changed - the OS-specific subdirectories (e.g. /linux/lib) have been replaced by a common /lib and /bin.
- mxml and mscb are now included as git submodules. See the "upgrade guide" instructions for how to checkout the latest version of these modules.
Bug fixes
- 183 - mhttpd could write garbage to midas.log, which would break webpages that try to show messages
- 181 - race condition between msequencer and mhttpd could make sequencer webpage buttons useless
- 171 - make RPC calls more thread-safe
- 180 - addition of strcomb1 which is thread-safe; strcomb will be deprecated
- 179 - don't try to send an ODB dump to mlogger's ROOT output, as it would crash mlogger
- 184 - don't trap the user in an error loop if elog doesn't exist
- Minor fixes and stability improvements
Known issues
- cmake/cmake3 - ZLIB support is not detected, so gzipped files cannot be written by the logger. Will be fixed in the next release, or you can update midas to a commit after August 2.
- mxml can segfault due to a double free. Will be fixed in the next release, or you can update mxml to commit f6fc49d (cd mxml; git checkout f6fc49d; cd ..) and re-compile midas.
Upgrade guide
Updating midas
# Grab the new code
cd $MIDASSYS
git checkout develop
git pull
git checkout midas-2019-06-b
git pull
git submodule update --init # this will checkout correct versions of mxml and mscb

# Tidy up the old build - be sure to delete the deprecated linux directory!
make clean
make cclean
rm -rf linux/bin
rm -rf linux/lib
rmdir linux

# Build the new midas
mkdir build
cd build
cmake ..
make
make install
If you have a script that sets up environment variables, you should change PATH from $MIDASSYS/linux/bin to $MIDASSYS/bin.
You should then restart mserver, mlogger, mhttpd, and any other midas programs that you run.
Cleanup old packages that are now submodules
As mxml and mscb are included as submodules now, you can remove the external packages that were downloaded previously (assuming they aren't used by other experiments on the same machine). E.g.
rm -r $HOME/packages/mxml # new location $MIDASSYS/mxml
rm -r $HOME/packages/mscb # new location $MIDASSYS/mscb
Update experiment frontends
The migration from C to C++ is one of the biggest user-facing changes in midas for a long time. Unfortunately it requires manual work from experimenters to update their client code:
- Update your Makefile
- Update library search path for code that links against libmidas or mfe.o ($MIDASSYS/linux/lib becomes $MIDASSYS/lib)
- If you reference mxml in your Makefile, change the include path to $MIDASSYS/mxml
- If you explicitly have the compiler as gcc, change it to g++
- Update frontend code to use mfe.h and build as C++
- Add #include "mfe.h" after including midas.h
- Remove extern "C" brackets around mfe-related code. Ideally there should be no extern "C" brackets anywhere.
- Ensure that frontend_name and frontend_file_name are const char* rather than char*.
- If you define your own global HNDLE hDB, change it to extern HNDLE hDB to pick up the one provided by mfe.
- Ensure that poll_event and interrupt_configure() use INT rather than INT[] for the source argument.
- If you use extern int frontend_index, change it to use the get_frontend_index() function from mfe.h instead.
- Ensure that the last argument to bk_create is cast to (void**)
- Try to compile, and fix any more compilation errors. Examples may include:
- Duplicate or mismatched declarations of functions defined in mfe.h (fix the mismatched declarations in your code)
- bool debug colliding with declaration in mfe.h (suggest to rename the variable in the client)
- Return value of malloc() etc. needs to be cast to the correct data type (e.g. char* s = (char*)malloc(...))
An example diff of a frontend is:
#include "midas.h" + #include "mfe.h" #include "msystem.h" #include "utils1.h" - #ifdef __cplusplus - extern "C" { - #endif - char *frontend_name = FE_NAME; + const char *frontend_name = FE_NAME; - HNDLE hDB; + extern HNDLE hDB; - #ifdef __cplusplus - } - #endif - extern "C" int frontend_index; INT frontend_init() { - printf("We are running as frontend index %d", frontend_index); + printf("We are running as frontend index %d", get_frontend_index()); return SUCCESS; } - extern "C" INT interrupt_configure(INT cmd, INT[] source, PTYPE adr) { + INT interrupt_configure(INT cmd, INT source, PTYPE adr) { return 0; }
2019-05
Releases
midas-2019-05-cxx and
midas-2019-05-before-cmake were development tags that are not intended to be used more widely.
2019-03
Releases
midas-2019-03-f through
midas-2019-03-h.
Relevant elog entries: 1513 (midas-2019-03-f), 1530 (midas-2019-03-g), 1543 (midas-2019-03-h).
Improvements
- ODB and event buffer access is now fully thread-safe
- Stability improvements
Bug fixes
- 99 - Protect against recursive calls to db_lock_database() that could cause corruption
- 173 - Fix redirection from old URL scheme to new scheme
- 167 - Avoid RPC timeouts for clients that have already been closed
- 115 - Better error messages if events are too big
Upgrade guide
cd $MIDASSYS
git pull
git checkout midas-2019-03-h
make clean
make
2019-02
Releases
midas-2019-02-a and
midas-2019-02-b.
Improvements
- Format of ODB dumps in midas files can now be specified using new ODB key /Logger/Channels/<N>/Settings/ODB dump format. Note that the default dump format is now json; if your analysis tools parse the ODB dump at the start/end of your midas files, you may want to change the setting to xml.
- Location of most recent end-of-run ODB dump can be set using new ODB key /Logger/ODB Last Dump File.
- New mhttpd_exec_script() javascript function to execute a custom script.
- Rationalised and more secure URL scheme for webpages served by mhttpd
- Conversion of many midas pages to more responsive design
- Stability and usability improvements
Bug fixes
- 58 - Reduce data rate of test programs
- 153 - Simplified creation of webpage aliases in the /Alias tree
- 136 - Fix bug in db_validate_open_records() and 'sor' command in odbedit that could cause an abort
Upgrade guide
cd $MIDASSYS
git pull
git checkout midas-2019-02-b
make clean
make
2018-12
Release
midas-2018-12-a.
2017-10
Releases
midas-2017-10-a.
2017-07
Releases
midas-2017-07-a through
midas-2017-07-c.
The QModemCallProvider class implements a mechanism for AT-based phone call providers to hook into the telephony system. More...
#include <QModemCallProvider>
Inherits QPhoneCallProvider.
The QModemCallProvider class implements a mechanism for AT-based phone call providers to hook into the telephony system.
This class provides a number of methods, such as dialVoiceCommand(), releaseCallCommand(), putOnHoldCommand(), etc, that can be used to customize the AT commands that are used by QModemCall for specific operations. Modem vendor plug-ins override these methods in their own phone call provider.
Client applications should use QPhoneCall and QPhoneCallManager to make and receive phone calls. The QModemCallProvider class is intended for the phone server.
QModemCall instances are created by the QModemCallProvider::create() function. If a modem vendor plug-in needs to change some of the functionality in this class, they should do the following:
See the documentation for QModemCall for more information.
See also QModemCall, QModemDataCall, QPhoneCallProvider, and QPhoneCallImpl.
This enum defines the behavior of the ATD modem command when dialing voice calls.
Constructs a new AT-based phone call provider for service.
Destroys this AT-based phone call provider and all QPhoneCallImpl instances associated with it.
Aborts an ATD dial command for the call modemIdentifier. The scope parameter is passed from QModemCall::hangup().
The default implementation calls atchat()->abortDial(), followed by either AT+CHLD=1 or AT+CHLD=1n depending upon the value of scope.
See also QAtChat::abortDial() and QModemCall::hangup().
Returns the AT command to use to accept an incoming call. If otherActiveCalls is true, then there are other active calls within the system. The default implementation returns AT+CHLD=2 if otherActiveCalls is true, or ATA otherwise.
See also setBusyCommand() and QModemCall::accept().
Returns the AT command to use to activate the call modemIdentifier. If otherActiveCalls is true, then there are other active calls within this system. The default implementation returns AT+CHLD=2modemIdentifier if otherActiveCalls is true, or AT+CHLD=2 if the call being activated is the only one in the system.
See also activateHeldCallsCommand(), putOnHoldCommand(), and QModemCall::activate().
Returns the AT command to use to place the currently active calls on hold and activate the held calls. The default implementation returns AT+CHLD=2.
See also activateCallCommand(), putOnHoldCommand(), and QModemCall::activate().
Returns the AT chat handler for this modem call provider.
Returns the behavior of the ATD modem command when dialing voice calls.
The defined behavior in 3GPP TS 27.007 is for the ATD command to immediately return to command mode when it has a trailing semi-colon (i.e. ATDnumber;), even if the call has not yet connected. Not all modems support this correctly.
This function can be used to alter how the system uses the ATD command when dialing voice calls to accommodate non-standard modems.
AtdOkIsConnect indicates that ATD blocks until the call has connected before reporting OK. AtdOkIsDialing and AtdOkIsDialingWithStatus indicates that the modem obeys 3GPP TS 27.007 and returns immediately to command mode.
In the case of AtdOkIsDialing, the AT+CLCC command is polled to determine when the call transitions from dialing to connected. The modem vendor plug-in can avoid polling if it has call status tracking. In that case it should return AtdOkIsDialingWithStatus from this function and call setState() once the transition occurs.
AtdUnknown indicates that it is not known which of these modes is supported by the modem. A separate AT+CLCC command is used to determine what state the call is actually in once ATD reports OK.
The default implementation returns AtdUnknown.
See also QModemCallProvider::AtdBehavior.
Returns the phone call associated with modem identifier id. Returns null if there is no such call.
Returns the AT command to use to deflect the incoming call to number. The default implementation returns AT+CTFR=number.
See also QModemCall::transfer().
Returns the AT command to use to dial the supplementary service specified by options. The default implementation returns ATDnumber.
See also dialVoiceCommand().
Returns the AT command to use to dial the voice call specified by options. The default implementation returns ATDnumber[i][g]; where the i flag indicates the caller ID disposition and the g flag indicates the closed user group disposition.
See also QDialOptions::number(), QDialOptions::callerId(), QDialOptions::closedUserGroup(), and dialServiceCommand().
Returns the phone call object associated with the current dialing or alerting call. Returns null if there is no such call.
See also incomingCall().
Returns the AT commands to use to set up a GPRS session. The returned list should not contain AT+CGDCONT and ATD*99***1# as these commands are automatically inserted. The default implementation returns AT+CGATT=1.
Called by the modem vendor plug-in to indicate that it has determined that call was hung up remotely. This is usually called in response to a proprietary unsolicited result code.
If call is null, then the modem vendor plug-in has determined that the active call has hung up but it was unable to be any more precise than that.
See also QModemCall::hangup().
Returns true if the modem automatically re-sends RING every few seconds while a call is incoming, and stops sending RING if the caller hangs up. Returns false if the modem does not resend RING every few seconds and instead uses some other mechanism to notify Qt Extended that a remote hangup has occurred. The default return value is true.
See also ringing().
Returns the phone call object associated with the current incoming call. Returns null if there is no incoming call.
See also dialingCall().
Returns the AT command to use to join the active and held calls into a single multi-party conversation. If detachSubscriber is true, then detach the local party from the conversation after joining the calls. The default implementation returns AT+CHLD=4 if detachSubscriber is true, or AT+CHLD=3 otherwise.
See also QModemCall::join().
Returns the serial multiplexer for this modem call provider.
Allocates the next modem identifier in rotation that is not currently used by a call.
Returns true if a particular kind of call is part of the normal hold group. On some systems, video calls are separate from the call grouping for voice and fax calls. Returns true only for Voice by default. The type parameter indicates the type of call (Voice, Video, Fax, etc).
Returns the AT command to use to put the currently active calls on hold. The default implementation returns AT+CHLD=2.
See also activateCallCommand(), activateHeldCallsCommand(), and QModemCall::hold().
Returns the AT command to use to release all active calls. The default implementation returns AT+CHLD=1.
See also releaseCallCommand() and releaseHeldCallsCommand().
Returns the AT command to use to release the call modemIdentifier. The default implementation returns AT+CHLD=1modemIdentifier.
See also releaseActiveCallsCommand() and releaseHeldCallsCommand().
Returns the AT command to use to release all held calls. The default implementation returns AT+CHLD=0.
See also releaseCallCommand() and releaseActiveCallsCommand().
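As a rough sketch of how these default command strings fit together, here they are as plain C++ string builders. The function names mirror the methods described above, but this is illustrative code, not the Qt Extended implementation:

```cpp
#include <string>

// Stand-ins for the default command strings documented above.
// Plain C++ for illustration only; the real methods are virtual
// members of QModemCallProvider that vendor plug-ins may override.
std::string releaseHeldCallsCommand()   { return "AT+CHLD=0"; }
std::string releaseActiveCallsCommand() { return "AT+CHLD=1"; }
std::string holdCallsCommand()          { return "AT+CHLD=2"; }

// AT+CHLD=1n releases only the call with modem identifier n.
std::string releaseCallCommand(int modemIdentifier) {
    return "AT+CHLD=1" + std::to_string(modemIdentifier);
}
```

A vendor plug-in that needs a proprietary release command would override the corresponding method and return its own string instead.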
Called when the modem resets after PIN entry, to initialize the call provider.
Resolves a call mode as reported by the AT+CLCC command.
See also resolveRingType().
Resolves a call type on a "+CRING" notification into the particular type of call that it represents. The type is guaranteed to be in lower case on entry to this function.
See also resolveCallMode().
Sets the ringing state in the phone library, indicating a call from number of the type callType. Modem vendor plug-ins call this if the modem reports incoming calls with something other than RING or +CRING.
Either number or callType may be empty indicating that the information is not available yet. The information may become available shortly on a different unsolicited result code (e.g. +CLIP or +CCWA).
If modemIdentifier is not zero, it indicates the modem identifier for the call. If modemIdentifier is zero, the next available modem identifier is used.
See also QModemCall::callType(), QModemCall::number(), QModemCall::modemIdentifier(), and hasRepeatingRings().
Returns the modem service that this call provider is associated with.
Returns the AT command to use to reject the incoming call and set the busy state for the caller. The default implementation returns AT+CHLD=0.
See also acceptCallCommand() and QModemCall::hangup().
https://doc.qt.io/archives/qtextended4.4/qmodemcallprovider.html
Java is a lot like C, which makes it relatively easy for C programmers to learn. But there are a number of important differences between C and Java, such as the lack of a preprocessor, the use of 16-bit Unicode characters, and the exception handling mechanism. This chapter explains those differences, so that programmers who already know C can start programming in Java right away!
This chapter also points out similarities and differences between Java and C++. C++ programmers should beware, though: While Java borrows a lot of terminology and even syntax from C++, the analogies between Java and C++ are not nearly as strong as those between Java and C. C++ programmers should be careful not to be lulled into a false sense of familiarity with Java just because the languages share a number of keywords!
One of the main areas in which Java differs from C, of course, is that Java is an object-oriented language and has mechanisms to define classes and create objects that are instances of those classes. Java's object-oriented features are a topic for a chapter of their own, and they'll be explained in detail in Section 3, Classes and Objects in Java.
In this chapter:
A program in Java consists of one or more class definitions, each of which has been compiled into its own .class file of Java Virtual Machine object code. One of these classes must define a method main(), which is where the program starts running.
To invoke a Java program, you run the Java interpreter, java, and specify the name of the class that contains the main() method. You should omit the .class extension when doing this. Note that a Java applet is not an application--it is a Java class that is loaded and run by an already running Java application such as a Web browser or applet viewer.
The main() method that the Java interpreter invokes to start a Java program must have the following prototype:
public static void main(String argv[])

The Java interpreter runs until the main() method returns, or until the interpreter reaches the end of main(). If no threads have been created by the program, the interpreter exits. Otherwise, the interpreter continues running until the last thread terminates.
public class echo {
    public static void main(String argv[]) {
        for(int i=0; i < argv.length; i++)
            System.out.print(argv[i] + " ");
        System.out.print("\n");
        System.exit(0);
    }
}
String homedir = System.getProperty("user.home");
String debug = System.getProperty("myapp.debug");

The Java interpreter automatically defines a number of standard system properties when it starts up. You can insert additional property definitions into the list by specifying the -D option to the interpreter:
% java -Dmyapp.debug=true myapp

See Section 13, System Properties and Applet Parameters, for more information on system properties.
david.games.tetris.SoundEffects.play()
A file of Java source code should have the extension .java. It consists of one or more class definitions. If more than one class is defined in a .java file, only one of the classes may be declared public (i.e., available outside of the package), and that class must have the same name as the source file (minus the .java extension, of course). If a source file contains more than one class definition, those classes are compiled into multiple .class files.
-------------------------------------------------------------- Package name | Contents ---------------+---------------------------------------------- java.applet | Classes for implementing applets java.awt | Classes for graphics, text, windows, and GUIs java.awt.image | Classes for image processing java.awt.peer | Interfaces for a platform-independent GUI toolkit java.io | Classes for all kinds of input and output java.lang | Classes for the core language java.net | Classes for networking java.util | Classes for useful data types ---------------+----------------------------------------------
setenv CLASSPATH .:~/classes:/usr/local/classes

This tells Java to search in and beneath the specified directories for non-system classes.
If the package statement is omitted from a file, the code in that file is part of an unnamed default package. This is convenient for small test programs, or during development, because it means that the code can be interpreted from the current directory.
Any number of import statements may appear in a Java program. They must appear, however, after the optional package statement at the top of the file, and before the first class or interface definition in the file.
There are three forms of the import statement:
import package ;
import package.class ;
import package.* ;

The first form allows the specified package to be known by the name of its last component. For example, the following import statement allows java.awt.image.ImageFilter to be called image.ImageFilter:
import java.awt.image;

The second form allows the specified class in the specified package to be known by its class name alone. Thus, this import statement allows you to type Hashtable instead of java.util.Hashtable:
import java.util.Hashtable;

Finally, the third form of the import statement makes all classes in a package available by their class name. For example, the following import statement is implicit (you need not specify it yourself) in every Java program:
import java.lang.*;

It makes the core classes of the language available by their unqualified class names. If two packages imported with this form of the statement contain classes with the same name, it is an error to use either of those ambiguous classes without using its fully qualified name.
public final class Math {
    ...
    public static final double PI = 3.14159.....;
    ...
}

Note two things about this example. First, the C convention of using CAPITAL letters for constants is also a Java convention. Second, note the advantage Java constants have over C preprocessor constants: Java constants have globally unique hierarchical names, while constants defined with the C preprocessor always run the risk of a name collision. Also, Java constants are strongly typed and allow better type-checking by the compiler than C preprocessor constants.
Furthermore, Java does not make the distinction between declaring a variable or procedure and defining it that C does. This means that there is no need for C-style header files or function prototypes--a single Java object file serves as the interface definition and implementation for a class.
Java does have an import statement, which is superficially similar to the C preprocessor #include directive. What this statement does, however, is tell the compiler that the current file is using the specified classes, or classes from the specified package, and allows us to refer to those classes with abbreviated names. For example, since the compiler implicitly imports all the classes of the java.lang package, we can refer to the constant java.lang.Math.PI by the shorter name Math.PI.
While Java does not define explicit constructs for conditional compilation, a good Java compiler (such as Sun's javac) performs conditional compilation implicitly--that is, it does not compile code if it can prove that the code will never be executed. Generally, this means that code within an if statement testing an expression that is always false is not included. Thus, placing code within an if (false) block is equivalent to surrounding it with #if 0 and #endif in C.
Conditional compilation also works with constants, which, as we saw above, are static final variables. A class might define the constant like this:
private static final boolean DEBUG = false;

With such a constant defined, any code within an if (DEBUG) block is not actually compiled into the class file. To activate debugging for the class, it is only necessary to change the value of the constant to true and recompile the class.
If two-byte characters seem confusing or intimidating to you, fear not. The Unicode character set is compatible with ASCII and the first 256 characters (0x0000 to 0x00FF) are identical to the ISO8859-1 (Latin-1) characters 0x00 to 0xFF. Furthermore, the Java language design and the Java String API make the character representation entirely transparent to you. If you are using only Latin-1 characters, there is no way that you can even distinguish a Java 16-bit character from the 8-bit characters you are familiar with. For more information on Unicode, see Section 16, The Unicode Standard.
Most platforms cannot display all 34,000 currently defined Unicode characters, so Java programs may be written (and Java output may appear) with special Unicode escape sequences. Anywhere within a Java program (not only within character and string literals), a Unicode character may be represented with the Unicode escape sequence \uxxxx, where xxxx is a sequence of four hexadecimal digits.
Java also supports all of the standard C character escape sequences, such as \n, \t, and \xxx (where xxx is three octal digits). Note, however, that Java does not support line continuation with \ at the end of a line. Long strings must either be specified on a single long line, or they must be created from shorter strings using the string concatenation (+) operator. (Note that the concatenation of two constant strings is done at compile-time rather than at run-time, so using the + operator in this way is not inefficient.)
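Splitting a long string with + can be sketched like this (a minimal example of my own, not from the book; since both operands are compile-time constants, the concatenation happens at compile time):

```java
// A long string built from shorter constants with the + operator.
// Both pieces are constants, so the compiler joins them itself and
// no run-time concatenation occurs.
public class LongString {
    static final String MSG = "This is a long string that would not fit " +
                              "comfortably on a single source line.";

    public static void main(String[] args) {
        System.out.println(MSG);
    }
}
```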
There are two important differences between Unicode escapes and C-style escape characters. First, as we've noted, Unicode escapes can appear anywhere within a Java program, while the other escape characters can appear only in character and string constants.
The second, and more subtle, difference is that Unicode \u escape sequences are processed before the other escape characters, and thus the two types of escape sequences can have very different semantics. A Unicode escape is simply an alternative way to represent a character that may not be displayable on certain (non-Unicode) systems. Some of the character escapes, however, represent special characters in a way that prevents the usual interpretation of those characters by the compiler. The following examples make this difference clear. Note that \u0022 and \u005c are the Unicode escapes for the double-quote character and the backslash character.
// \" represents a " character, and prevents the normal
// interpretation of that character by the compiler.
// This is a string consisting of a double-quote character.
String quote = "\"";

// We can't represent the same string with a single Unicode escape.
// \u0022 has exactly the same meaning to the compiler as ".
// The string below turns into """: an empty string followed
// by an unterminated string, which yields a compilation error.
String quote = "\u0022";

// Here we represent both characters of an \" escape as
// Unicode escapes. This turns into "\"", and is the same
// string as in our first example.
String quote = "\u005c\u0022";
--------+-------------------+---------+---------+----------------------------
Type    | Contains          | Default | Size    | Min Value, Max Value
--------+-------------------+---------+---------+----------------------------
boolean | true or false     | false   | 1 bit   | N.A.
char    | Unicode character | \u0000  | 16 bits | \u0000 to \uFFFF
byte    | signed integer    | 0       | 8 bits  | -128 to 127
short   | signed integer    | 0       | 16 bits | -32768 to 32767
int     | signed integer    | 0       | 32 bits | -2147483648 to 2147483647
long    | signed integer    | 0       | 64 bits | -9223372036854775808 to
        |                   |         |         | 9223372036854775807
float   | IEEE 754          | 0.0     | 32 bits | +-1.40239846E-45 to
        | floating-point    |         |         | +-3.40282347E+38
double  | IEEE 754          | 0.0     | 64 bits | +-4.94065645841246544E-324 to
        | floating-point    |         |         | +-1.79769313486231570E+308
--------+-------------------+---------+---------+----------------------------
b = (i != 0);  // integer-to-boolean: non-0 -> true; 0 -> false
i = (b)?1:0;   // boolean-to-integer: true -> 1; false -> 0
In C, you can manipulate a value by reference by taking its address with the & operator, and you can "dereference" an address with the * and -> operators. These operators do not exist in Java. Primitive types are always passed by value; arrays and objects are always passed by reference.
Because objects are passed by reference, two different variables may refer to the same object:
Button p, q;
p = new Button();         // p refers to a Button object
q = p;                    // q refers to the same Button.
p.setLabel("Ok");         // A change to the object through p...
String s = q.getLabel();  // ...is also visible through q.
                          // s now contains "Ok".
This is not true of primitive types, however:
int i = 3;  // i contains the value 3.
int j = i;  // j contains a copy of the value in i.
i = 2;      // Changing i doesn't change j.
            // Now, i == 2 and j == 3.
Button a = new Button("Okay");
Button b = new Button("Cancel");
a = b;

After these lines are executed, the variable a contains a reference to the object that b refers to. The object that a used to refer to is lost.
To copy the data of one object into another object, use the clone() method:
Vector b = new Vector();
Vector c = (Vector) b.clone();

After these lines run, the variable c refers to an object that is a duplicate of the object referred to by b. Note that not all types support the clone() method. Only classes that implement the Cloneable interface may be cloned. Look up java.lang.Cloneable and java.lang.Object.clone() in Section 23, The java.lang Package, for more information on cloning objects.
Arrays are also reference types, and assigning an array simply copies a reference to the array. To actually copy the values stored in an array, you must assign each of the values individually or use the System.arraycopy() method.
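The difference between copying a reference and copying the values can be sketched like this (a minimal example of my own, not from the book):

```java
// Reference assignment vs. an actual element copy with System.arraycopy().
public class ArrayCopyDemo {
    // Returns a new array containing copies of src's elements.
    static int[] copyOf(int[] src) {
        int[] copy = new int[src.length];
        System.arraycopy(src, 0, copy, 0, src.length);
        return copy;
    }

    public static void main(String[] args) {
        int[] src = {1, 2, 3};
        int[] alias = src;          // copies only the reference
        int[] copy = copyOf(src);   // copies the values

        src[0] = 99;
        System.out.println(alias[0]);  // 99: the alias sees the change
        System.out.println(copy[0]);   // 1: the copy does not
    }
}
```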
There are two reasons for these restrictions:
In Java, null is a reserved keyword, unlike NULL in C, where it is just a constant defined to be 0. null is an exception to the strong typing rules of Java--it may be assigned to any variable of reference type (i.e., any variable which has a class, interface, or array as its type).
null cannot be cast to any primitive type, including integral types and boolean. It should not be considered equal to zero (although it may well be implemented this way).
java.awt.Button b = new java.awt.Button();
ComplexNumber c = new ComplexNumber(1.0, 1.414);

There are actually two other ways to create an object. First, you can create a String object simply by enclosing characters in double quotes:
String s = "This is a test";

Because strings are used so frequently, the Java compiler provides this technique as a shortcut. The second alternative way to create objects is by calling the newInstance() method of a Class object. This technique is generally used only when dynamically loading classes, so we won't discuss it here.
The memory for newly created objects is dynamically allocated. Creating an object with new in Java is like calling malloc() in C to allocate memory for an instance of a struct. It is also, of course, a lot like using the new operator in C++. (Below, though, we'll see where this analogy to malloc() in C and new in C++ breaks down.)
ComplexNumber c = new ComplexNumber();
c.x = 1.0;
c.y = -1.414;

This syntax is reminiscent of accessing the fields of a struct in C. Recall, though, that Java objects are always accessed by reference, and that Java performs any necessary dereferencing for you. Thus, the dot in Java is more like -> in C. Java hides the fact that there is a reference here in an attempt to make your programming easier.
The other difference between C and Java when accessing objects is that in Java you refer to an object's methods as if they were fields in the object itself:
ComplexNumber c = new ComplexNumber(1.0, -1.414); double magnitude = c.magnitude();
In fact, this isn't the case. Java uses a technique called garbage collection to automatically detect objects that are no longer being used (an object is no longer in use when there are no more references to it) and to free them. This means that in our programs, we never need to worry about freeing memory or destroying objects--the garbage collector takes care of that.
If you are a C or C++ programmer, it may take some getting used to to just let allocated objects go without worrying about reclaiming their memory. Once you get used to it, however, you'll begin to appreciate what a nice feature this is. We'll discuss garbage collection in more detail in the next chapter.
The second way to create an array is with a static initializer.
Arrays are automatically garbage collected, just like objects are.
The evidence suggests that arrays are, in fact, objects. Java defines enough special syntax for arrays, however, that it is still most useful to consider them a different kind of reference type than objects.
An important feature of String objects is that they are immutable--i.e., there are no methods defined that allow you to change the contents of a String. If you need to modify the contents of a String, you have to create a StringBuffer object from the String object, modify the contents of the StringBuffer, and then create a new String from the contents of the StringBuffer.
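The String-to-StringBuffer-to-String round trip described above looks like this in practice (a small sketch of my own, not from the book):

```java
// Modifying an immutable String by going through a StringBuffer.
public class BufferDemo {
    static String shout(String s) {
        StringBuffer buf = new StringBuffer(s);  // copy the String's contents
        buf.append("!");                         // StringBuffer is mutable
        return buf.toString();                   // new String from the buffer
    }

    public static void main(String[] args) {
        System.out.println(shout("Java"));  // Java!
    }
}
```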
Note that it is moot to ask whether Java strings are terminated with a NUL character (\u0000) or not. Java performs run-time bounds checking on all array and string accesses, so there is no way to examine the value of any internal terminator character that appears after the last character of the string.
Both the String and StringBuffer classes are documented in Section 23, The java.lang Package, and you'll find a complete set of methods for string handling and manipulation there. Some of the more important String methods are: length(), charAt(), equals(), compareTo(), indexOf(), lastIndexOf(), and substring().
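A few of those methods in action (a small sketch, not from the book):

```java
// Exercising some of the String methods listed above on one value.
public class StringDemo {
    public static void main(String[] args) {
        String s = "Java in a Nutshell";
        System.out.println(s.length());         // 18
        System.out.println(s.charAt(0));        // J
        System.out.println(s.indexOf("Nut"));   // 10
        System.out.println(s.substring(0, 4));  // Java
        System.out.println(s.equals("java in a nutshell"));  // false
    }
}
```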
Prec. | Operator   | Operand Type(s)   | Assoc. | Operation Performed
------+------------+-------------------+--------+--------------------------------
  1   | ++         | arithmetic        |   R    | pre- or post-increment (unary)
      | --         | arithmetic        |   R    | pre- or post-decrement (unary)
      | +, -       | arithmetic        |   R    | unary plus, unary minus
      | ~          | integral          |   R    | bitwise complement (unary)
      | !          | boolean           |   R    | logical complement (unary)
      | (type)     | any               |   R    | cast
  2   | *, /, %    | arithmetic        |   L    | multiplication, division, remainder
  3   | +, -       | arithmetic        |   L    | addition, subtraction
      | +          | String            |   L    | string concatenation
  4   | <<         | integral          |   L    | left shift
      | >>         | integral          |   L    | right shift with sign extension
      | >>>        | integral          |   L    | right shift with zero extension
  5   | <, <=      | arithmetic        |   L    | less than, less than or equal
      | >, >=      | arithmetic        |   L    | greater than, greater than or equal
      | instanceof | object, type      |   L    | type comparison
  6   | ==         | primitive         |   L    | equal (have identical values)
      | !=         | primitive         |   L    | not equal (have different values)
      | ==         | object            |   L    | equal (refer to same object)
      | !=         | object            |   L    | not equal (refer to different objects)
  7   | &          | integral          |   L    | bitwise AND
      | &          | boolean           |   L    | boolean AND
  8   | ^          | integral          |   L    | bitwise XOR
      | ^          | boolean           |   L    | boolean XOR
  9   | |          | integral          |   L    | bitwise OR
      | |          | boolean           |   L    | boolean OR
 10   | &&         | boolean           |   L    | conditional AND
 11   | ||         | boolean           |   L    | conditional OR
 12   | ?:         | boolean, any, any |   R    | conditional (ternary) operator
 13   | =          | variable, any     |   R    | assignment
      | *=, /=, %=,| variable, any     |   R    | assignment with operation
      | +=, -=,    |                   |        |
      | <<=, >>=,  |                   |        |
      | >>>=, &=,  |                   |        |
      | ^=, |=     |                   |        |
------+------------+-------------------+--------+--------------------------------
Note that because variable declaration syntax also uses the comma, the Java syntax allows you to either specify multiple comma-separated initialization expressions or to declare and initialize multiple comma-separated variables of the same type. You may not mix variable declarations with other expressions. For example, the following for loop declares and initializes two variables that are valid only within the for loop. Variables by the same name outside of the loop are not changed.
int j = -3;  // this j remains unchanged.
for(int i=0, j=10; i < j; i++, j--)
    System.out.println("k = " + i*j);
synchronized (expression) statement

(Before executing statement, the interpreter obtains an exclusive lock on the object or array that results from evaluating expression.)
package games.tetris;
import java.applet.*;
import java.awt.*;
Don't let this freedom make you sloppy, however! For someone reading your program, it is nice to have variable declarations grouped together in one place. As a rule of thumb, put your declarations at the top of the block, unless you have some good organizational reason for putting them elsewhere.
Java allows very flexible forward references. A method may refer to a variable or another method of its class, regardless of where in the current class the variable or method is defined. Similarly, it may refer to any class, regardless of where in the current file (or outside of the file) that class is defined. The only place that forward references are not allowed is in variable initialization. A variable initializer (for local variables, class variables, or instance variables) may not refer to other variables that have not yet been declared and initialized.
Java differs from C (and is similar to C++) in that methods that take no arguments are declared with empty parentheses, not with the void keyword. Also unlike C, Java does not have any void * type, nor does it require a (void) cast in order to correctly ignore the result returned by a call to a non-void method.
Java in a Nutshell | java.oreilly.com
http://oreilly.com/catalog/javanut/excerpt/
, to integrating with other hardware… even the Arduino.
It’s a reference book more than anything else, full of useful advice from someone who I figure must’ve spent a fair amount of time messing around with the Pi before writing the book. It gets you started, goes into some detail, and often includes references for more information.
First, a brief break down of what’s in the book…
Chapters 1 – 4 cover basic hardware and software issues, like connecting peripherals and installing Raspbian. If you’ve just unpacked your first Pi, this’ll get you going.
Chapters 5 – 7 describe the Python language. If you’re brand new to it (most of your programming on the Pi will be in Python), start at the beginning. Otherwise, skim 6 and check out 7 for more advanced topics.
Chapter 8 talks about capturing images *and image processing* on the Pi. More on that below.
Chapters 9 – 14 cover a lot of different hardware (200 pages), starting with GPIO pins, breadboards and HATs, then covering LEDs, buzzers and relays, motors and robots, and various sensors and other modules. I would’ve liked a little more detail on some topics, like relays and the different uses for resistors, but the book leans towards breadth not depth.
Chapter 15 gets into the wonderful (sometimes misguided) world of IOT. There’s a lot of cool potential here, but when you come up with The Next Big Thing make sure you don’t end up on here. 😏
Chapter 16 discusses possibilities for interfacing with an Arduino using Firmata and pyFirmata, or using Arduino components (like LCD displays) via the AlaMode interface board. Good stuff.
A couple Python take-aways:
A call to random.choice(your_array) will select a random item from an array.
The pickle module lets you save a data structure to a file and reload it later – very convenient!
import pickle

# Save a dictionary into a pickle file.
pickle.dump({"lion": "yellow", "kitty": "red"}, open("save.p", "wb"))

# Load the dictionary back from the pickle file.
favorite_color = pickle.load(open("save.p", "rb"))
# favorite_color == {"lion": "yellow", "kitty": "red"}
- There’s a couple pages on threads. I used threads when I was first experimenting with pulse-width modulation, and again later to create a flickering candle using an RGB LED. If you want to do more than one thing at a time, or just one intense thing without locking up your application, this is worth a read.
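The thread idea can be sketched without any GPIO hardware. Here the "flicker" just records simulated brightness levels in the background while the main program stays responsive (the names and values are my own, not the book's):

```python
import random
import threading
import time

def flicker(stop_event, updates):
    # Vary a "brightness" value until asked to stop,
    # like the flickering-candle RGB LED project.
    while not stop_event.is_set():
        updates.append(random.uniform(0.2, 1.0))  # stand-in for a PWM duty cycle
        time.sleep(0.01)

stop = threading.Event()
levels = []
t = threading.Thread(target=flicker, args=(stop, levels))
t.start()        # the flicker runs in the background
time.sleep(0.1)  # meanwhile the main thread is free to do other work
stop.set()
t.join()
print(len(levels) > 0)  # the background thread did some work
```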
Other important take-aways:
- There are some excellent “rules of thumb” for using the GPIO pins, which I’ll paraphrase here. - Do not put more than 3.3 V on any GPIO pin used as input.
- Do not draw more than 16mA per pin set to output.
- Do not draw more than 50mA total for all pins on older 26-pin Pi, or more than 100mA total on newer 40-pin Pi.
- Do not power the Pi with more than 5V.
- Do not draw more than 250mA from 5V supply pins.
- In other words, the GPIO pins will light an LED but larger devices should use an external power source.
- Care must be taken when tying a device that outputs over 3.3v, such as an Arduino that outputs 5v, to the Raspberry Pi. To safely connect the two devices requires a converter, like this logic level converter from SparkFun, or a HAT that has it built in like the author’s own RasPi Robot Board that allows for running two motors off a battery pack.
- Pi HATs should (but some don’t) contain an EEPROM chip, which Raspbian will use at some point in the future to auto-configure itself when a HAT is connected (such as downloading required software).
My original idea was to get a few ideas from the book for projects I’d like to try out, or at least some concepts to demonstrate, and here’s what I came up with:
- Process images using SimpleCV and Haar cascade files to detect features.
- Extract text from an image using tesseract OCR.
- Turn the Raspberry Pi into an FM transmitter!
- Setup a Python web server with Bottle. It seems light-weight like Sinatra… maybe I’ll write up a comparison at some point.
- Use the MCPI API to control the Pi edition of Minecraft that comes with Raspbian.
- Experiment with the serial port (Rx and Tx pins) using PySerial. Not many details in the book, but maybe PySerial has good docs.
- Create a GUI using TKinter.
- [Done] Demonstrate the use of charlieplexing for lighting multiple LEDs with as few connections as possible. With this design, n pins can handle n2 – n LEDs, so 4 GPIO pins can support 12 LEDs… and 10 pins can support 90!
- Track conditions in the indoor garden (grow-light box) I’m planning to make with the kids, by using a DS18b20 temperature sensor or a Sense HAT.
- Create a fruit keyboard using the Capacitive Touch HAT or Explorer HAT Pro. 🍎🍐🍊🍋
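The charlieplexing arithmetic quoted above is easy to check (a quick sketch, not from the book):

```python
def max_charlieplexed_leds(pins: int) -> int:
    """LEDs addressable by charlieplexing n GPIO pins: n^2 - n."""
    return pins * pins - pins

# The figures quoted above:
print(max_charlieplexed_leds(4))   # 12
print(max_charlieplexed_leds(10))  # 90
```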
Final thoughts:
Raspberry Pi Cookbook is a reference book more than anything else. It gets you started, describes some of the detail, and often includes references for more reading. The layout lets you quickly scan the table of contents for the material you’re interested in and just skip to that section.
The author must’ve spent a fair amount of time messing around with the Pi before writing the book. His advice is solid, and he seems to have a good grasp on everything he covers. He uploaded his code samples to GitHub too, which is a nice bonus, although they’ll probably make more sense in the context of the book.
My recommendation? Grab a copy and keep it nearby for quick reference!
Have you read it? What’d you think.. good, bad.. like it, not like it? Did you get something out of it that I didn’t mention?
Share your thoughts below!
(The link above is an aff link, but I wouldn't post it if I wouldn't recommend it to a friend!)
https://grantwinney.com/cooking-with-simon-monk-raspberry-pi-cookbook/
|
Tony Docherty wrote: Welcome to the Ranch
I've edited your post by changing the 'javadoc' tags you used to 'code' tags so it displays properly on this forum.
Greg Charles wrote: Hi Mona, welcome to JavaRanch!
It would be helpful if you posted the warnings you are getting. They probably are related to a new(ish) concept in Java called generics. Try changing the class declaration to:
public class FractionList extends LinkedList<Fraction>
and see if that makes them go away.
Tony Docherty wrote: Your code contains IndexOutOfBoundException which should probably be IndexOutOfBoundsException
The warnings are due to you not using generics, see.
Note using generics is not compulsory (but is definitely recommended) and not using generics will not stop your code from running.
Carey Brown wrote: If you have

for (int i = 0; i < list.size(); i++)
{
    Object a = list.get(i);     // always works
    Object b = list.get(i + 1); // will fail at end of list
}
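A sketch of the fix Carey is hinting at: stop the loop one element early so list.get(i + 1) stays in bounds (the countPairs helper and the sample values are just for illustration):

```java
import java.util.LinkedList;
import java.util.List;

public class PairwiseDemo {
    // Visits each adjacent pair without running past the end of the list.
    static int countPairs(List<String> list) {
        int pairs = 0;
        for (int i = 0; i < list.size() - 1; i++) {
            String a = list.get(i);     // always works
            String b = list.get(i + 1); // safe now: i + 1 <= size() - 1
            if (a != null && b != null) pairs++;
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<String> list = new LinkedList<>();
        list.add("1/2");
        list.add("3/4");
        list.add("5/6");
        System.out.println(countPairs(list)); // 2
    }
}
```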
http://www.coderanch.com/t/600801/java/java/solving-Linked-List
|
namnium1125
Thank you for look.
I'm Japanese, so I might write some incorrect English; please excuse me.
>>> a = 1
>>> b = 2
>>> print(a, b)
When I run this code, I thought I would get the result like this:
1 2
but in fact, I got this result:
(1, 2)
Then I thought I got it simply because the version of Python is 2.7, and the print statement outputs the tuple (1, 2).
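For what it's worth: in Python 2, print is a statement, so print(a, b) is the print statement applied to the tuple (a, b), which is exactly why (1, 2) appears. A common workaround that behaves the same under both versions (a sketch, assuming you cannot change the interpreter itself):

```python
# Turn print into a real function on Python 2 as well
# (this import is a no-op on Python 3).
from __future__ import print_function

a = 1
b = 2
print(a, b)  # prints: 1 2 under both Python 2 and Python 3
```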
And I tried this code:
import sys
print(sys.version)
its result:
2.7.12 (default, Nov 20 2016, 12:12:11) [GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)]
My thought was correct. but...
The version in Pythonista3 is 3, but why in the world is the version of Python in Editorial 2.7?!
And I want to use Python 3.x in Editorial.
If you know somethings about this, could you tell me?
Thank you.
https://forum.omz-software.com/user/namnium1125
|
Jun 19, 2010 03:13 PM|Jags_464|LINK
How group by and sum values in DataTable??
I have a DataTable that looks like this:
Id Value
1 4.0
1 5.0
3 1.0
2 2.0
3 3.0
I want to end up with (probably a new DataTable) that contains the sum of
values grouped by Id like this:
Id SumOfValue
1 9.0
2 2.0
3 4.0
pls help me.......
Thanks!
datatable dataset
Jun 19, 2010 04:06 PM|PeteNet|LINK
here's how you would do it with a query and a bit of Reflection:
using System.Data;
using System.Reflection;
protected void Page_Load(object sender, EventArgs e)
{
    DataTable dt = new DataTable();
    dt.Columns.Add(new DataColumn("ID", typeof(Int32)));
    dt.Columns.Add(new DataColumn("Value", typeof(Decimal)));
    dt.Rows.Add(1, 4.0M);
    dt.Rows.Add(1, 5.0M);
    dt.Rows.Add(3, 1.0M);
    dt.Rows.Add(2, 2.0M);
    dt.Rows.Add(3, 3.0M);

    var query = from r in dt.AsEnumerable()
                group r by r.Field<int>(0) into groupedTable
                select new
                {
                    id = groupedTable.Key,
                    sumOfValue = groupedTable.Sum(s => s.Field<decimal>("Value"))
                };

    DataTable newDt = ConvertToDataTable(query);
}

public DataTable ConvertToDataTable<T>(IEnumerable<T> varlist) { /* body truncated in the original post */ }
Jun 19, 2010 04:40 PM | PeteNet
MarkRae: The Compute method of the DataTable object does all of the above in one line of code...
Jun 19, 2010 05:29 PM | PeteNet
MarkRae: The Compute method of the DataTable object does all of the above in one line of code...
Regards,
Peter
My mistake - apologies.
Mark, accepted.
Thank you.
Jun 19, 2010 05:50 PM | PeteNet
MarkRae: This is as close as I could get:

DataTable objDT1 = new DataTable();
objDT1.Columns.Add("ID", typeof(int));
objDT1.Columns.Add("Value", typeof(decimal));
.....etc
good attempt. :)
..that was the kind of solution I was offering in the first place.
...and then we can take it further per this requirement:
Jags_464I want to end up with (probably a new DataTable)
and convert the sequence into a DataTable.
the public function I used:
public DataTable ConvertToDataTable<T>(IEnumerable<T> varlist)
is normally used as an Extension method which effectively adds it as a method on the object.
...and finally, it does give Jags_464 exactly what he requires.
Jun 23, 2010 07:57 AM | Jags_464
Thanks for your replies... I got the solution to the problem. I first sorted the DataTable based on the Id, then copied the rows having the same Id into a DataRow[] by running Dttable.Select("Query"), and then ran a foreach loop through the DataRow[] to compute the sum...
Aug 18, 2010 01:04 AM | snteran
Hello, I have something very similar I am trying to accomplish. I found this post and have been trying to use some of the reference information. I am very new to C# and programming.
I have a DataTable dt that holds two columns with several rows.
Branch Invoices
101 50
102 34
103 45
104 50
105 37
So now that I have this dt, I would like to sum branches 101 and 102 and then I want to sum 103-105 and then I need to create a grand total of the two sums.
Not sure what other information is needed, I do apologize if I did not give enough requirements.
Thanks
10 replies
Last post Aug 18, 2010 01:04 AM by snteran
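For reference, the operation discussed throughout this thread — grouping rows by a key column and summing another — can be expressed concisely with Java streams. The thread's own code is C#/LINQ; the sketch below is only an analogous illustration, and the class and method names are invented:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class GroupSum {
    // One (Id, Value) row of the example table.
    static class Row {
        final int id;
        final double value;
        Row(int id, double value) { this.id = id; this.value = value; }
    }

    // Group the rows by Id and sum the Value column, the same operation
    // the LINQ query and DataTable.Compute perform above.
    static Map<Integer, Double> sumById(List<Row> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                r -> r.id,
                TreeMap::new,                          // sorted by Id, like the expected output
                Collectors.summingDouble(r -> r.value)));
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
                new Row(1, 4.0), new Row(1, 5.0), new Row(3, 1.0),
                new Row(2, 2.0), new Row(3, 3.0));
        System.out.println(sumById(rows)); // {1=9.0, 2=2.0, 3=4.0}
    }
}
```

The `TreeMap::new` map factory is optional; it is used here only so the output appears in Id order, matching the table in the original question.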
Source: http://forums.asp.net/t/1570562.aspx
Problem Statement
You are given an array of integers. Your task is to find the maximum-sum subsequence of the array such that the numbers in the subsequence are in increasing order. A subsequence is simply the sequence obtained by removing zero or more elements from the initial array.
Example
arr[] = {2,4,5,10,1,12}
33
Explanation: Sum of {2, 4, 5, 10, 12} ⇒ 33. We have taken the whole initial input array except the element at index 4(0-based indexing) to satisfy our sorted condition. This subsequence gives us the best result. Any other subsequence will result in a sum less than the current sum.
arr[] = {3,5,7,1,20,4,12}
35
Explanation: Sum of {3, 5, 7, 20} ⇒ 35. Here, we have removed 1 and 4 which makes the rest of the array sorted. Then the remaining elements make maximum sum increasing subsequence.
Algorithm for Maximum Sum Increasing Subsequence
1. Declare an array, say maxSumIS, of the same size as the input array.
2. Set the output to 0.
3. Copy each element of the input array into maxSumIS.
4. Traverse the array from i = 1 to i < n (the length of the array). In an inner loop from j = 0 to j < i, check if arr[i] is greater than arr[j] and maxSumIS[i] is less than maxSumIS[j] + arr[i]; if so, update maxSumIS[i] = maxSumIS[j] + arr[i].
5. Traverse maxSumIS, find the maximum of all its elements, and return that value.
Explanation
We are given an array of integers that may or may not be sorted, and we want the increasing subsequence with the maximum sum. For this we create an auxiliary array of the same size as the input and set the output to 0; this output value will hold the maximum over all elements at the end.
We first traverse the array to copy the input values into the auxiliary array maxSumIS[], which is then updated whenever our condition is satisfied. The outer loop starts at i = 1, and the inner loop runs j from 0 to i - 1. For each pair we check whether arr[i] is greater than arr[j] and whether maxSumIS[j] + arr[i] exceeds the current maxSumIS[i]; if both conditions hold, we update maxSumIS[i] to maxSumIS[j] + arr[i].
We compare elements pairwise like this because we want the maximum-sum subsequence in increasing order only.
After this, we find the maximum of all the values stored in maxSumIS, either by traversing the array or by using any max function, and return that value.
Code for Maximum Sum Increasing Subsequence
C++ Code
#include <iostream>
#include <vector>
using namespace std;

int getMaximumSumIS(int arr[], int n)
{
    int i, j, output = 0;
    // maxSumIS[i] starts as arr[i]: the subsequence consisting of arr[i] alone
    vector<int> maxSumIS(arr, arr + n);

    // Extend every increasing pair (j, i) if it improves the sum ending at i
    for (i = 1; i < n; i++)
        for (j = 0; j < i; j++)
            if (arr[i] > arr[j] && maxSumIS[i] < maxSumIS[j] + arr[i])
                maxSumIS[i] = maxSumIS[j] + arr[i];

    // The answer is the maximum over all ending positions
    for (i = 0; i < n; i++)
        if (output < maxSumIS[i])
            output = maxSumIS[i];

    return output;
}

int main()
{
    int arr[] = {2, 4, 5, 10, 1, 12};
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "Total sum of Maximum Sum Increasing Subsequence : " << getMaximumSumIS(arr, n);
    return 0;
}
Total sum of Maximum Sum Increasing Subsequence : 33
Java Code
class MaximumSumIncreasingSubsequence {
    public static int getMaximumSumIS(int arr[], int n) {
        int i, j, output = 0;
        int maxSumIS[] = new int[n];

        // maxSumIS[i] starts as arr[i]: the subsequence consisting of arr[i] alone
        for (i = 0; i < n; i++)
            maxSumIS[i] = arr[i];

        // Extend every increasing pair (j, i) if it improves the sum ending at i
        for (i = 1; i < n; i++)
            for (j = 0; j < i; j++)
                if (arr[i] > arr[j] && maxSumIS[i] < maxSumIS[j] + arr[i])
                    maxSumIS[i] = maxSumIS[j] + arr[i];

        // The answer is the maximum over all ending positions
        for (i = 0; i < n; i++)
            if (output < maxSumIS[i])
                output = maxSumIS[i];

        return output;
    }

    public static void main(String args[]) {
        int arr[] = {2, 4, 5, 10, 1, 12};
        int n = arr.length;
        System.out.println("Total sum of Maximum Sum Increasing Subsequence : " + getMaximumSumIS(arr, n));
    }
}
Total sum of Maximum Sum Increasing Subsequence : 33
Complexity Analysis
Time Complexity
Here we have two nested loops: the outer loop runs from 0 to n-1 and the inner loop from 0 to i-1. Thus the algorithm has polynomial time complexity: O(n²), where "n" is the number of elements in the array.
Space Complexity
Here we use only a single 1D array of size n, so the space complexity is linear: O(n), where "n" is the number of elements in the array.
Source: https://www.tutorialcup.com/interview/dynamic-programming/maximum-sum-increasing-subsequence-2.htm
Here is an attempt in Python. I follow a simple approach: first dump the English dictionary into a trie, then take each signature word and try to find all matching words in the trie by expanding with doubled consonants, vowels, and the characters from the signature word.
Another Python version, more in line with the Programming Praxis solution. This one is much faster and much shorter.
In clojure:
@Paul – I don't think your second version compresses words correctly if they have two of the same consonant separated by vowels, e.g., "people" should compress to "ppl", but yours compresses to "pl".
@Mike. That is correct. I saw it already, but did not post a correction. Thanks for pointing out the word lists from 12dicts.
"""
Try to match all the words in encrypt text
"""

import re

ENCRYPT_TEXT = "Sm ppl cmprs txt msgs by rtnng only ths vwls tht bgn "
ENCRYPT_TEXT += "a wrd and by rplcng dbld ltrs wth sngl ltrs"
ENCRYPT_TEXT = ENCRYPT_TEXT.split()

def main():
    """
    The main function
    """
    word_dict = {}
    answer = {}
    # Build the dictionary by first letter
    for capital in range(ord('a'), ord('z') + 1):
        word_dict[chr(capital)] = []
    with open("/usr/share/dict/words", "r") as dict_file:
        for word in dict_file.readlines():
            word = word.strip()
            if len(word) > 0:
                word_dict[word[0].lower()].append(word)
    for capital in word_dict.keys():
        word_dict[capital] = "\n".join(word_dict[capital])
    # For each word, try to find it in dictionary
    for word in ENCRYPT_TEXT:
        reg_re = word[0] + "".join([".*" + c for c in word[1:]]) + ".*"
        reg_re = reg_re.lower()
        answer[word] = re.findall(reg_re, word_dict[reg_re[0]])
    print answer

main()
Source: http://programmingpraxis.com/2013/07/02/decoding-text-speak/
A singleton class in Java is a class that can have only a single object, i.e., only one instance of the class at a time. Even when we try to create another object from a singleton class, we simply get a reference to the earlier created instance, so any changes to the class are reflected in that single instance. A singleton class controls its own instantiation, which gives it the flexibility to change the whole process of creating its instance.
We need to make a few modifications to turn a class into a singleton.
- Make the class constructor private.
- Write a static method that returns the singleton object. Creation of the object can be delayed until it is first needed, a technique called lazy initialization. This improves performance because the cost of creating the object is paid only when it is actually required.
- Provide a global access point to get the single instance of the class, i.e., the object.
- Keep a private static attribute that points to the singleton object of the class.
To understand this class better, we need to understand the difference between a normal class and a singleton class.
Normal vs Singleton class
- For a normal class we create objects directly with a constructor; for a singleton class we use the getInstance() method.
- A singleton class can implement interfaces, whereas a utility-style static class typically does not.
- Singleton classes are therefore more flexible.
- We can extend a singleton class, but we cannot extend a static class.
To know more about the singleton class, Java has singleton design patterns. These patterns describe the best ways to create the object of such a class: a single class is responsible for creating its object and makes sure that only a single object gets created. Common approaches to the singleton pattern include lazy initialization, eager initialization, thread-safe singleton, and enum singleton.
Singleton design patterns in Java:
- The user can create only one instance of the class throughout the application; repeated instantiation of the class is restricted.
- A private constructor prevents instantiation of the class from other classes.
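The lazy-initialization approach mentioned above can be sketched as follows. This is one common thread-safe variant (double-checked locking); eager initialization, as in the program below, and enum singletons are equally valid:

```java
public class LazySingleton {
    // volatile ensures a half-constructed instance is never visible to other threads
    private static volatile LazySingleton instance;

    // Private constructor: no other class can instantiate this one.
    private LazySingleton() { }

    public static LazySingleton getInstance() {
        if (instance == null) {                      // first check, without locking
            synchronized (LazySingleton.class) {
                if (instance == null) {              // second check, under the lock
                    instance = new LazySingleton();  // created only on first real use
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // Both calls return the very same object.
        System.out.println(LazySingleton.getInstance() == LazySingleton.getInstance());
    }
}
```

The object is created only when getInstance() is first called, which is exactly the deferred-cost behavior lazy initialization is meant to provide.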
Java Program on Singleton Class
public class MySingleton {
    private static MySingleton Obj;

    static {
        Obj = new MySingleton();
    }

    private MySingleton() {
    }

    public static MySingleton getInstance() {
        return Obj;
    }

    public void testMe() {
        System.out.println("Welcome to Developer Helps !!");
    }

    public static void main(String a[]) {
        MySingleton ms = getInstance();
        ms.testMe();
    }
}
The output of the following program will be:
Welcome to Developer Helps !!
Can singleton class be inherited in Java?
As we know, we cannot inherit a static class in Java; a singleton class, however, can be inherited. It can have a base class of its own from which it inherits methods and functions, and such classes can be serialized as well. A class that inherits from a singleton will, if the proper patterns are followed, also be a singleton. If a user wants a singleton class not to be inherited in an application, the class can be marked final (C# uses the sealed keyword for the same purpose); once the class is final, no one can further inherit it.
Things you should know:
If a singleton class is serializable, we can serialize the singleton object and later deserialize it, but deserialization will not return the original singleton object.
To resolve this, we can override the readResolve() method. It is invoked just after the object is deserialized and can return the existing singleton object of the class instead of the freshly created copy.
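A minimal sketch of that readResolve() fix follows; the roundTrip() helper is only for demonstration and is not part of the pattern:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerialSingleton implements Serializable {
    private static final long serialVersionUID = 1L;
    private static final SerialSingleton INSTANCE = new SerialSingleton();

    private SerialSingleton() { }

    public static SerialSingleton getInstance() { return INSTANCE; }

    // Invoked by the serialization machinery right after deserialization;
    // returning INSTANCE discards the freshly created duplicate.
    private Object readResolve() {
        return INSTANCE;
    }

    // Serialize and then deserialize the singleton, for demonstration only.
    static SerialSingleton roundTrip() throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(INSTANCE);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (SerialSingleton) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // true: readResolve() preserved the singleton across the round trip
        System.out.println(roundTrip() == getInstance());
    }
}
```

Without readResolve(), the round trip would yield a second, distinct instance and the identity check would be false.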
Source: https://www.developerhelps.com/singleton-class-in-java/
Data replication protocol
Info
- Publication number: US7571215B2 (application Ser. No. 09/975,590)
- Authority: US
- Grant status: Grant
- Prior art keywords: slave, master, server, data
…/305,986, filed Jul. 16, 2001, entitled DATA REPLICATION PROTOCOL, incorporated herein by reference.
The following application is cross-referenced and incorporated herein by reference:
U.S. patent application Ser. No. 90/975,587, entitled "LAYERED ARCHITECTURE FOR DATA REPLICATION", inventors Dean Bernard Jacobs, Reto Kramer, and Ananthan Bala Srinivasan, filed on Oct. …
The invention relates generally to the transfer of data, and more specifically to a system and method for replicating data over a network.
There are several types of distributed processing systems. Generally, a distributed processing system includes a plurality of processing devices, such as two computers coupled through a communication medium. One type of distributed processing system is a client/server network. A client/server network includes at least two processing devices, typically a central server and a client. Additional clients may be coupled to the central server, there may be multiple servers, or the network may include only servers coupled through the communication medium.
In such a network environment, it is often desirable to send applications or information from the central server to a number of workstations and/or other servers. Often, this may involve separate installations on each workstation, or may involve separately pushing a new library of information from the central server to each individual workstation and/or server. These approaches can be time consuming and are an inefficient use of resources. The separate installation of applications on each workstation or server also introduces additional potential sources of error.
Ideally, the sending of information should be both reliable in the face of failures and scalable, so that the process makes efficient use of the network. Conventional solutions generally fail to achieve one or both of these goals. One simple approach is to have a master server individually contact each slave and transfer the data over a point-to-point link, such as a TCP/IP connection. This approach leads to inconsistent copies of the data if one or more slaves are temporarily unreachable, or if the slaves encounter an error in processing the update. At the other extreme are complex distributed agreement protocols, which require considerable cross-talk among the slaves to ensure that all copies of the data are consistent.
The present invention includes a method for replicating data from a master server to at least one slave or managed server, such as may be accomplished on a network. In the method, it may be determined whether the replication should be accomplished in a one or two phase method. If the replication is to be accomplished in a one phase method, a version number may be sent that corresponds to the current state of the data on the master server. This version number may be sent to every slave server on the network, or only a subset of slave servers. The slave servers receiving the version number may then request that a delta be sent from the master. The delta may contain data necessary to update the data on that slave to correspond to the current version number.
If the replication is to be accomplished in a two phase method, a packet of information may be sent from the master to each slave, or a subset of slaves. Those slaves may then respond to the master server whether they can commit the packet of information. If at least some of the slaves can commit the data, the master may signal to those slave that they should process the commit. After processing the commit, those slaves may update to the current version number. If any of the slaves are unable to process the commit, the commit may be aborted.
The present invention provides for the replication of data or other information, such as from a master server, or “administration” server (“Admin server”), to a collection of slave servers, or “managed” servers. This replication can occur over any appropriate network, such as a conventional local area network or ethernet. In one embodiment, a master server owns the original record of all data on the network, to which any updates are to be applied. A copy of the data, together with updates as they occur, can be transmitted to each slave server. One example application involves the distribution of configuration information from an Admin server to a collection of managed servers.
In one system in accordance with the present invention, it may be necessary for a service, such as a Data Replication Service (DRS), to distribute configuration and deployment information from an Admin Server to managed servers in the appropriate domain. Large data items can be distributed over point-to-point connections, such as Transmission Control Protocol (“TCP”), since a multicast protocol like User Datagram Protocol (“UDP”) does not have flow control, and can overwhelm the system. Remote Method Invocation (RMI), Hypertext Transfer Protocol (HTTP), or a similar protocol may be used for point-to-point connections.
Managed servers can also persistently cache data on local disks. Without such caching, an unacceptable amount of time may be required to transfer the necessary data. The ability of the managed servers to cache is important, as it increases the speed of startup by reducing the amount of startup data to be transferred. Caching can also allow startup and/or restart if the Admin Server is unreachable. Restart may be a more attractive option, and it may be the case that the Admin server directs a server to start. Caching, however, can provide the ability to start the domain without the Admin Server being available.
As shown in the domain structure 100 of
Updates to data on the Admin Server can be packaged as incremental deltas between versions. The deltas can contain configuration and/or other information to be changed. It may be preferable to update the configuration while the domain is running, as it may be undesirable to take the system offline. In one embodiment, the configuration changes happen dynamically, as they are pushed out by the Admin Server. Only the changes to the configuration are sent in the deltas, as it may be unnecessary, and unduly cumbersome, to send the full configuration each time.
A protocol in accordance with the present invention integrates two methods for the distribution of updates, although other appropriate methods may be used accordingly. These distribution methods may be referred to as a one-phase method and a two-phase method, and can provide a tradeoff between consistency and scalability. In a one-phase method, which may favor scalability, each slave can obtain and process updates at its own pace. Slaves can get updates from the master at different times, but can commit to the data as soon as it is received. A slave can encounter an error in processing an update, but in the one-phase method this does not prevent other slaves from processing the update.
In a two-phase method in accordance with the present invention, which may favor consistency, the distribution can be “atomic”, in that either all or none of the slaves successfully process the data. There can be separate phases, such as prepare and commit phases, which can allow for a possibility of abort. In the prepare phase, the master can determine whether each slave can take the update. If all slaves indicate that they can accept the update, the new data can be sent to the slaves to be committed in the commit phase. If at least one of the slave servers cannot take the update, the update can be aborted and there may not be a commit. In this case, the managed servers can be informed that they should roll back the prepare and nothing is changed. Such a protocol in accordance with the present invention is reliable, as a slave that is unreachable when an update is committed, in either method, eventually gets the update.
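The one-phase flow described above — a master advertising a version number via heartbeats, and an out-of-date slave pulling and immediately committing a delta — can be sketched as a toy model. Every class, method, and key name here is invented for illustration and does not come from the patent:

```java
import java.util.HashMap;
import java.util.Map;

public class OnePhaseDemo {
    static class Master {
        int version = 3;

        // Stand-in for the multicast heartbeat carrying the current version.
        int heartbeat() { return version; }

        // Delta from an older version up to the current one; here just a
        // map of changed keys rather than real configuration data.
        Map<String, String> deltaFrom(int slaveVersion) {
            Map<String, String> delta = new HashMap<>();
            if (slaveVersion < version) delta.put("config/threads", "16");
            return delta;
        }
    }

    static class Slave {
        int version = 1;
        Map<String, String> data = new HashMap<>();

        // On each heartbeat, pull and commit a delta only if we are behind;
        // in the one-phase method the slave commits as soon as data arrives.
        void onHeartbeat(Master m) {
            int current = m.heartbeat();
            if (version < current) {
                data.putAll(m.deltaFrom(version));
                version = current;
            }
        }
    }

    public static void main(String[] args) {
        Master master = new Master();
        Slave slave = new Slave();
        slave.onHeartbeat(master);
        // The slave is now on the master's version and holds the delta.
        System.out.println(slave.version + " " + slave.data);
    }
}
```

A repeated heartbeat is a no-op once the slave is in sync, which is why periodic heartbeats are enough to bring temporarily unreachable slaves up to date eventually. The two-phase method would interpose explicit prepare and commit (or abort) steps before the data becomes visible.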
A system in accordance with the present invention can also ensure that a temporarily unavailable server eventually receives all updates. For example, a server may be temporarily isolated from the network, then come back into the network without restarting. Since the server is not restarting, it normally will not check for updates. The server coming back into the network can be accounted for by having the server check periodically for new updates, or by having a master server check periodically to see whether the servers have received the updates.
In one embodiment, a master server regularly sends multicast "heartbeats" to the slave servers. Since a multicast approach can be unreliable, it is possible for a slave to miss arbitrary sequences of heartbeats. For instance, a slave server might be temporarily disconnected from the network due to a network partitioning, or the slave server itself might be temporarily unavailable to the network, causing a heartbeat to be missed. Heartbeats can therefore contain a window of information about recent updates. Such information about previous updates may be used to reduce the amount of network traffic, as explained below.
There can be at least two layers within each master and each slave: a user layer and a system layer (or DRS layer). The user layer can correspond to the user of the data replication system. A DRS layer can correspond to the implementation of the data replication system itself. The interaction of these participants and layers is shown in
As shown in the startup diagram 200 of
registerMaster(DID, verNum, listener)
registerSlave(DID, verNum, listener)
where DID is an identifier taken from knowledge of well-known DIDs and refers to the object of interest, verNum is taken from the local persistent store as the user's current version number, and listener is an object that will handle upcalls from the DRS layer. The upcall can call a method on the listener object. The master can then begin to send heartbeats, or periodic deltas, with the current version number. A container layer 210 is shown, which can include containers adapted to take information from the slave user 204. Examples of possible containers include enterprise Java beans, web interfaces, and J2EE (Java 2 Platform, Enterprise Edition) applications. Other applications and/or components can plug into the container layer 210, such as an administration client 212. Examples of update messaging between the User and DRS layers are shown for the one phase method in
The master DRS layer begins multicasting heartbeats 408, containing the current version number of the data on the master, to the slave DRS layer 410. The slave DRS layer 410 requests the current version number 412 for the slave from the slave user layer 414. The slave user layer 414 then responds 416 to the slave DRS layer 410 with the slave version number. If the slave is in sync, or already is on the current version number, then no further requests may be made until the next update. If the slave is out-of-sync and the slave is in the scope of the update, the slave DRS layer 410 can request a delta 420 from the master DRS layer 406 in order to update the slave to the current version number of the data on the master. The master DRS layer 406 requests 422 that the master user layer 402 create a delta to update the slave. The master user layer 402 then sends the delta 424 to the master DRS layer 406, which forwards the delta 426 and the current version number of the master to the slave DRS layer 410, which sends the delta 426 to the slave user to be committed. The current version number is sent with the delta in case the master has updated since the heartbeat 408 was received by the slave.
The master DRS layer 406 can continue to periodically send a multicast heartbeat containing the version number 408 to the slave server(s). This allows any slave that was unavailable, or unable to receive and process a delta, to determine that it is not on the current version of the data and request a delta 420 at a later time, such as when the slave comes back into the system.
The master DRS layer 506 sends the new delta 508 to the slave DRS layer 510. The slave DRS layer 510 sends a prepare request 512 to the slave user layer 514 for the new delta. The slave user layer 514 then responds 516 to the slave DRS layer 510 whether or not the slave can process the new delta. The slave DRS layer forwards the response 518 to the master DRS layer 506. If the slave cannot process the request because it is out-of-sync, the master DRS layer 506 makes an upcall 520 to the master user layer 502 to create a delta that will bring the slave in sync to commit the delta. The master user layer 502 sends the syncing delta 522 to the master DRS layer, which forwards the syncing delta 524 to the slave DRS layer 510. If the slave is able to process the syncing delta, the slave DRS layer 510 will send a sync response 526 to the master DRS layer 506 that the slave can now process the new delta. If the slave is not able to process the syncing delta, the slave DRS layer 510 will send the appropriate sync response 526 to the master DRS layer 506. The master DRS layer 506 then heartbeats a commit or abort message 528 to the slave DRS layer 510, depending on whether or not the slave responded that it was able to process the new delta. If all slaves were able to prepare the delta, for example, the master can heartbeat a commit signal. Otherwise, the master can heartbeat an abort signal. The heartbeats also contain the scope of the update, such that a slave knows whether or not it should process the information contained in the heartbeat.
The slave DRS layer forwards this command 530 to the slave user layer 514, which then commits or aborts the update for the new delta. If the prepare phase was not completed within a timeout value set by the master user layer 502, the master DRS layer 506 can automatically heartbeat an abort 528 to all the slaves. This may occur, for example, when the master DRS layer 506 is unable to contact at least one of the slaves to determine whether that slave is able to process the commit. The timeout value can be set such that the master DRS layer 506 will try to contact the slave for a specified period of time before aborting the update.
For an update in a one-phase method, these heartbeats can cause each slave to request a delta starting from the slave's current version of the data. Such a process is shown in the flowchart of
For an update in a two-phase method, the master can begin with a prepare phase in which it pro-actively sends each slave a delta from the immediately-previous version. Such a process is shown in the flowchart of
A slave can be configured to immediately start and/or restart using cached data, without first getting the current version number from the master. As mentioned above, one protocol in accordance with the present invention allows slaves to persistently cache data on local disks. This caching decreases the time needed for system startup, and improves scalability by reducing the amount of data needing to be transferred. The protocol can improve reliability by allowing slaves to startup and/or restart if the master is unreachable, and may further allow updates to be packaged as incremental deltas between versions. If no cache data exists, the slave can wait for the master or can pull the data itself. If the slave has the cache, it may still not want to start out of sync. Startup time may be decreased if the slave knows to wait.
The protocol can be bilateral, in that a master or slave can take the initiative to transfer data, depending upon the circumstances. For example, a slave can pull a delta from the master during domain startup. When the slave determines it is on a different version than the delta is intended to update, the slave can request a delta from its current version to the current system version. A slave can also pull a delta during one-phase distribution. Here, the system can read the heartbeat, determine that it has missed the update, and request the appropriate delta.
A slave can also pull a delta when needed to recover from exceptional circumstances. Exceptional circumstances can exist, for example, when components of the system are out of sync. When a slave pulls a delta, the delta can be between arbitrary versions of the data. In other words, the delta can be between the current version of the slave and the current version of the system (or domain), no matter how many iterations apart those versions might be. In this embodiment, the availability of a heartbeat and the ability to receive deltas can provide synchronization of the system.
In addition to the ability of a slave to pull a delta, a master can have the ability to push a delta to a slave during two-phase distribution. In one embodiment, these deltas are always between successive versions of the data. This two-phase distribution method can minimize the likelihood of inconsistencies between participants. Slave users can process a prepare as far as possible without exposing the update to clients or making the update impossible to roll back. This can include such tasks as checking the servers for conflicts. If any of the slaves signals an error, such as by sending a “disk full” or “inconsistent configuration” message, the update can be uniformly rolled back.
It is still possible, however, that inconsistencies may arise. For instance, there may be errors in processing a commit, for reasons such as an inability to open a socket. Servers can also commit and expose the update at different times. Because the data cannot reach every managed server at exactly the same time, there can be some rippling effect. The use of multicasting can provide for a small time window, in an attempt to minimize the rippling effect. In one embodiment, a prepared slave will abort if it misses a commit, whether it missed the signal, the master crashed, etc.
A best-effort approach to multicasting can cause a slave server to miss a commit signal. If a master crashes part way through the commit phase, there may be no logging or means for recovery. There may be no way for the master to tell the remaining slaves that they need to commit. Upon abort some slaves may end up committing the data if the version is not properly rolled back. In one embodiment, the remaining slaves could get the update using one-phase distribution. This might happen, for example, when a managed server pulls a delta in response to a heartbeat received from an Admin server. This approach may maintain system scalability, which might be lost if the system tied down distribution in order to avoid any commit or version errors.
Each data item managed by the system can be structured to have a unique, long-lived domain identifier (DID) that is well-known across the domain. A data item can be a large, complex object made up of many components, each relevant to some subset of the servers in the domain. Because these objects can be the units of consistency, it may be desirable to have a few large objects, rather than several tiny objects. As an example, a single data item or object can represent all configuration information for a system, including code files such as a config.xml file or an application-EAR file. A given component in the data item can, for example, be relevant to an individual server as to the number of threads, can be relevant to a cluster as to the deployed services, or can be relevant to the entire domain regarding security certificates. A delta between two versions can consist of new values for some or all of these components. For example, the components may include all enterprise Java beans deployed on members of the domain. A delta may include changes to only a subset of these Java beans.
The “scope” of a delta can refer to the set of all servers with a relevant component in the delta. An Admin server in accordance with the present invention may be able to interpret a configuration change in order to determine the scope of the delta. The DRS system on the master may need to know the scope in order to send the data to the appropriate slaves. It might be a waste of time and resources to send every configuration update to every server, when a master may only need to touch a subset of servers in each update.
To control distribution, the master user can provide the scope of each update along with the delta between successive versions. A scope may be represented as a set of names, referring to servers and/or clusters, which may be taken from the same namespace within a domain. In one embodiment, the DRS uses a resolver module to map names to addresses. A cluster name can map to the set of addresses of all servers in that cluster. These addresses can be relative, such as to a virtual machine. The resolver can determine whether there is an intervening firewall, and return either an “inside” or “outside” address, relating to whether the server is “inside the firewall” as is known and used in the art. An Admin server or other server can initialize the corresponding resolver with configuration data.
Along with the unique, long-lived domain identifier (DID) for each managed data item, each version of a data item can also have a long-lived version number. Each version number can be unique to an update attempt, such that a server will not improperly update or fail to update due to confusion as to the proper version. Similarly, the version number for an aborted two-phase distribution may not be re-used. The master may be able to produce a delta between two arbitrary versions given just the version numbers. If the master cannot produce such a delta, a complete copy of the data or application may be provided.
It may be desirable to keep the data replication service as generic as possible. A few assumptions may therefore be imposed upon the users of the system. The system may rely on, for example, three primary assumptions:
- the system may include a way to increment a version number
- the system may persistently store the version number on the master as well as the slave
- the system may include a way to compare version numbers and determine equality
These assumptions may be provided by a user-level implementation of a DRS interface, such as an interface “VersionNumber.” Such an interface may allow a user to provide a specific notion and implementation of the version number abstraction, while ensuring that the system has access to the version number attributes. In Java, for example, a VersionNumber interface may be implemented as follows:
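The interface itself does not survive in this text. As a rough sketch, and assuming methods for incrementing and comparing versions (the method names below are invented for illustration; only the name VersionNumber comes from the text), it might look like:

```java
import java.io.Serializable;

// Hypothetical sketch of the VersionNumber abstraction described above.
// Extending Serializable reflects the requirement that version information
// be transmittable over the network from the master to the slaves.
interface VersionNumber extends Serializable {
    VersionNumber increment();                     // produce the next version
    boolean isNewerThan(VersionNumber other);      // compare two versions
    boolean equalsVersion(VersionNumber other);    // determine equality
}

// The "simplistic implementation" mentioned below: a large positive integer.
class LongVersionNumber implements VersionNumber {
    private final long value;

    LongVersionNumber(long value) { this.value = value; }

    public VersionNumber increment() { return new LongVersionNumber(value + 1); }

    public boolean isNewerThan(VersionNumber other) {
        return value > ((LongVersionNumber) other).value;
    }

    public boolean equalsVersion(VersionNumber other) {
        return value == ((LongVersionNumber) other).value;
    }
}
```

Any implementation along these lines would satisfy the three assumptions: it can be incremented, it is a plain value that can be persisted on master and slave, and it can be compared for order and equality.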
A simplistic implementation of this abstraction that a user could provide to the system would be a large, positive integer. The implementation may also ensure that the system can transmit version numbers via the network from the master to the slaves, referred to in the art as being “serializable.”
As with the abstraction above, it may be useful to abstract away the detailed content of a delta at the user level. The system may require no knowledge of the delta information structure, and in fact may not even be able to determine the structure. The implementation of the delta can also be serializable, ensuring that the system can transmit delta information via the network from the master to the slaves.
It may be desirable to have the master persistently store the copy of record for each data item, along with the appropriate DID and version number. Before beginning a two-phase distribution, the master can persistently store the proposed new version number to ensure that it is not reused in the event the master fails. A slave can persistently store the latest copy of each relevant data item along with its DID and version number. The slave can also be configured to forgo this caching, such that the slave has to fetch the data every time. This may not be desirable in all cases, but may be allowed in order to handle certain situations that may arise.
A system in accordance with the present invention may further include concurrence restrictions. For instance, certain operations may not be permitted during a two-phase distribution of an update for a given DID over a given scope. Such operations may include a one- or two-phase update, such as a modification of the membership of the scope on the same DID, over a scope with a non-empty intersection.
In at least one embodiment, the master DRS regularly multicasts heartbeats, or packets of information, to the slave DRS on each server in the domain. For each DID, a heartbeat may contain a window of information about the most recent update(s), including each update version number, the scope of the delta with respect to the previous version, and whether the update was committed or aborted. Information about the current version may always be included. Information about older versions can also be used to minimize the amount of traffic back to the master, and not for correctness or liveness.
With the inclusion of older version information in a delta, the slave can commit that portion of the update it was expecting upon the prepare, and ask for a new delta to handle more recent updates. Information about a given version can be included for at least some fixed, configurable number of heartbeats, although rapid-fire updates may cause the window to increase to an unacceptable size. In another embodiment, information about an older version can be discarded once a master determines that all slaves have received the update.
Multicast heartbeats may have several properties to be taken into consideration. These heartbeats can be asynchronous or “one-way”. As a result, by the time a slave responds to a heartbeat, the master may have advanced to a new state. Further, not all slaves may respond at exactly the same time. As such, a master can assume that a slave has no knowledge of the master's state, and can include everything the delta is intended to update. These heartbeats can also be unreliable, as a slave may miss arbitrary sequences of heartbeats. This can again lead to the inclusion of older version information in the heartbeats. In one embodiment, heartbeats are received by a slave in the order they were sent. For example, a slave may not commit version seven until it has committed version six. The server may wait until it receives six, or it may simply throw out six and commit seven. This ordering may eliminate the possibility for confusion that might be created by versions going backwards.
As mentioned above, the domains may also utilize clustering, as shown in
In the domain diagram 300 of
There can also be more than one domain. In this case, there can be nested domains or “syndicates.” Information can be spread to the domain masters by touching each domain master directly, as each domain master can have the ability to push information to the other domain masters. It may, however, be undesirable to multicast to domain masters.
In one-phase distribution, a master user can make a downcall in order to trigger the distribution of an update. Such a downcall can take the form of:
startOnePhase(DID, newVerNum, scope)
where DID is the ID of the data item or object that was updated, newVerNum is the new version number of the object, and scope is the scope to which the update applies. The master DRS may respond by advancing to the new version number, writing the new number to disk, and including the information in subsequent heartbeats.
When a slave DRS receives a heartbeat, it can determine whether it needs a pull by analyzing the window of information relating to recent updates of interest. If the slave's current version number is within the window and the slave is not in the scope of any of the subsequent committed updates, it can simply advance to the latest version number without pulling any data. This process can include the trivial case where the slave is up-to-date. Otherwise, the slave DRS may make a point-to-point call for a delta from the master DRS, or another similar request, which may take the form of:
createDelta(DID, curVerNum)
where curVerNum is the current number of the slave, which will be sent back to the domain master or cluster master. To handle this request, the master DRS may make an upcall, such as createDelta(curVerNum). This upcall may be made through the appropriate listener in order to obtain the delta and the new version number, and return them to the slave DRS. The new version number should be included, as it may have changed since the slave last received the heartbeat. The delta may only be up to the most recently committed update. Any ongoing two-phase updates may be handled through a separate mechanism. The slave DRS may then make an upcall to the slave user, such as commitOnePhase(newVerNum, delta) and then advance to the new version number.
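The pull-or-advance decision a slave makes on receiving a heartbeat can be sketched in plain Java; the class and method names here are invented for illustration, and the heartbeat window is simplified to a list of update records ordered oldest to newest:

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the slave-side decision on receiving a heartbeat.
class HeartbeatWindow {
    // One entry per recent update: its version, its scope, and whether it committed.
    static class UpdateRecord {
        final long version;
        final Set<String> scope;
        final boolean committed;
        UpdateRecord(long version, Set<String> scope, boolean committed) {
            this.version = version; this.scope = scope; this.committed = committed;
        }
    }

    // Returns true if the slave must pull a delta from the master,
    // false if it can simply advance to the latest version number.
    // The window list is assumed ordered oldest to newest.
    static boolean needsPull(long slaveVersion, String slaveName, List<UpdateRecord> window) {
        boolean inWindow = false;
        boolean touched = false;
        for (UpdateRecord u : window) {
            if (u.version == slaveVersion) {
                inWindow = true;                 // found our current version
            } else if (inWindow && u.version > slaveVersion
                       && u.committed && u.scope.contains(slaveName)) {
                touched = true;                  // a later committed update affects us
            }
        }
        return !inWindow || touched;
    }
}
```

If the slave's version has fallen out of the window entirely, or any later committed update includes it in scope, it pulls; otherwise it can advance without data transfer, covering the trivial up-to-date case.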
In order to trigger a two-phase update distribution, the master user can make a downcall, such as startTwoPhase(DID, oldVerNum, newVerNum, delta, scope, timeout), where DID is the ID of the data item or object to be updated, oldVerNum is the previous version number, newVerNum is the new version number (one step from the previous version number), delta is the delta between the successive versions to be pushed, scope is the scope of the update, and timeout is the maximum time-to-live for the job. Because the “prepare” and “commit” are synchronous, it may be desirable to set a specific time limit for the job. The previous version number may be included so that a server on a different version number will not take the delta.
The master DRS in one embodiment goes through all servers in the scope and makes a point-to-point call to each slave DRS, such as prepareTwoPhase(DID, oldVerNum, newVerNum, delta, timeout). The slave can then get the appropriate timeout value. Point-to-point protocol can be used where the delta is large, such as a delta that includes binary code. Smaller updates, which may for example include only minor configuration changes such as modifications of cache size, can be done using the one-phase method. This approach can be used because it may be more important that big changes like application additions get to the servers in a consistent fashion. The master can alternatively go to cluster masters, if they exist, and have the cluster masters make the call. Having the master proxy to the cluster masters can improve system scalability.
In one embodiment, each call to a slave or cluster master produces one of four responses, such as “Unreachable”, “OutOfSync”, “Nak”, and “Ack”, which are handled by the master DRS. If the response is “Unreachable”, the server in question cannot be reached and may be queued for retry. If the response is “OutOfSync”, the server may be queued for retry. In the meantime, the server will attempt to sync itself by using a pull from the master, so that it may receive the delta upon retry. If the response is “Nak”, or negative acknowledgment, the job is aborted. This response may be given when the server cannot accept the job. If the response is “Ack”, no action is taken.
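The master's handling of the four responses might be dispatched as follows; this is a hedged sketch, with the enum and method names invented for illustration:

```java
// Hypothetical sketch of how a master might dispatch on the four
// per-slave responses described above.
enum PrepareResponse { UNREACHABLE, OUT_OF_SYNC, NAK, ACK }

class ResponseHandler {
    // Returns the master's action for one slave's response:
    // "retry" (queue the slave for another attempt), "abort" (cancel
    // the whole job), or "none" (the slave is prepared).
    static String handle(PrepareResponse r) {
        switch (r) {
            case UNREACHABLE: return "retry";  // server could not be reached
            case OUT_OF_SYNC: return "retry";  // slave pulls a delta, then retry can succeed
            case NAK:         return "abort";  // slave cannot accept the job
            case ACK:         return "none";   // prepared; nothing further needed
            default:          throw new IllegalStateException();
        }
    }
}
```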
In order to prepare the slaves, a master DRS can call a method such as prepareTwoPhase. Upon receiving a “prepare” request from the master DRS, the slave DRS can first check whether its current version number equals the old version number to be updated. If not, the slave can return an “OutOfSync” response. The slave can then pull a delta from the master DRS as if it had just received a heartbeat. Eventually, the master DRS can retry the prepareTwoPhase. This approach may be simpler than having the master push the delta, but may require careful configuration of the master. The configuring of the master may be needed, as waiting too long for a response can cause the job to time out. Further, not waiting long enough can lead to additional requests getting an “OutOfSync” response. It may be preferable to trigger the retry upon completion of the pull request from the slave.
If the slave is in sync, the slave can make an upcall to the client layer on the slave side, as deep into the server as possible, such as prepareTwoPhase(newVerNum, delta). The resulting “Ack” or “Nak” that is returned can then be sent to the master DRS. If the response was an “Ack”, the slave can go into a special prepared state. If the response was a “Nak”, the slave can flush any record of the update. If it were to be later committed for some reason, the slave can obtain it as a one-phase distribution, which may then fail.
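The slave-side prepare logic above can be condensed into a sketch; clientAck stands in for the upcall into the client layer, and all names are invented for illustration:

```java
// Hypothetical sketch of the slave's handling of a prepare request.
class SlavePrepare {
    long currentVersion;
    boolean prepared = false;

    SlavePrepare(long currentVersion) { this.currentVersion = currentVersion; }

    // clientAck stands in for the result of the upcall
    // prepareTwoPhase(newVerNum, delta) into the client layer:
    // true means "Ack", false means "Nak".
    String prepareTwoPhase(long oldVerNum, long newVerNum, boolean clientAck) {
        if (currentVersion != oldVerNum) {
            return "OutOfSync";   // slave will pull a delta; master retries later
        }
        if (clientAck) {
            prepared = true;      // enter the special prepared state
            return "Ack";
        }
        return "Nak";             // flush any record of the update
    }
}
```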
If the master DRS manages to collect an “Ack” from every server within the timeout period, it can make a commit upcall, such as twoPhaseSucceeded(newVerNum), and advance to the new version number. If the master DRS receives a “Nak” from any server, or if the timeout period expires, the master DRS can make an abort upcall, such as twoPhaseFailed(newVerNum, reason), and leave the version number unchanged. Here, reason is an exception, containing a roll-up of any “Nak” responses. In both cases, the abort/commit information can be included in subsequent heartbeats.
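The commit-or-abort decision can be sketched as follows, assuming it is evaluated once the timeout period expires or all responses have arrived; the names are invented for illustration:

```java
import java.util.Collection;
import java.util.List;

// Hypothetical sketch of the master's end-of-job decision described above:
// commit only if every slave in scope acked within the timeout.
class TwoPhaseOutcome {
    // responses: the per-slave results collected by the deadline
    // ("Ack" or "Nak"); expected: number of slaves in the scope.
    static String decide(Collection<String> responses, int expected) {
        if (responses.contains("Nak")) return "abort";   // any Nak aborts the job
        if (responses.size() < expected) return "abort"; // a missing Ack means timeout
        return "commit";                                 // all Acks collected in time
    }
}
```

In both outcomes the result would then be folded into subsequent heartbeats, as described above.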
At any time, the master DRS can make a cancel downcall, such as cancelTwoPhase(newVerNum). The master DRS can then handle this call by throwing an exception, if the job is not in progress, or acting as if an abort is to occur.
If a prepared slave DRS gets a heartbeat indicating the new version was committed, the slave DRS can make an upcall, such as commitTwoPhase(newVerNum), and advance to the new version number. If a prepared slave DRS instead gets a heartbeat indicating the new version was aborted, the slave can abort the job. The slave can also abort the job when the slave gets a heartbeat where the window has advanced beyond the new version, the slave gets a new prepareTwoPhase call on the same data item, or the slave times out the job. In such a case, the slave can make an upcall, such as abortTwoPhase(newVerNum), and leave the version number unchanged. This is one way to ensure the proper handling of situations such as where a master server fails after the slaves were prepared but before the slaves commit.
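A prepared slave's reaction to subsequent heartbeats might be sketched as follows; the names, and the reduction of heartbeat content to a status string and a window boundary, are invented for illustration:

```java
// Hypothetical sketch of a prepared slave reacting to heartbeat
// information about the pending version.
class PreparedSlave {
    long currentVersion;
    long pendingVersion;

    PreparedSlave(long currentVersion, long pendingVersion) {
        this.currentVersion = currentVersion;
        this.pendingVersion = pendingVersion;
    }

    // status: "committed" or "aborted" for pendingVersion, as reported in a
    // heartbeat; windowStart: the oldest version still in the window.
    String onHeartbeat(String status, long windowStart) {
        if (status.equals("committed")) {
            currentVersion = pendingVersion;   // commitTwoPhase upcall, then advance
            return "commit";
        }
        if (status.equals("aborted") || windowStart > pendingVersion) {
            return "abort";                    // abortTwoPhase upcall, version unchanged
        }
        return "wait";                         // keep holding the prepared state
    }
}
```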
https://patents.google.com/patent/US7571215B2/en
A time-lapse of coding the game; the whole thing took around 40 minutes.
This tutorial will use Java to show you how to use the LibGDX framework to get a working game on your android phone and your computer. The great thing about LibGDX is that you code your game and can then export to a variety of platforms, namely PC (Mac, Linux, and Windows), Android, IOS, and HTML5. This tutorial will focus on Android but if you want to deploy to iOS you can code the game following the tutorial, then follow these instructions to deploy to iOS.
I will show you how to get the basic game working on your phone with touch controls. After making this Instructable I also added more features like gesture controls and released it to the Google Play Store, which you can see here.
This is by no means intended to be a straight-up Java tutorial, and having some experience with Java will help you greatly here. There are already plenty of excellent Java tutorials online, so do one of those before following this. I do believe that 'doing' is one of the best ways of learning, and putting your Java skills in the context of a real thing can really help you learn, understand, and remember key concepts.
As always, comment questions or message me and I'll do my best to help.
Oh and feel free to offer improvements or tips for my code, I'm by no means an expert and am always willing to learn. And do let me know if I've made any mistakes in the included text files/screenshots/instructions.
Let's jump right in.
Step 1: Your Programming Environment and You
Install an IDE
It is definitely worthwhile using an IDE (Integrated Development Environment) to simplify and speed things up. I use IntelliJ IDEA, so the screenshots and IDE-specific instructions will relate to that. Eclipse is a classic IDE you could use instead. If you're new to coding, IntelliJ IDEA may make it easier to follow along, as my instructions are IDEA-specific.
Install the Android SDK
To export your project to Android you'll need the Android SDK which can be found here. Android Studio isn't needed for this tutorial and you can download the SDK separately (scroll down to the bottom), but installing Android Studio will install the SDK for you.
Once you have an IDE and Android Studio/SDK, proceed to the next step to install LibGDX.
Step 2: LibGDX
LibGDX is a cross-platform, open source, fairly well-documented framework for making games. Essentially, LibGDX will handle all the boring boilerplate stuff and free you up to make a game quickly. Without LibGDX you'd have to worry about how to draw an image on a screen at a particular location, along with a plethora of other boring stuff. Not only is LibGDX easy to use, it's easy to set up and install.
Follow this link to LibGDX's download page and download the 'setup app'. Create yourself a folder for all of the Android games you're going to make after this one, and within that create a folder for this project specifically called 'snake'. Put the gdx-setup.jar you downloaded in the outer folder so that it's easy to find for future projects. This jar file will download all the files you need and create a base project, upon which you'll build your game. Go ahead and open it up and you'll see the Project Setup screen.
Setup
- Name: You can choose your own name for this, I just called mine 'Snake'
- Package: A Java package is a way of organising classes, similar to a folder on a computer. By convention, Java programmers name their packages after some internet domain that they own (reversed), for example, com.myWebsite.someProject. Java packages need to be unique to avoid clashes with other Java programmers' work, and since domains are unique they are used to name packages. If you have a website use that, but if you don't, don't worry too much. For a small project, as long as you pick a package name that is probably going to be unique then you'll be fine. Try not to pick one that starts with 'com.' if you don't actually own that website. A package that could be fine would be 'personal.yourNameHere.projectname'. As soon as your indie games studio gets big though, you'll want to start using proper package names.
- Game Class: This is just the name of the main game class. I called mine 'Main', but you can call it what you like.
- Destination: This is where the base game files will be downloaded. Pick the folder that you made for the game.
- Android SDK: The location of the SDK you installed from the previous step. If you installed Android studio, you can find the location by launching Android Studio and going to Configure -> SDK Manager
- Sub Projects: These are the platforms you would like your game to run on. I just chose Desktop and Android, but feel free to tick more if you want to deploy to other platforms.
- Extensions: These are some very useful libraries that will definitely come in useful in your Indie Gaming career. For now, all we need is 'Tools', so check that and leave the others unchecked.
Before generating the project, click Advanced-> and make sure 'IDEA' is checked to automatically generate project files for IntelliJ IDEA (check 'Eclipse' if you are using that). Press Save, then Generate.
(Side note: You can get pretty cheap domains online if you want your own guaranteed package name)
Step 3: Into the IDE
So you have your IDE and SDK installed and the LibGDX files are downloaded and in the right place; it's time to get coding. There are a couple of things to sort out first. Launch your IDE and choose 'Open' or 'Open Project'. If you ticked the 'IDEA' or 'Eclipse' option in the previous step, your project files will have been generated. For IDEA, navigate to your game directory and double-click the 'yourGame.ipr' file.
- In the bottom right a box will pop up prompting you to 'Import Gradle Project'. Press this, then in the pop-up make sure to untick 'Create separate module per source set'. Leave everything else as it is and press OK. You'll see a loading bar at the bottom; give this some time to complete.
- Once this is done, close and re-open the project and you'll get a prompt to update Gradle to 3.3 (at the time of writing). Do this, and you'll see that an 'Android' configuration has been made for you (top of the screen to the left of the green 'Run' arrow).
- Double click the folder icon with the name of your game in the top left to reveal the file browser.
You'll notice a number of folders. The folders for 'android' and 'desktop' (and 'ios' etc if you kept those) contain files to launch the game on a specific platform. For now, we're much more interested in the 'core' folder.
- Double-click folders to open them. Open core->src and you'll see a 'Class' called 'Main' (or whatever you chose to name your Main class). The 'C' icon next to it tells you that it's a class file. This has been automatically generated for you by the LibGDX setup app and has come preloaded with a simple application.
- Double click the Main class in the Core folder and you'll see its code.
I'll explain what this code is doing in the next step, but for now, let's run the simple auto-generated application to see what it's doing. We need to add a 'configuration' for the Desktop Launcher so that we can test out our game on the computer. This will tell the IDE that we need to use the files in the 'Desktop' folder to launch the game when we press the green arrow.
- Click the drop-down box which currently says 'Android', then click 'Edit Configurations'.
- In the Configurations window that has appeared, click the 'plus' in the top left to add a configuration. Select 'Application'.
- Now we tell it that we want to use the Desktop files. In 'Main class' type "your.package.here.desktop.DesktopLauncher". If you open up the 'desktop' folder in the explorer on the left of the IDE you'll see that there's a 'DesktopLauncher' class in the src folder.
- In the 'Use classpath of module' box select 'Desktop'.
- Add a suitable name in the box right at the top, like 'Desktop'.
- Change the working directory to yourGameFolder/android/assets - this folder is where textures and sounds should go
- Click OK
Make sure that 'Desktop' is the currently selected Configuration using the box that used to say Android, then click 'Run'. If all went well, you should see a window appear with a horrible red background and a BadLogic logo. You're ready to start coding!
If you don't see this window, double check that you've followed the steps carefully, and feel free to ask for help by posting your error message below.
Step 4: The Game Loop
Code for this step is provided in the text file and screenshots. Remember to change the package name in the text file if you use it.
LibGDX handles all the hard stuff for you. You just have to write the code that gets executed every 'tick'. You have underlying logic for the game that holds stuff like the position of entities, their health, their name, anything you could think of that needs to be stored and updated. Then to turn that into a game, all you need is a bunch of images flying around the screen that represent the logic underneath.
The game loop:
LibGDX calls your 'render' method that you can see in the Main class. Your code in the render method takes some time to execute, images are drawn onto the screen, and when it's done, the engine calls render again. You can make something move by changing its position every time the render method is called. Let's do this now.
Add a member variable x to the Main class:
int x = 0;
In render, increment x:
x++;
Change the image's x position to use your x position:
batch.draw(img, x, 0);
Now every time render is called the image will move slightly to the right. This is because the x position of the image is increased every time the image is drawn.
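Stripped of LibGDX, the tick-based movement can be sketched in plain Java; here a simple counter stands in for the image's x position, and in the real game the engine, not your own loop, calls render():

```java
// Plain-Java sketch of the game loop idea above: the engine calls
// render() every tick; incrementing x each call drifts the image right.
class TickDemo {
    int x = 0;

    void render() {
        x++;                       // move one unit right per tick
        // batch.draw(img, x, 0);  // where the LibGDX draw call would go
    }
}
```

After 60 ticks (roughly one second at 60 frames per second), x has advanced 60 units.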
As for the other provided methods, create() is called when the game is started. This is where the images are loaded and the SpriteBatch is created. dispose() gets rid of unused resources. It's important to make sure to dispose of unused resources like images, fonts, and sounds when you are done with them. More details on that here if you're interested, but for this project, we won't have any assets to dispose of.
The SpriteBatch basically sends many images to the GPU at once to improve efficiency. More details here if you're interested, but don't worry we won't need to use the SpriteBatch for this game.
Step 5: Screens
We'll be using 'Screens' to implement the game. This is an interface provided for you by LibGDX, and like the Main class, each screen has a create, render, and dispose method, along with some others that you don't have to worry about yet.
You can think of screens as literal screens of a game. You might have one screen for level 1, another for level 2, another for the main menu, another for the options menu etc. For now, let's make a screen for the snake game, called 'GameScreen'.
In the 'core->src->your.package' folder, right-click on the package and click New->Class, or right-click on your Main class and click New->Class. Name this one GameScreen (remember in Java it's conventional to capitalise the first letter of class names). Make this class implement the Screen interface by typing 'implements Screen' after the class name, importing the Screen interface from com.badlogic.gdx.
public class GameScreen implements Screen
You should see a red line appear underneath it, saying that you need to implement methods in 'Screen'. Screen is an interface with abstract methods - methods that have no code in them - that you need to implement. Press Ctrl + I, then Enter to automatically add these methods to your GameScreen class. You should see 7 methods appear. The main one we'll add to is the 'render' method, which is called every game tick. It's just like the render method in your Main class, but it has a 'delta' float supplied to it.
Back to Main
In order to use Screens, we need to change something in the Main class. Main currently extends ApplicationAdapter, but we need it to extend Game. All this does is allow us to use Screens, as Game itself extends ApplicationAdapter. Change ApplicationAdapter to Game.
public class Main extends Game
We next need to add super.render() in the Main render() method, which needs to be done now that we're extending Game, not ApplicationAdapter. Make it the first thing in the render() method.
Next, we don't actually need to draw anything in the Main class, since this will all happen in the GameScreen class. Because of this, delete everything in the Main render() method except super.render():
public void render () { super.render(); }
Since we're not drawing anything, we don't need the default example image provided in the class. Remove the 'Texture img' state, and any other references to the image.
Finally, we need a way of telling Main which Screen we want to switch to when the game starts. This would normally be a loading screen or the Main Menu, but for now, let's switch to our GameScreen Screen.
Add this at the very end of the create() method.
this.setScreen(new GameScreen());
You can now remove any import statements that are not used (greyed out).
Step 6: Delta
The delta value supplied to your render method by the game engine is the time taken for the last frame to draw, or the time in between the last frame being drawn and this frame to start drawing. It is measured in seconds, and typical values of delta may be around 0.008s.
This is actually an incredibly useful, simple, smart way to make things move at the speed you want them to.
The Problem
Let's imagine that in your render method you tell an image to move one unit to the right. The computer processes your code, calling the render method in a loop over and over again. If you put this code on a slow computer, it will take longer to process the code, and your image will effectively move slower. If you put it on a fast computer, the image will move faster. But what if the image represents your character in the game? Shouldn't the character move at the same speed, regardless of the phone the game is running on?
The Solution
Introducing the hero, delta, here to save your game from the woes of different computers and phones. Since delta is the time between this frame and the last frame, you can scale the player's speed by delta to ensure that the player moves at the same speed everywhere.
Suppose you want your character to move 5 in-game units in 1 second. In your code, you give the player some state 'int velocity = 5;'.
Distance = speed x time, and we have the speed, and we have the time (delta). In the render method you calculate the distance the player should move:
float distanceToMove = velocity*delta; //example code, you don't need to add this
And you add this to the player's x:
x += distanceToMove; //also example code
Now, if delta increases, distanceToMove increases. This is what should happen: if the time between frames is longer, the player should cover more distance in that frame. By multiplying velocity by delta you ensure that the player's velocity is constant no matter what platform the game is running on.
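Here is a plain-Java sketch of that scaling, showing that the distance covered depends only on the total elapsed time, not on how that time is chopped into frames (DeltaDemo is an invented name for illustration):

```java
// Plain-Java sketch of delta scaling: the same velocity over the same
// elapsed time gives the same distance, whatever the frame rate.
class DeltaDemo {
    static float simulate(float velocity, float[] deltas) {
        float x = 0f;
        for (float delta : deltas) {
            x += velocity * delta;   // distance = speed * frame time
        }
        return x;
    }
}
```

Four fast frames of 0.01 s and two slow frames of 0.02 s both cover 0.04 s, so a player with velocity 5 ends up at the same position either way.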
Confession: We're not actually going to use this technique in the snake game. Instead we're using a grid-based system, with delta as a timer that tells us when to update the game logic, because that's a better fit for snake. Still, I think it's really important for you to know this technique, because you'll inevitably go on to build some platformer or top-down game, and that's where it's invaluable.
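That delta-as-a-timer idea can be sketched in plain Java; MoveTimer and MOVE_TIME are invented names, and in the real game the snake would advance one grid cell where steps is incremented:

```java
// Sketch of using delta as a timer for a grid-based game like snake:
// accumulate frame times and step the logic at a fixed interval.
class MoveTimer {
    static final float MOVE_TIME = 0.25f;  // snake steps 4 times per second
    float timer = 0f;
    int steps = 0;

    void render(float delta) {
        timer += delta;
        while (timer >= MOVE_TIME) {   // catch up if a frame ran long
            timer -= MOVE_TIME;
            steps++;                   // here the snake would advance one cell
        }
    }
}
```

The while loop (rather than a single if) means a long stutter still produces the right number of logic steps afterwards.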
Ok, let's whack out some quick code to show you delta values. In your render method in the GameScreen class simply put:
System.out.println(delta);
>>Epilepsy Warning<< - If you run the program now you'll see some flashing images because we haven't cleared the screen in the render method yet
Run the program and see the delta values stream into the console. Every new delta value is a new game tick. Start to feel the unlimited potential and opportunity to unleash your indie game creativity. You'll notice that the game window just shows some weird flashing image. This is because we haven't cleared the screen. Add this to the beginning of the render method to clear the screen:
Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Now you should just see a black screen when you run it. If you change the first 3 numbers from 0,0,0 to something else (they should be between 0 and 1) you can change the colour of the background. These are RGB values (the last value is alpha, for transparency), use an online colour picker tool to select some colours to try.
Step 7: Relax
Remember to frequently focus on something far away, stand up, stretch your arms and legs, and take a short break. You'll stay healthier and more focused for longer if you take a breather every now and then.
Aight that's enough, back to it.
Step 8: Camera
Last thing to do before we start actually coding the game I promise. We need to set up a camera and viewport so LibGDX knows we're making a 2D game and it knows how to project images up onto the screen. Don't worry if you don't understand this step, I'm not sure I properly understand it myself. Just add this code in the right places and everything will be fine.
In the GameScreen class, add some state for the width and height of the game screen. Since we're making a snake game, a portrait orientation ought to work. Remember state needs to be added inside the class declaration, and above any methods.
private int width = 600; private int height = 1000;
Next add the camera, importing Camera from badLogic:
private OrthographicCamera camera = new OrthographicCamera(width, height);
Add the viewport state - we haven't initialised it yet, also importing from badLogic:
private Viewport viewport;
We need to do something to the camera, then initialise the viewport using that edited camera. For this, we'll need a constructor, which is called whenever you build the class. It's a method like any other but it has no return type, and the name of the method is the name of the class.
public GameScreen() { camera.setToOrtho(false, width, height); viewport = new FitViewport(width, height, camera); viewport.apply(); }
The 'false' in setToOrtho means that y increases as objects move up the screen. Some prefer it the opposite way around. I don't.
A FitViewport ensures that everything is shown on screen. Not all phones will have a 10:6 aspect ratio (our 600 width and 1000 height). If a screen doesn't fit the aspect ratio, the FitViewport will scale the game down until it is all shown on screen, leaving black bars on the top and bottom. A FillViewport stretches to fill the screen, and might be useful as a HUD, but we don't need that for now.
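The scaling a FitViewport performs boils down to taking the smaller of the two screen-to-world ratios. A plain-Java sketch of that calculation (fitScale is an illustrative name, not a LibGDX method) shows why our 600x1000 world on a 1080x1920 phone gets bars:

```java
public class FitScale {
    // the uniform scale a fit-style viewport would apply: whichever axis
    // runs out of room first limits the whole image
    static double fitScale(double worldW, double worldH, double screenW, double screenH) {
        return Math.min(screenW / worldW, screenH / worldH);
    }
    public static void main(String[] args) {
        // 600x1000 world on a 1080x1920 screen: 1080/600 = 1.8, 1920/1000 = 1.92,
        // so scale is 1.8 and the scaled height (1800) leaves bars top and bottom
        System.out.println(fitScale(600, 1000, 1080, 1920)); // 1.8
    }
}
```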
In the render method add this at the beginning:
camera.update(); viewport.apply();
In the resize(int width, int height) method add:
viewport.update(width, height);
This updates the viewport when the game window is resized (for example if you drag the edges to make it bigger on a computer).
This is the basic camera-viewport setup you'll want for any game you make (with a different width and height if you want). Further reading on this can be found here.
Main game reference:
We want to be able to access the batch in our Main class from within the GameScreen class. We can do this by passing a reference to it in the constructor. Change the constructor to take a 'Main game' argument, and add private Main game state.
Add this state at the top:
private Main game;
Initialise it in the constructor:
public GameScreen(Main game) { this.game = game; //etc.... }
Finally go back to the main class and change it to pass a reference of itself to the new GameScreen:
this.setScreen(new GameScreen(this));
Step 9: Finally Actually Coding the Thing!
Time to put a snake on the screen. Let's create a new class called GameState that will hold our underlying logic for the snake game. Add two empty methods, one called update(float delta), and one called draw(). We'll create an instance of this in the GameScreen, and call the update and draw methods from within GameScreen.render.
Back in GameScreen add:
private GameState gameState = new GameState();
as state.
In GameScreen's render method add
gameState.update(delta);
near the top. This will call gameState's update method every tick and give it delta.
Add
gameState.draw();
after the screen clear, and remove the System.out.println while you're at it, we don't need it anymore.
The Board Outline
Let's create a square board for the game to take place on. A 30x30 board is a good choice. Add state to GameState for the board size:
private int boardSize = 30;
We'll need a ShapeRenderer to draw rectangles for us. For now, this is simpler than creating textures and loading them in. Instead, you specify the coordinates of a rectangle you want to be filled. Add this state to GameState:
private ShapeRenderer shapeRenderer = new ShapeRenderer();
We need to specify how far up our board is on the screen. To fill the screen, the board should be 600 units wide, and for a square board 600 units tall. Since we have 1000 units of height, let's put the board 400 units up to give as much space for the touch controls underneath it as possible. To make this adjustable, it's good to have it as state in GameState that can be changed:
private int yOffset = 400;
Now let's draw the board outline itself. To use shapeRenderer we'll need the reference to GameScreen's camera. We'll also need GameScreen's width and height state so we know how big the board should be. To do this, make the draw function take some more arguments:
public void draw(int width, int height, OrthographicCamera camera) {
importing OrthographicCamera from badlogic. Now we need to change GameScreen to pass it this information. In the render method change gameState.draw() to this:
gameState.draw(width, height, camera);
Back in GameState let's set up the ShapeRenderer in the draw method. Add this code in draw:
shapeRenderer.setProjectionMatrix(camera.combined); shapeRenderer.begin(ShapeRenderer.ShapeType.Filled); //rectangle drawing happens here shapeRenderer.end();
This sets up the ShapeRenderer to draw filled shapes in the correct position. To draw rectangles, we just need to add code in between the shapeRenderer.begin and shapeRenderer.end.
Do this now. Add this code in between the begin and end:
shapeRenderer.setColor(0,1,1,1); shapeRenderer.rect(300, 500, 100, 100);
This sets the colour to cyan (0,1,1) and draws a rectangle. If you run the game now you should see the rectangle appear.
The easiest way to draw the border is to draw two rectangles, one slightly bigger than the other:
Remove the old rectangle and add these:
shapeRenderer.setColor(1,1,1,1); shapeRenderer.rect(0, yOffset, width, width); shapeRenderer.setColor(0,0,0,1); shapeRenderer.rect(0+5, yOffset+5, width-5*2, width-5*2);
You should now see a board outline if the second rectangle colour is identical to the background clear colour you chose in the GameScreen class. I'm just using black for the retro appeal.
Step 10: The Snake Itself
We're going to be using a Queue to represent the Snake. Specifically, a queue of Bodypart objects, each holding an x and a y.
A queue is a data type where you can add things to the end, and take things off the front. It has a FIFO (first in first out) order, so the earlier you add something the earlier it gets taken out. Think of it as a queue in a shop. Customers come off the front of the queue to get served, and new customers get added to the end.
Using a queue as a snake gets rid of annoying issues like making the body follow the snake's path or trying to remember where the snake has been. To advance the snake, we just add a 'Bodypart' to the front and take one off the end. Easy.
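The advance trick can be tried out with plain Java's ArrayDeque before we wire it into LibGDX (the real game code will use LibGDX's own Queue class, but the idea is identical):

```java
import java.util.ArrayDeque;

public class SnakeQueue {
    // advance the snake one cell: push a new head on the front, pop the tail
    static void advance(ArrayDeque<int[]> body, int dx, int dy) {
        int[] head = body.peekFirst();
        body.addFirst(new int[]{head[0] + dx, head[1] + dy});
        body.removeLast();
    }
    public static void main(String[] args) {
        ArrayDeque<int[]> body = new ArrayDeque<>();
        body.addLast(new int[]{15, 15}); // head
        body.addLast(new int[]{15, 14});
        body.addLast(new int[]{15, 13}); // tail
        advance(body, 0, 1); // move up one cell
        // the old head becomes the second segment automatically
        System.out.println(body.peekFirst()[1] + " " + body.peekLast()[1]); // 16 14
    }
}
```

Notice that the body "follows" the head with no extra bookkeeping at all.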
Bodyparts
First, create the Bodypart class. Give it two pieces of state for x and y, and a constructor which takes three values, x,y, and boardSize.
private int x; private int y; public Bodypart(int x, int y, int boardSize) {}
By pressing Ctrl+N (Cmd N on Mac) you can automatically create a 'getter' for x and y so that we can access the coordinates safely from outside the class.
public int getX() { return x; } public int getY() { return y; }
This version of Snake is going to wrap around the edges, but it's easy to edit the code to make you die once you reach the edge. When we advance the snake we add a Bodypart in the direction the snake is going. If the Bodypart is outside the boardSize, we need to 'wrap around' using modular arithmetic.
Add this code in the constructor. The constructor here allows us to do input sanitisation, and makes it impossible to have a bodypart outside the board. The if statements are needed for negative x and y values.
this.x = x % boardSize; if (this.x<0) this.x += boardSize; this.y = y % boardSize; if (this.y<0) this.y += boardSize;
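You can sanity-check the wrap-around arithmetic in isolation. This little sketch (the helper name wrap is mine) mirrors the constructor's logic:

```java
public class Wrap {
    // clamp a coordinate onto a boardSize-wide torus, handling negatives;
    // Java's % can return negative values, hence the extra if
    static int wrap(int v, int boardSize) {
        int r = v % boardSize;
        if (r < 0) r += boardSize;
        return r;
    }
    public static void main(String[] args) {
        System.out.println(wrap(-1, 30)); // 29: stepping off the left edge
        System.out.println(wrap(30, 30)); // 0: stepping off the right edge
        System.out.println(wrap(5, 30));  // 5: in-range values pass through
    }
}
```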
The Queue
Now go back to GameState and add some Queue state.
private Queue<Bodypart> mBody = new Queue<>();
Make sure to import Queue from badlogic and not java.util.
We want to start the game with 3 Bodyparts so let's create a constructor for GameState and add them.
public GameState() { mBody.addLast(new Bodypart(15,15,boardSize)); //head mBody.addLast(new Bodypart(15,14,boardSize)); mBody.addLast(new Bodypart(15,13,boardSize)); //tail }
Now we need to create a way to draw all Bodyparts in the right place. We use our ShapeRenderer to draw the Snake.
Important: We need to put this code after the code to draw the board so that the snake draws on top of the board.
Remember to set the colour to white (or any visible colour) first:
float scaleSnake = width/boardSize; for (Bodypart bp : mBody) { shapeRenderer.rect(bp.getX()*scaleSnake, bp.getY()*scaleSnake + yOffset, scaleSnake, scaleSnake); }
The loop iterates through all Bodyparts and draws them, scaled by the scaleSnake factor which converts from the board size to the screen size.
If you run the game now you should see a 3-square-long snake on the board. Resize the window and the board should stay square.
Step 11: Make It Move!
This project's going to come together really fast now. You've done the work, time to reap the reward.
We're going to use the update method in GameState to add to the queue and take away from it. First, we need a way to limit the speed of the snake. If we add one and take one away every tick it's gonna fly across the screen faster than you can imagine. Let's make a simple timer using delta.
Add state to GameState for the timer:
private float mTimer = 0;
Now in the update method, we need to add delta to the timer every tick:
mTimer += delta;
After a preset time period, we want to reset the timer and advance the snake (update the game logic). Using a simple if statement:
if (mTimer > 0.13f) { mTimer = 0; advance(); }
And we need to create an advance() method that adds one to the snake and takes one off the end. This can be private since only GameState should call it:
private void advance() { mBody.addFirst(new Bodypart(mBody.first().getX(), mBody.first().getY()+1, boardSize)); mBody.removeLast(); }
Press Play and your snake should be moving! By changing the number in mTimer > 0.13f you can change the speed of the snake.
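If you want to convince yourself how the timer behaves, here's a plain-Java simulation of the accumulator pattern (countTicks is an illustrative name): at 60fps with a 0.13s period, the timer tips over every 8th frame, so the snake advances about 7 times per second.

```java
import java.util.Arrays;

public class TickTimer {
    // count how many advance() calls a stream of frame deltas would trigger
    static int countTicks(double[] deltas, double period) {
        double timer = 0;
        int ticks = 0;
        for (double d : deltas) {
            timer += d;
            if (timer > period) { timer = 0; ticks++; }
        }
        return ticks;
    }
    public static void main(String[] args) {
        double[] oneSecondAt60fps = new double[60];
        Arrays.fill(oneSecondAt60fps, 1.0 / 60.0);
        System.out.println(countTicks(oneSecondAt60fps, 0.13)); // 7 ticks per second
    }
}
```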
Next, we need a way to change direction.
Step 12: Controls
We need to create a controls class. This allows us to filter controls, for example not letting users press buttons they shouldn't, and makes it easy to rebind controls. For now, we'll use the keyboard but we'll add some touch buttons later, which will be easier with a separate controls class. Using a controls class we can make sure players don't turn back on themselves, i.e. they can't move down if they're moving up.
The Controls class will need state for the current direction and the next direction. The Snake only updates once every 0.13 seconds, which means that if the user presses two buttons in that time, we want the most recent press to become the next direction. We also need the current direction to know whether or not to allow a user to press a button.
Let's store the direction as an integer: 0,1,2,3 for up,right,down,left respectively (starting at up and going clockwise).
I really should have set some private final static int state here so that instead of, for example setting the direction to 0 for up, you set it to UP where UP = 0 like this:
private final static int UP = 0;
This would make the code much more readable, and less error-prone. I would recommend doing this for all four directions.
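With named constants, the 'no reversing' rule also gets neater: because the codes go clockwise, opposite directions always differ by 2 modulo 4. A small sketch of that check (isOpposite is my name for it, not anything in the tutorial's code):

```java
public class Direction {
    static final int UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3;
    // opposite directions are exactly two steps apart around the clock
    static boolean isOpposite(int a, int b) {
        return (a + 2) % 4 == b;
    }
    public static void main(String[] args) {
        System.out.println(isOpposite(UP, DOWN));   // true: not allowed
        System.out.println(isOpposite(UP, RIGHT));  // false: a legal turn
    }
}
```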
Controls
Add this state to Controls:
private int currentDirection; //0,1,2,3 U,R,D,L private int nextDirection;
We also need these methods:
public int getDirection() {}
public void update() {}
update() needs to be called every tick to poll the keyboard for keypresses. getDirection needs to be called every time advance() is called (every time the timer hits 0.13) so that the snake knows which direction it should be going in. Let's add the keypresses in the update() method:
In the if statements we also check the currentDirection to see if the desired keypress is allowed. For example, if the snake is moving up, it shouldn't be able to immediately move down or it would eat itself instantly. The only options really are left and right.
if(Gdx.input.isKeyPressed(Input.Keys.UP) && currentDirection != 2) nextDirection = 0; else if (Gdx.input.isKeyPressed(Input.Keys.RIGHT) && currentDirection != 3) nextDirection = 1; else if (Gdx.input.isKeyPressed(Input.Keys.DOWN) && currentDirection != 0) nextDirection = 2; else if (Gdx.input.isKeyPressed(Input.Keys.LEFT) && currentDirection != 1) nextDirection =3;
Next, add this in getDirection:
currentDirection = nextDirection; return nextDirection;
It returns the direction so that GameState knows where to put the next body part, and sets the currentDirection to the direction it just gave to GameState.
LibGDX also supports event-based input detection rather than poll-based, which you can read about here.
GameState
Go back to the GameState class so that we can make the snake change direction. We need to create an instance of the Controls class first. Add this state:
private Controls controls = new Controls();
We need to update the controls every tick so we add this to the update method:
controls.update();
Next, we'll change our advance() method. Change the code that's already in there to this:
private void advance() { int headX = mBody.first().getX(); int headY = mBody.first().getY(); switch(controls.getDirection()) { case 0: //up mBody.addFirst(new Bodypart(headX, headY+1, boardSize)); break; case 1: //right mBody.addFirst(new Bodypart(headX+1, headY, boardSize)); break; case 2: //down mBody.addFirst(new Bodypart(headX, headY-1, boardSize)); break; case 3: //left mBody.addFirst(new Bodypart(headX-1, headY, boardSize)); break; default://should never happen mBody.addFirst(new Bodypart(headX, headY+1, boardSize)); break; } mBody.removeLast(); }
Switch statements are like a series of if statements. Instead of writing if(1) else if (2) else if(3).. etc. we can just use a switch statement to clean things up a bit.
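Another way to see the switch is as a lookup from direction code to an (dx, dy) offset for the next head cell. This standalone sketch (offset is an illustrative name) shows the same mapping:

```java
public class Step {
    // map a direction code (0=up, 1=right, 2=down, 3=left) to the
    // x/y offset of the next head cell
    static int[] offset(int direction) {
        switch (direction) {
            case 0: return new int[]{0, 1};   // up
            case 1: return new int[]{1, 0};   // right
            case 2: return new int[]{0, -1};  // down
            case 3: return new int[]{-1, 0};  // left
            default: return new int[]{0, 1};  // should never happen
        }
    }
    public static void main(String[] args) {
        int[] d = offset(1);
        System.out.println(d[0] + " " + d[1]); // 1 0
    }
}
```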
Run the game and you should be able to move the snake around with the arrow keys. Time to add food.
Step 13: Food
The Food Class
Making food appear is quite simple. Create yourself a 'Food' class.
We need to be able to get the x and y location of the food so we can draw it, and we need to be able to randomise the food's position.
So let's add some state for x and y:
private int x; private int y;
And we need getters for these, press Ctrl+N (Cmd N):
public int getX() { return x; } public int getY() {return y; }
To randomise the position of the food, it needs to know the boardSize. Let's use LibGDX's own MathUtils random method, which simplifies generating a random integer between two points. MathUtils.random(int range) provides a random number between 0 and range inclusive: (import MathUtils from badlogic)
public void randomisePos(int boardSize) { x = MathUtils.random(boardSize-1); y = MathUtils.random(boardSize-1); }
Finally, we need a constructor to create the food and randomise its position. We need to give the constructor a board size so it knows where it can legally initially place the food:
public Food(int boardSize) { randomisePos(boardSize); }
And we are done. Let's head back to GameState and create the food object, and find a way to draw it.
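If you'd like to test the randomisation idea without LibGDX, java.util.Random behaves equivalently: nextInt(boardSize) yields 0 to boardSize-1, the same range as MathUtils.random(boardSize - 1). A hedged sketch (randomCell is my name for it):

```java
import java.util.Random;

public class FoodPos {
    // mimic randomisePos with the standard library: a cell index in 0..boardSize-1
    static int randomCell(Random rng, int boardSize) {
        return rng.nextInt(boardSize);
    }
    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so the run is repeatable
        for (int i = 0; i < 5; i++) System.out.print(randomCell(rng, 30) + " ");
        System.out.println();
    }
}
```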
GameState
Create and initialise state for the food with the other state (but underneath boardSize):
private Food mFood = new Food(boardSize);
In the draw method add this to draw the food. Make sure it comes after the board outline so the food gets drawn on top, and that the colour is set to white or some other visible colour. We can reuse the same scaleSnake factor to draw the food, so put this below the scaleSnake initialisation.
shapeRenderer.rect(mFood.getX() * scaleSnake, mFood.getY()*scaleSnake + yOffset, scaleSnake, scaleSnake);
If you run the game now you should see the food appear in a random place.
Time to add the ability to eat the food.
Eating Food
Every time the snake advances we need to check if its head is in the same location as the food. If it is, we add one to the snake's length and randomise the food position.
We need some state to hold the current length of the snake. Add this as state:
private int snakeLength = 3;
In advance, underneath the switch statement, we'll detect if the newly placed head is touching the food. If it is, we increment the snakeLength and randomise the food position:
if (mBody.first().getX() == mFood.getX() && mBody.first().getY() == mFood.getY()) { snakeLength++; mFood.randomisePos(boardSize); }
This is fine but the snake doesn't ever get any longer because we always remove the tail. We need to detect if the current length of the snake is less than the 'snakeLength' variable, and if it is, we do not remove one from the tail. In other words, we only remove the last element if the current size of the snake is equal to 'snakeLength'.
if (mBody.size - 1 == snakeLength) { mBody.removeLast(); }
You'll notice that we use 'mBody.size - 1'. This is because this statement comes after the switch statement that adds one to the head, so we need to account for that extra bodypart.
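The grow-by-delaying-removal trick can be verified standalone (placeholder Integer cells stand in for Bodyparts, and advance here is a simplified illustrative version):

```java
import java.util.ArrayDeque;

public class Growth {
    // one advance step: add a head, then drop the tail only if the snake
    // has already reached its target length
    static int advance(ArrayDeque<Integer> body, int snakeLength) {
        body.addFirst(0); // placeholder for the new head cell
        if (body.size() - 1 == snakeLength) {
            body.removeLast();
        }
        return body.size();
    }
    public static void main(String[] args) {
        ArrayDeque<Integer> body = new ArrayDeque<>();
        body.addLast(1); body.addLast(2); body.addLast(3);
        System.out.println(advance(body, 4)); // 4: tail kept, snake grew
        System.out.println(advance(body, 4)); // 4: tail removed, length holds
    }
}
```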
Press play and you should be able to eat food and get longer.
Step 14: Death
It's not much of a game yet. There's no challenge if there's no way to fail, so let's add a fail condition.
If the head touches another part of the body, the length of the snake should reset to 3. This code should be put in the advance() method, which is called every time the snake moves forward.
for (int i = 1; i<mBody.size; i++) { if (mBody.get(i).getX() == mBody.first().getX() && mBody.get(i).getY() == mBody.first().getY()) { snakeLength = 3; } }
Our mBody Queue uses zero-based indexing. We initialise int i as 1 so that it doesn't check if the xy of the head matches the xy of the head, as this would always be true.
We need to add this just before the if() statement that removes the last bodyPart from the Queue, so that if the player has 'died' we can remove all the extra bodyParts. First, change == to >=, because the actual body length can now be greater than the desired length (when the snake dies). Then change the if statement to a while statement, which keeps removing body parts until the snake is back down to length 3. Without the 'while', only one bodyPart would be removed per advance, which doesn't actually shorten the snake at all.
while (mBody.size - 1 >= snakeLength) { mBody.removeLast(); }
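Here's a standalone check of why the while loop matters: starting from a length-7 snake, a single call trims straight back down to the target length (trim is an illustrative name combining the add-head and remove-tail steps):

```java
import java.util.ArrayDeque;

public class Shrink {
    // after a death resets snakeLength, keep trimming until back to target
    static int trim(ArrayDeque<Integer> body, int snakeLength) {
        body.addFirst(0); // the advance step has just added a new head
        while (body.size() - 1 >= snakeLength) {
            body.removeLast();
        }
        return body.size();
    }
    public static void main(String[] args) {
        ArrayDeque<Integer> body = new ArrayDeque<>();
        for (int i = 1; i <= 7; i++) body.addLast(i); // a length-7 snake
        System.out.println(trim(body, 3)); // 3: one call shrinks it fully
    }
}
```

With an if instead of the while, the same call would only remove one segment, leaving the snake at length 7.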
Press play and test out the functioning snake game!
Congratulations on the start of your lucrative indie career. From here on out, we'll be adding some touch controls, colours, and then deploying the game to Android.
Step 15: Touch Controls
Let's just implement some simple touch controls using a DPad, which we'll draw onto the screen using rectangles.
Controls
We need to change our update() method in the Controls class to take a Viewport argument. The viewport is needed to convert screen coordinates of a touch to in-game coordinates. Add the Viewport argument:
public void update(Viewport viewport) {
Unfortunately, our viewport is in the GameScreen, but controls.update happens in GameState. An easy way to get around this would be to pass the viewport to our gameState.update(), then pass it down to controls.update().
GameScreen
Add viewport to the gameState.update arguments:
gameState.update(delta, viewport);
GameState
Add viewport to the update arguments, then pass it to controls.update():
public void update(float delta, Viewport viewport) { //update game logic mTimer += delta; controls.update(viewport); if (mTimer > 0.13f) { mTimer = 0; advance(); } }
Controls
Back to the controls class, let's add some touch detection. On a computer, you click the screen to send a touch, but on phones, you touch the screen.
First, add some Vector2 state to Controls:
private Vector2 touch = new Vector2();
This will store the two coordinates of the touch.
We use this code to detect touches:
if (Gdx.input.isTouched()) { touch.x = Gdx.input.getX(); touch.y = Gdx.input.getY(); viewport.unproject(touch); }
If the screen is touched, it sets the touch vector to the coordinates of the touch. It then 'unprojects' the touch using the viewport. This just converts the screen coordinates of the phone/computer to the in-game coordinates.
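The real unproject also accounts for the viewport's letterboxing and the camera position, but the core of it — flipping the y axis (screen y grows downward, world y grows upward) and scaling pixels to world units — can be sketched in plain Java under the simplifying assumption that the viewport exactly fills the window (toWorld is my name for this illustrative helper):

```java
public class Unproject {
    // simplified unproject: y flip plus pixel-to-world scaling;
    // assumes the game image fills the whole window, no black bars
    static double[] toWorld(double sx, double sy, double screenW, double screenH,
                            double worldW, double worldH) {
        double wx = sx * (worldW / screenW);
        double wy = (screenH - sy) * (worldH / screenH);
        return new double[]{wx, wy};
    }
    public static void main(String[] args) {
        // a touch at the bottom-left pixel of a 1080x1920 window maps to world (0, 0)
        double[] p = toWorld(0, 1920, 1080, 1920, 600, 1000);
        System.out.println(p[0] + " " + p[1]);
    }
}
```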
We have touch coordinates, we just need a way to make some buttons. Putting buttons in their own class is probably a good idea for larger projects, but for now, we can just put buttons in the Controls class. We'll describe the buttons as Rectangles, and we can use the Rectangle.contains method to check if the touch Vector is within the Rectangle.
We need four Rectangles, one for each direction. Add this as state:
private Rectangle upBox = new Rectangle(235, 265, 130, 130); private Rectangle downBox = new Rectangle(235, 5, 130, 130); private Rectangle leftBox = new Rectangle(105,135,130,130); private Rectangle rightBox = new Rectangle(365,135,130,130);
Now we just need to check if each of these boxes contains the touch. To do this we just add to our current keyboard-detection if statements. Watch your brackets. Make sure the OR statement is enclosed.
if ((Gdx.input.isKeyPressed(Input.Keys.UP) || upBox.contains(touch)) && currentDirection != 2) nextDirection = 0; else if ((Gdx.input.isKeyPressed(Input.Keys.RIGHT) || rightBox.contains(touch)) && currentDirection != 3) nextDirection = 1; else if ((Gdx.input.isKeyPressed(Input.Keys.DOWN) || downBox.contains(touch)) && currentDirection != 0) nextDirection = 2; else if ((Gdx.input.isKeyPressed(Input.Keys.LEFT) || leftBox.contains(touch)) && currentDirection != 1) nextDirection =3;
GameState
Finally, we need to draw the buttons using our ShapeRenderer. Add these after the colour has been changed from the background colour:
//buttons shapeRenderer.rect(235, 265, 130, 135); shapeRenderer.rect(235, 0, 130, 135); shapeRenderer.rect(105,135,130,130); shapeRenderer.rect(365,135,130,130);
(I know these aren't perfectly square but 3 doesn't go into 400 ok)
Now we have a functional game that would run on Android! Run it and make sure you can click the buttons to change direction. Now let's add some colour to the snake.
Step 16: Pretty Colours
Making the snake and controls a little more colourful is pretty simple. All we need to do is make the rgb value depend on the sine function.
GameState
Add this state to GameState:
private float colourCounter = 0;
Then in the update() method we increment the colour counter with delta:
colourCounter += delta;
Finally, in the draw() method we need to change the shapeRenderer colour to depend on the colourCounter:
shapeRenderer.setColor(MathUtils.sin(colourCounter),-MathUtils.sin(colourCounter),1,1);
Play around with this using sin and cos to come up with something you like. Be careful of all three rgb values being 0 at once, because then the snake might go invisible (if your background is black). You can use Math.abs() to turn a negative value into a positive one (because sin and cos go negative half the time). You can also add a scaled amount of delta to colourCounter if you want the colours to switch faster or slower, e.g. colourCounter += 2*delta. Get creative, and remember you can change the colour of the controls and board outline with this method too.
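One reason Math.abs helps here: it folds the negative half of the sine wave back into the valid colour range, so every channel stays within [0, 1] and the snake never goes fully invisible. A quick standalone check (channel is an illustrative name):

```java
public class ColourCycle {
    // abs(sin) keeps a colour channel inside [0, 1] for any counter value
    static float channel(float t) {
        return Math.abs((float) Math.sin(t));
    }
    public static void main(String[] args) {
        for (float t = 0; t < 6.3f; t += 0.7f) {
            System.out.printf("%.2f ", channel(t)); // always between 0 and 1
        }
        System.out.println();
    }
}
```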
Step 17: Android Settings
There are a couple of things that need to be changed to make the game run properly on a phone.
Screen Orientation:
LibGDX defaults to a landscape orientation but we want portrait. In the file explorer on the left open android->AndroidManifest.xml. Look for the android:screenOrientation line and change it to portrait:
android:screenOrientation="portrait"
Then open src->your.package->AndroidLauncher.java and add this line to hide the status bar and navigation buttons:
config.useImmersiveMode = true;
This should come in-between the initialisation of the config variable and the initialize() method.
Step 18: Test It Out
Before exporting to Android it's a good idea to test it out on a virtual phone. Luckily it's pretty easy to do that from inside IntelliJ itself.
At the top, change the configuration from 'Desktop' to 'Android'. The Android configuration should have automatically been created for you. If it hasn't, follow these steps to create it yourself:
Skip this part if you already have the Android Configuration:
- Click desktop -> edit configuration
- Click on the plus icon in the top left, then 'Android App'
- Name it 'android'
- Change the module to 'android'
The Android Emulator:
Press the run button while the Android config is selected and you should see a 'Select Deployment Target' window pop up. We need to create a new virtual device, so press that button. You can choose any of these phones you like but I used the Nexus 5. Click next, next, finish, leaving all the options as they are. Now select your chosen phone in the original window and press ok.
You should see a virtual phone appear. If it is landscape, you can use the rotation buttons on the right to change its orientation. It runs a working version of Android, and you can use the touchscreen with the mouse. Give it a short while to load, and your snake game will automatically be installed and launched. Check that the game works as it should, i.e. everything is visible, it's in the right orientation, the buttons work, etc.
If everything is fine, you're ready to export!
Step 19: Exporting
We'll do this from the command line, although there are some Eclipse-specific instructions here. Open up your command line and navigate to the root folder of your project - the folder that contains the android, build, core, desktop, and gradle folders. Now execute the following code to package the project:
./gradlew android:assembleRelease
This will create an unsigned apk, which will work but your phone will need to have APK source checking disabled to install it. Let's sign the app so you and others can install it more easily.
There's a lot of info here about signing. I'll walk you through doing it via the command line.
The simple way to do this is copy across the required files from the Android SDK to your build folder. Open two file explorers. In one of them navigate to yourProject->android->build->outputs->apk. You should see an unsigned apk in there from the previous code. In the other window navigate to the location of the Android SDK. For me this was in users->myUser->Library->android->sdk.
Then navigate to build-tools->27.0.2. In this folder there should be many executables, we're interested in apksigner and zipalign. Copy across the apksigner and zipalign execs, and also copy across the lib folder because the execs depend on it.
Go back to the command line and navigate to the apk folder (android/build/outputs/apk). First we need to align the zip using:
./zipalign -v -p 4 android-release-unsigned.apk android-unsigned-aligned.apk
Run this code and you'll see another apk appear, this one aligned.
To sign the apk we need to create a certificate. This makes sure that if you update the app, you are the real developer because you've signed it with your certificate. Because of this, you need to keep your certificate safe and private.
We can use keytool to generate a certificate. This comes with the Java SDK, so you should just be able to run it from the command line.
keytool -genkey -v -keystore chooseName.jks -keyalg RSA -keysize 2048 -validity 10000 -alias chooseName2
Run this code from the apk folder, replacing chooseName and chooseName2 with names of your choice. chooseName.jks is the file name, and chooseName2 is the alias.
Press enter and answer the questions it asks, setting a strong password. You should see yourKey.jks appear in the apk folder. Now we can sign the apk.
Run this in the command line, changing yourName to the name of the keystore you just generated:
./apksigner sign --ks yourName.jks --out release.apk android-unsigned-aligned.apk
Finally, check that the apk signed properly using:
./apksigner verify release.apk
If nothing happens when you press enter, you're done! You can send the apk to your phone and install it now. You'll need to allow installation from unknown sources in your phone's options menu. This was under the security settings for me.
Step 20: And We Are Done - What Next
The journey was long and arduous, but you now have a working Android app running on your phone. This is just the start. There's much more to learn about LibGDX and much more you can add to this game.
Things to Explore:
- Textures - using images instead of the ShapeRenderer
- AssetManager - helps when you have many textures and sounds to keep track of
- Texture Atlas - improves performance when you have many textures
- Sounds
- Menu screens
- Score and High Score - using 'preferences'
- Gesture Controls
Download my version of the Snake app on Android here to get an idea of things to add to make this app better.
And there's much more. As a starting point, here are two tutorials you should work through and some helpful pages:
And finally a link to the Official Documentation.
Get creative. Build something cool.
~Keir
Participated in the
Epilog Challenge 9
10 Discussions
2 months ago on Step 2
can you help me?
Question 12 months ago on Step 2
My libGDX somehow doesn't work.
Generating app in C:\Users\Buzivagyok\Desktop\snk
Executing 'C:\Users\Buzivagyok\Desktop\snk/gradlew.bat clean --no-daemon idea'
To honour the JVM settings for this build a new JVM will be forked. Please consider using the daemon:...
Daemon will be stopped at the end of the build stopping after processing
WARNING: Configuration 'compile' is obsolete and has been replaced with 'implementation'.
It will be removed at the end of 2018
:android:clean UP-TO-DATE
:core:clean UP-TO-DATE
:desktop:clean UP-TO-DATE
:ideaModule
:ideaProject
:ideaWorkspace
:idea
:android:ideaModule
:android:idea
:core:ideaModule
:core:idea
:desktop:ideaModule
:desktop:idea
Deprecated Gradle features were used in this build, making it incompatible with Gradle 5.0.
See...
BUILD SUCCESSFUL in 1m 25s
9 actionable tasks: 6 executed, 3 up-to-date
Done!
To import in Eclipse: File -> Import -> General -> Existing Projects into Workspace
To import to Intellij IDEA: File -> Open -> YourProject.ipr
Any idea?
Answer 11 months ago
Have you tried importing this into the IDE? If you set up the Desktop Application in further steps, does it run and show the badlogic logo?
Reply 11 months ago
No it doesn't. I get an error: WARNING: Configuration 'compile' is obsolete and has been replaced with 'implementation'.
Reply 11 months ago
Try this
You might need to try adding that code to the project's build.gradle
Reply 11 months ago
If I start the desktop application i get the: Could not execute build using Gradle distribution '' error.
Reply 11 months ago
Hmm, what gradle version are you using?
Reply 11 months ago
I am really a newbie. Is it how i should check it?
Reply 11 months ago
I am really a newbie. How do i check that?
1 year ago
The snake game is addictive and I can't believe you did this in 40 minutes!
On the event handlers page, we looked at various ways to make a control movieclip respond to mouseovers and mouse rollouts. In the bee game, we used the onRelease event of the controls to make movieclips start and stop, using the play and stop commands. Here we'll take a look at another use of the onRelease function to make a photo in a slider movieclip ease into place when the control is clicked. You can get oahumap_start.fla if you want to follow along -- it has all the jpgs and gifs used in the movie in the library and some objects like the mask and photo slider set up on stage.
When your code gets to be over about 30 lines or so, as it will be in this example, it becomes easier to keep the code in one file and the fla in another. So the first step here is to create a new Actionscript file (File, New, Actionscript file), where you'll type all your code. Save that file as oahumap_slider.as. Back in the fla, open the Actions panel, click frame 1, and type this into the Actions panel:
#include "oahumap_slider.as"
Now whenever you publish the fla, all of the code in oahumap_slider.as will be included in it, as though you had typed it where you put the #include. Notice that you cannot change the as file and cause any changes to happen at runtime; it is only for changing code that will be compiled into the final swf.
To begin with, change the registration point of one of the dots on stage to a center registration point, as described here. Remember that changing one of them will cause them all to change. After making that change, go back to scene 1 (main movie) and reposition the dots to their original position. Give each one an instance name (clockwise from top): p0, p1, p2, p3, p4. Add a layer in the movie and call it actions. Put this code into the function definition section of oahumap_slider.as (it will have all the same code sections we use in any Flash movie) to make the p0 (Sunset Beach) control respond to the mouse by growing and shrinking while the mouse is over it and returning to normal size when moved off:
// define pulsating rollover function for dots: make them grow to 130% of original size,
// then shrink back down to 100%, then grow to 130%, etc (forever til rollout)
function makePulse() {
    // variable only used by this function, so declare inside function
    var growing:Boolean = true;
    // in every frame (at the frame rate of the movie), check to see if the clip is growing
    this.onEnterFrame = function() {
        if (growing) {
            // if it is, check to see if it has reached 130% of its original size
            if (this._xscale < 130) {
                // if it hasn't, keep growing
                this._xscale += 10;
                this._yscale += 10;
            } else {
                // if it has, then indicate that it should no longer grow
                growing = false;
            }
        } else {
            // if the movieclip is not growing, it should be shrinking so
            // check to see if it has gone back to its original size yet
            if (this._xscale > 100) {
                // if not, keep shrinking it
                this._xscale -= 10;
                this._yscale -= 10;
            } else {
                // if it has, then stop it shrinking and start it growing again
                growing = true;
            }
        }
    };
}

// define rollout function for dots (set to original size, stop growing/shrinking)
function stopPulse() {
    this._xscale = 100;
    this._yscale = 100;
    delete this.onEnterFrame;
}

// assign function to event properties
function init() {
    p0.onRollOver = makePulse;
    p0.onRollOut = stopPulse;
}

init();
Go back to the fla and test the movie to see if the code works when you roll over and off the sunset beach control. Of course, you don't have to do all the pulsating with code. If you'd rather do it with a tween sequence inside a movieclip, and then put that movieclip into the _over frame of your control movieclip, or into another labelled frame that you gotoAndStop to onRollOver, you can do that instead. There is never only one way to do things in Flash, and often even no 'absolutely best' way to do things. That said, I've included the code for making a pulsating control above in case you want to see an example of it.
To assign the same functionality to all controls in the movie, change the init function to:
function init() {
    p0.onRollOver = p1.onRollOver = p2.onRollOver = p3.onRollOver = p4.onRollOver = makePulse;
    p0.onRollOut = p1.onRollOut = p2.onRollOut = p3.onRollOut = p4.onRollOut = stopPulse;
}
Test the movie and make sure all controls respond in the same way to mouseovers.
Before writing any more code, we need to set up the slider movieclip with individual photo movieclips in it. Convert the white strip on stage to a movieclip with an upper left registration point. That's important, because we'll be sliding that clip back and forth and need to know what point we're addressing in code. Give this movieclip an instance name slider_mc. Now go in and edit slider_mc, adding a new layer and copying each photo from the library to that layer and sizing it to 215 x 143. The first picture (hawaiian flower) doesn't need to be a movieclip, but the rest do -- convert each photo to a movieclip and give it an instance name pic0, pic1, pic2, pic3, pic4 (where the number is the same as the number used for the corresponding control).
Each photo movieclip should also have an upper left registration point for the code to work correctly, and be spaced at least 10 pixels apart on the slider. (If you're working in Flash MX 2004, while editing the movieclip you may have to Edit, Select All and shift all the contents to the left to be able to access the far right end of the slider. When done editing, Edit, Select All again and set x=0 in the Properties panel. This problem has been fixed in Flash 8 -- as you add more things to the right end of the stage, Flash changes the slider to allow you to access that part of the stage.)
Choosing the p0 (Sunset Beach) control again, we'll write an onRelease function that will cause the slider to move from wherever it currently is to a new position which will enable pic0 (the Sunset Beach picture) to show through the mask, surrounded by a 10-pixel margin. We'll use a movie-wide variable SLIDERSTART (whose capital letters indicate that this is actually a constant: a variable whose value does not in fact vary throughout the movie, but is set up to provide a central place to store this information -- easy to find and change if the value has to be changed later) to indicate the x position of the slider when the first picture is lined up under the mask.
Finally, we'll use two new classes that were introduced in Flash MX 2004, Tween and Easing, to actually move the slider, using an option to easeOut (decelerate) the slider and let the photo sort of glide into place.
The general format for applying a Tween is:

new Tween(movieclip, "property", EasingClass.easingMethod, beginValue, endValue, duration, useSeconds);
Tween types (from this page at macromedia.com): the easing classes available in mx.transitions.easing are Back, Bounce, Elastic, Regular, Strong, and None, each offering easeIn, easeOut, and easeInOut methods.
Use of the Tween and easing classes requires that their class files be imported, so we add this code to the start of frame 1:
import mx.transitions.Tween;
import mx.transitions.easing.*;

var SLIDERSTART:Number = slider_mc._x;
var MARGIN:Number = 10;
(where SLIDERSTART and MARGIN are constants defining the initial position of the slider and the margin between pictures, respectively)
The other piece of information we need to find out or calculate is: what is the x position of slider_mc that will enable the Sunset Beach photo (pic0) to show through the mask? There are two ways to find that out:
Since our goal is to set up some kind of tween that we can apply to all photos in the slider, it would be better to go with option 2, a mathematical calculation that can be applied to all photos. This is the code to calculate the new position of slider_mc to make pic0 within it show through the mask, and then tween slider_mc accordingly when p0 is clicked:
function slideToSunsetBeach() {
    var newpos:Number = SLIDERSTART - slider_mc.pic0._x + MARGIN;
    new Tween(slider_mc, "_x", Strong.easeOut, slider_mc._x, newpos, 2, true);
}
p0.onRelease = slideToSunsetBeach;
Of course, you can try substituting some of the other easing methods for Strong.easeOut to see their different effects. The new Tween line causes the slider to be moved from wherever it is currently (slider_mc._x) to the new position (newpos) we calculated.
As we did with the rollover/rollout code, we need to generalize this onRelease code so that it can be applied to all the controls in the movie. But within the onRelease function above, we refer to the pic0 movieclip by name, so in its current form it is not applicable to other controls. We need to find a generic way to specify the picture movieclip relative to the control that was clicked, so we can refer to it generically.
There are, generally speaking, 3 different ways we could programmatically link a button movieclip like p0 to a corresponding photo movieclip like slider_mc.pic0:
In this example, we'll go with the third option (as of 24 Feb 2006), since that is the most extensible if you want to add more controls and photos later. E.g., in the example we've been looking at, the control is named p0 and the picture (controlled) movieclip is named pic0. All the other control/picture pairs are named similarly. Thus, we can use the rule mentioned in the yellow box here about the two ways to access the properties of an object (in this case, a movieclip) to specify the corresponding photo and assign that to a property of the control.
If we call that property myphoto (we can call it anything we want), this is what the slideToPhoto function will now look like:
function slideToPhoto() {
    var newpos:Number = SLIDERSTART - this.myphoto._x + MARGIN;
    new Tween(slider_mc, "_x", Strong.easeOut, slider_mc._x, newpos, 2, true);
}
That is, instead of using the hardcoded name of the movieclip, slider_mc.pic0._x to get the location to slide to, we'll use a property of the control movieclip to specify that generically, with this.myphoto._x.
So how does the myphoto property get assigned to the control? We'll put the code to do that assignment into the init function, which will also assign the event handlers for onRollOver, onRollOut, and onRelease for the controls. In order to come up with the right code, one needs to make use of the fact that a
movieclip may be accessed either as
slider_mc.pic0
or as
slider_mc["pic0"]
or as
slider_mc["pic" + "0"]
or as
slider_mc["pic" + i] if i is a String variable whose value is "0". (i can actually also be a variable of type Number and Flash will automatically convert it to a String in order to append it to another String).
That is:
You can access a movieclip (either within another holder movieclip or on the main timeline) as either
movieclip holder name . movieclip name
or as
movieclip holder name [ a string representation of the movieclip name ]
where the string representation of the movieclip name can be either a single string (a bunch of characters enclosed by single or double quotes), a concatenation of strings, a variable of type String, or any concatenation of strings and variables of type String.
Strings are concatenated by putting + between them.
If the movieclip you want to access is on the main timeline and your code is also on the main timeline, you can use the keyword this in place of the movieclip holder name.
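As an aside, the same string-built member lookup exists in most dynamic languages. Here is the idea sketched in Python (the `Holder` class and attribute names are illustrative, not from the article), where `getattr` plays the role of ActionScript's `holder["name"]`:

```python
class Holder:
    """Stand-in for a movieclip container such as slider_mc."""
    pass

slider_mc = Holder()

# Create attributes pic0..pic4, mirroring the photo movieclips.
for i in range(5):
    setattr(slider_mc, "pic" + str(i), "photo %d" % i)

# Dot syntax and string-based lookup reach the same member:
assert slider_mc.pic0 == getattr(slider_mc, "pic" + "0")

# A loop can address all of them generically, like slider_mc["pic" + i]:
names = [getattr(slider_mc, "pic" + str(i)) for i in range(5)]
print(names)  # ['photo 0', 'photo 1', 'photo 2', 'photo 3', 'photo 4']
```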
So, this is the init function which will make all the assignments for us, including setting the myphoto property:
function init() {
    // use a loop to set up everything at once.
    for (var i:Number = 0; i < 5; i++) {
        // create property myphoto to point to the corresponding photo movieclip
        this["p" + i].myphoto = slider_mc["pic" + i];
        // assign handler functions to each event property of the control
        this["p" + i].onRollOver = makePulse;
        this["p" + i].onRollOut = stopPulse;
        this["p" + i].onRelease = slideToPhoto;
    }
}
Don't forget to add a call to init at the end of the program. Also, as a final step in the movie shown above, I added a title and instructions. You can open oahumap_slider.fla and oahumap_slider.as (link under Files at right) to see the final file. For a nice rendition of the slider with compact code, see David Manchester's sample listed under Student Samples at right.
On the next page, we'll look at how to indicate which control was selected, and show a caption under each photo after it has slid into place.
last update: 21 Mar 2006
Discussed on this page:
keeping fla and actionscript separate, as file, button pulse on mouseover, Tween class, slider, easing, make generic function, why upper left registration point, addressing movieclips by dot syntax or string representation of name, ways to link a control with a particular movieclip
Files:
oahumap_start.fla
(free download)
oahumap_slider.fla
oahumap_slider
A nice compact version of the code described on this page may be seen in David Manchester's Count With Me slider (published to Flash 8)
http://www.flash-creations.com/notes/actionscript_easingslider.php
On Tue, Nov 3, 2009 at 4:46 PM, Steven D'Aprano <steve at pearwood.info> wrote:

> def pick_two_cards(hand):
>     assert isinstance(hand, (set, frozenset))
>     assert len(hand) == 5
>     return (hand.pick(), hand.pick())

Even if pick() chose at random, you still might end up picking the same card twice. Is that really what you intended?

FWIW, I've been working on an extension module that supplies a "sortedset" type [1]. In most ways, it's similar to a set except it's indexable like a list. The items are kept in sorted order, so index 0 is always the lowest item, index 1 is the next-to-lowest, etc. Because they're indexable, it's easy and efficient to retrieve random elements using the standard library's "random" module. With the sortedset type, that function would become:

def pick_two_cards(hand):
    assert isinstance(hand, (set, frozenset))
    assert len(hand) == 5
    return random.choice(hand), random.choice(hand)

or if you want to avoid duplicates:

    return random.sample(hand, 2)

Would something like that fit your needs?

[1] It's already implemented, along with sortedlist, weaksortedlist, and weaksortedset types. I'm just putting them through the paces in my own products before releasing them.

--
Daniel Stutzbach, Ph.D.
President, Stutzbach Enterprises, LLC <>
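The distinction raised in the post (two independent picks can return the same card, while sampling without replacement cannot) is easy to demonstrate with the standard library's random module. The hand values below are placeholders:

```python
import random

hand = {"AS", "KD", "7H", "2C", "JC"}  # a 5-card hand

# random.choice needs a sequence, so convert the set first.
cards = list(hand)

# Two independent choices CAN collide (sampling with replacement):
pair = (random.choice(cards), random.choice(cards))

# random.sample draws without replacement, so duplicates are impossible:
two = random.sample(cards, 2)
assert two[0] != two[1]
assert set(two) <= hand
```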
https://mail.python.org/pipermail/python-dev/2009-November/093681.html
Published: 01 Mar 2011
By: Xianzhong Zhu
Download Sample Code
In this article I will introduce how to handle XML data in Silverlight applications for Windows Phone 7.
The sample test environments in this article involve:
1. Windows 7;
2. .NET 4.0;
3. Visual Studio 2010;
4. Windows Phone Developer Tools RTW;
5. Silverlight for Windows Phone Toolkit at codeplex ().
XML is a typical and important data form in Windows Phone applications. In this section, we'll discuss a simple yet typical case associated with XML: loading and rendering a local, resource-formed XML file. First of all, let's create a sample XML file (or you can select an existing one).
Start up Visual Studio 2010 and create a general Windows Phone 7 sample project named WP7XML. Then, right click the sample project and select "Add | New Item..." to open the "Add New Item" dialog box. In the dialog select the "XML File" template to create an empty XML file, naming it bookstore.xml.
Note after adding the XML file we need to change its related Build Action property to Resource in the file properties (see Figure 1). The related explanation is given later on.
The following indicates the final contents of the XML document bookstore.xml:
It is important to distinguish the difference between setting the Build Action property of an asset to Content and to Resource in a Windows Phone application. If you set the Build Action property of an asset to Content, it means Visual Studio will copy the related files into the application directory when the program is built. Thus, adding content has no effect on the size of the program itself, although of course when a program is running and loads content it will use more memory. A Resource item is stored in the program assembly. When a Windows Phone device loads a program it looks through the entire program assembly and does some checks to make sure that it is legal code. So, you should be careful about putting too many things in the program assembly.
Till now, we've created a simple and well-formed XML file. Next, we will continue to learn how to read it out and render it onto the screen.
In my previous article at dotnetslackers we mentioned the Application class, which provides a typical site for global data storage. In this case, we are going to use the GetResourceStream method in this class to load the contents of the XML file into a StreamResourceInfo object. The related code is as follows (the sample page is LocalXMLDealingPage.xaml):
First, to use the namespace System.Xml.Linq we need to manually add a reference to the assembly System.Xml.Linq.dll. Second, pay attention to the path in the Uri class, "/WP7XML;component/bookstore.xml" (the same form as in standard Silverlight). In this case, I use LINQ to XML to grab the node information in the XML file. We first load the entire XML stream through the XElement.Load method, then read out all the book information in the ManipulationStarted event handler of the page, and finally assign the result to the Text property of a TextBlock control. In this way, the whole XML tree gets rendered onto the screen.
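The contents of bookstore.xml are not reproduced in this excerpt, so the element names below are assumptions. Still, the pattern described (load the whole tree, then walk the book nodes) is language-neutral; here it is sketched with Python's xml.etree.ElementTree:

```python
import xml.etree.ElementTree as ET

# Hypothetical bookstore.xml contents (the real file is not shown here).
XML = """<bookstore>
  <book>
    <title>Windows Phone 7 Programming</title>
    <author>Xianzhong Zhu</author>
  </book>
  <book>
    <title>Silverlight in Action</title>
    <author>Unknown</author>
  </book>
</bookstore>"""

# Counterpart of XElement.Load: parse the whole tree at once.
root = ET.fromstring(XML)

# Counterpart of the LINQ query over the <book> nodes.
for book in root.iter("book"):
    print(book.findtext("title"), "-", book.findtext("author"))
```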
Additionally, for XML files, we can also directly obtain XElement by invoking XElement.Load("/WP7XML;component/bookstore.xml"). You can further dig into this yourself.
OK, now start up the sample page (I've simply changed the line <DefaultTask Name ="_default" NavigationPage="MainPage.xaml"/> in the file WMAppManifest.xml to <DefaultTask Name ="_default" NavigationPage="LocalXMLDealingPage.xaml"/>) and click somewhere on the screen (this will trigger the ManipulationStarted event handler of the page); you will see the entire XML tree rendered out. Below is a screenshot of the sample page.
Above we've explored a simple example of loading a local XML tree in Windows Phone 7. In fact, the functionalities in Linq to XML in Silverlight for Windows Phone 7 are far more than this. You can continue to look into how to write XML in Silverlight for Windows Phone 7. Let's next shift our attention to another interesting XML related topic.
XAML stands for "Extensible Application Markup Language"; it is the language used by Silverlight to describe what a page should look like. XAML looks a lot like XML, which is based on XML syntax to express items and elements and is quite easy to understand.
In this section, we are going to explore a typical case concerning the XAML data.
In real scenarios, we often need to load some UI elements from files or resource, e.g. the UI elements in an XAML file. Now, let's build a simple example to achieve this.
First, let's prepare an XAML file for the read test. We could, of course, create a text file and then rename its extension to .xaml, but the better way is to rely on the quick-generation abilities of Visual Studio. To do this, follow the steps below:
1. Right click the sample solution WP7XML and select "Add | New Item..." to open the "Add New Item" dialog box.
2. Create an arbitrary file with the extension being .xaml, naming it Ellipse.xaml.
3. Remove the automatically-generated contents, and then add the new contents as follows:
4. Follow the similar steps above to add another file named it Rectangle.xaml.
Now we have created two XAML files: one is a circle, the other a rectangle, with their corresponding properties set up. In the above operation, please bear in mind to add the appropriate XML namespaces (xmlns) corresponding to Silverlight for Windows Phone 7. Creating the XAML files in Visual Studio has the advantage that you can get help from the designer, seeing the effect in real time. Of course, you can also use Expression Blend to do this. Moreover, do not forget to remove the code-behind file *.xaml.cs that Visual Studio automatically generates.
Last but not least, do not forget to set the Build Action property of the preceding two files to Resource; otherwise, when running the sample you will see nothing rendered on the phone.
To parse XAML files we need to use the XamlReader class, which provides a XAML processor engine that analyzes XAML and creates the corresponding Silverlight object tree. In Silverlight for Windows Phone, it contains only a static Load method; this method parses a well-formed XAML fragment, creates the corresponding Silverlight object tree, and returns the root of the tree. The Load method accepts a parameter of type string. Although we could pass the code in the XAML file directly to this method, in real-world projects we usually do not take a hard-coded approach. Instead, we use the same technique as in the previous section: the GetResourceStream method of the Application class. The resulting code looks like the following:
Here I create a generic helper method to return objects of different types. In this method, we use the Application.GetResourceStream method to load the XAML file, and then use StreamReader to read it into a string. At last, we pass this string into the Load method of XamlReader to construct the appropriate type.
Now that we've got the corresponding objects, the next thing is to deal with them on the fly. In this sample, I added two buttons to the program to render two different objects (corresponding to the two XAML files above) on the page. The complete code is as follows.
The Click event handlers for the above two buttons are easy to understand. First, check whether the Grid named ContentPanel contains the target component. If so, remove it and then add the one grabbed out of the related XAML file as a child node of the parent Grid control. If not, just add it. That's all.
Well, let's now look at the running-time snapshots.
The above sample is much too simple. However, you can continue to extend your idea to dynamically load your well-designed animations, game sprites, and more...
Now, let's take a look at a more realistic sample associated with remote XML data manipulation: an RSS reader. Since our main interest focuses on the RSS-related XML data manipulation, this sample application only provides the following functionalities:
First of all, it's necessary to give a simple introduction to RSS data.
RSS is a typical XML-based format that allows the syndication of lists of hyperlinks, along with other information or metadata, that helps users decide whether they want to follow the link. Here's an example of a minimal RSS 2.0 feed:
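The feed listing referred to above is not reproduced in this excerpt; a minimal RSS 2.0 document has the shape below (the channel and item values are placeholders), parsed here with Python's ElementTree as a language-neutral check:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed (placeholder values).
RSS = """<rss version="2.0">
  <channel>
    <title>Example Channel</title>
    <link>http://example.com/</link>
    <description>A minimal RSS 2.0 feed</description>
    <item>
      <title>First post</title>
      <link>http://example.com/first</link>
      <description>Hello, world</description>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(RSS)
channel = root.find("channel")
print(channel.findtext("title"))  # Example Channel
for item in channel.iter("item"):
    print(item.findtext("title"), item.findtext("link"))
```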
Currently, nearly all web sites provide support for RSS 2.0. From the above listing we can see that an RSS file is mainly composed of nodes such as rss, channel, and item. Table 1 below lists the standard elements that constitute an RSS file.
Table 1. Standard elements of an RSS file (Element: Definition)

- Title: The title of this channel
- Link: The hyperlink of the web site for the channel to be linked to
- Description: The description info for this channel
- Language: (omitted)
- managingEditor
- webMaster: The info of the main manager of the web site
- pubDate
- lastBuildDate
- image: The image info within this channel
Now you've gained a fundamental understanding of the typical RSS file schema. Let's next introduce how to grab the remote RSS data.
Windows Phone applications support connecting to and accessing remote servers via a variety of connection mechanisms. However, the present version of the network library in the WP7 OS does not support direct sockets as the Microsoft desktop OSes do. The good news is that the Windows Phone emulator has the same networking abilities as a real device: it can fully leverage the network connectivity of the host PC it is running on. This means that you can test any applications that use the network on your desktop PC.
By now, many readers may have thought of the WebClient class in standard Silverlight. Yes, WebClient is not only an easy-to-use tool, but it also provides support for downloading and uploading Web resources. For more details about WebClient, please refer to MSDN. However, an important thing to remember is that only the asynchronous methods are available in Windows Phone 7.
Although WebClient makes the above things easy, it comes at some hidden cost: WebClient may block your UI thread (its callbacks are made on the UI thread, so any file access will block the UI). If you go the extra mile and use HttpWebRequest, your UI will not be blocked.

WebRequest is an abstract base class encapsulating the request/response model for accessing Internet data in the .NET Framework. Compared with WebClient, the biggest difference is that WebRequest uses a delegate-based asynchronous programming model whose work is performed on a background thread; its request and response callbacks can interact with UI components only by marshaling back to the UI thread. WebClient uses an event-based asynchronous programming model whose invocations all occur on the UI thread, so it is simpler to use.
In this simple example, we'll choose WebClient, since the WebRequest approach has to deal with a lot of asynchronous plumbing. For brevity, we only list the crucial related code.
Obviously, to simplify things, we've fallen back on Microsoft's ready-to-use RSS data manipulation model in the namespace System.ServiceModel.Syndication. Note that to use the namespace System.ServiceModel.Syndication we first have to add the related assembly reference from the path <SystemDrive>:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries\Client.
In the above code, the main components are XmlReader and the method SyndicationFeed.Load. The detailed work of parsing and storing the XML data is done in the foreach statement.
Now, let's look at the sample UI design and related running-time snapshots. Starting up the sample project WP7RssReader, you will catch sight of the initial screenshot as shown in Figure 5.
First, clicking the button View will trigger the related RSS item viewing page (finished via the WebBrowser control in page details.xaml). Second, clicking the button Menu will lead to a little menu pops up, as shown in Figure 6.
Note that in the above sample I've used the ContextMenu control in Silverlight for Windows Phone Toolkit. You can, of course, continue to decorate the menu in more detail. You can also easily replace it with the commonly-used Popup component shipped with Silverlight for Windows Phone 7 itself to achieve a similar result.
Next, clicking the menu item Subscribe can invoke the subscription page. Figure 7 indicates the related running-time screenshot.
Clicking the button OK will add the corresponding RSS channel to the Isolated Storage. And, at the same time, the control will return to the main UI. Note, at this time, the UI shows all the RSS items related title and summary. Also note you may wait half a minute or so to see the rendering result. However, hitting the button Cancel will only navigate the control back to the main UI.
Well, we've talked so much for the RSS reader example. You can download and examine the sample project yourself. As for the Isolated Storage related story, I'll introduce it in another article.
In this article we've covered the basic XML manipulation topics via related samples. As you've seen, XAML data are also, in essence, XML data. We've only explored loading objects from a local XAML file dynamically; in fact, this is also one of the most common situations in various kinds of Windows Phone applications. Finally, we've introduced a simple RSS reader sample application. However, we've only scratched the surface of the XML-related material. For further details, please study the downloadable sample yourself.
http://dotnetslackers.com/articles/silverlight/Windows-Phone-7-Silverlight-Programming-Handling-XML-Data.aspx
Original submitter: sjangity
Description:
Build 925-TP/ML, 918 NB, WinXP
1.5 Project/JDK 1.5.06
1. New userdir
2. Start IDE and w/ new project import ajax complib found here (0.1.1):
3. Drop the AutoComplete Text Component
> project now has a copy of the complib
4. delete the component from designer, so project is not actually using any of
the imported comp's
5. now follow the instructions to delete/deassociate the complib found on this link:
6. when you restart the IDE and load the project, the Source File Error page
surfaces
> turns out the following import stmt is still there in the backing bean
import com.sun.j2ee.blueprints.ui.autocomplete.AutoCompleteComponent;
Evaluation:
Removing a component from the designer does not remove all of its code. "import"
statements still remain.
Evaluation (Entry 2):
Insync has no direct way of knowing which import statements were correct but
simply became incorrect because the user decided to remove the complib. Maybe
the complib removal action knows that, and thus it should tell Insync which
imports should be removed. We could add a service for that. In any case this was
never designed to be supported. This is probably an RFE.
Evaluation (Entry 3):
Has a workaround so making it a P3. Adding a component will add the classes to
the java code, but removing a component only partially removes it because import
statements are still left. It seems like something that the core IDE should be
able to keep track of. I'll assign to insync for more eval. In any case, I don't
think this is a major problem.
Evaluation (Entry 4):
Insync does not do such clean up when the user removes the library. In fact this
kind of error will happen with a normal java project also.
Evaluation (Entry 5):
The problem is that the import statement is not being removed. The workaround is
to remove it manually. You can also use "fix imports". I think this really is a
problem for insync to solve, but I'm afraid of assigning it there. I'll change
this to an RFE because I think there are more serious problems to solve.
Evaluation (Entry 6):
This is more of a refactoring type of RFE. The use case is to remove a component
library and also any components and application related references to that
component. This is a difficult task. See workaround for more info.
Workaround:
Before removing a component library from a project, the user must first remove
all references to classes within the component library. This can include
selecting components from the Navigator and deleting them but typically also
means removing references to those components from the user's application. One
can also run "fix imports". After that, the component library can be removed
from the project.
https://bz.apache.org/netbeans/show_bug.cgi?id=93749
Highlights
- NumPy is a core Python library every data science professional should be well acquainted with
- This comprehensive NumPy tutorial covers NumPy from scratch, from basic mathematical operations to how Numpy works with image data
- Plenty of Numpy concepts and Python code in this article
Introduction
I am a huge fan of the NumPy library in Python. I have relied on it countless times during my data science journey to perform all sorts of tasks, from basic mathematical operations to using it for image classification!
In short – NumPy is one of the most fundamental libraries in Python and perhaps the most useful of them all. NumPy handles large datasets effectively and efficiently. I can see your eyes glinting at the prospect of mastering NumPy already. 🙂 As a data scientist or as an aspiring data science professional, we need to have a solid grasp on NumPy and how it works in Python.
In this article, I am going to start off by describing what the NumPy library is and why you should prefer it over the ubiquitous but cumbersome Python lists. Then, we will cover some of the most basic NumPy operations that will get you hooked on to this awesome library!
If you’re new to Python, don’t worry! You can take the comprehensive (and free) Python course to learn everything you need to get started with data science programming!
Here’s how we’ll learn NumPy:
- What is the NumPy Library in Python?
- Python list vs NumPy arrays – What’s the Difference?
- Creating a NumPy Array
- Basic ndarray
- Array of zeros
- Array of ones
- Random numbers in ndarray
- An array of your choice
- Identity matrix in NumPy
- Evenly spaced ndarray
- The Shape and Reshaping of NumPy Array
- Dimensions of NumPy array
- Shape of NumPy array
- Size of NumPy array
- Reshaping a NumPy array
- Flattening a NumPy array
- Transpose of a NumPy array
- Expanding and Squeezing a NumPy Array
- Expanding a NumPy array
- Squeezing a NumPy array
- Indexing and Slicing of NumPy Array
- Slicing 1-D NumPy arrays
- Slicing 2-D NumPy arrays
- Slicing 3-D NumPy arrays
- Negative slicing of NumPy arrays
- Stacking and Concatenating Numpy Arrays
- Stacking ndarrays
- Concatenating ndarrays
- Broadcasting in Numpy Arrays – A class apart!
- NumPy Ufuncs – The secret of its success!
- Maths with NumPy Arrays
- Mean, Median and Standard deviation
- Min-Max values and their indexes
- Sorting in NumPy Arrays
- NumPy Arrays and Images
What is the NumPy library in Python?
NumPy stands for Numerical Python and is one of the most useful scientific libraries in Python programming. It provides support for large multidimensional array objects and various tools to work with them. Various other libraries like Pandas, Matplotlib, and Scikit-learn are built on top of this amazing library.
Arrays are a collection of elements/values that can have one or more dimensions. An array of one dimension is called a Vector, while an array of two dimensions is called a Matrix.
NumPy arrays are called ndarray or N-dimensional arrays and they store elements of the same type and size. It is known for its high-performance and provides efficient storage and data operations as arrays grow in size.
NumPy comes pre-installed when you download Anaconda. But if you want to install NumPy separately on your machine, just type the below command on your terminal:
pip install numpy
Now you need to import the library:
import numpy as np
np is the de facto abbreviation for NumPy used by the data science community.
Python Lists vs NumPy Arrays – What’s the Difference?
If you’re familiar with Python, you might be wondering why use NumPy arrays when we already have Python lists? After all, these Python lists act as an array that can store elements of various types. This is a perfectly valid question and the answer to this is hidden in the way Python stores an object in memory.
A Python object is actually a pointer to a memory location that stores all the details about the object, like bytes and the value. Although this extra information is what makes Python a dynamically typed language, it also comes at a cost which becomes apparent when storing a large collection of objects, like in an array.
Python lists are essentially an array of pointers, each pointing to a location that contains the information related to the element. This adds a lot of overhead in terms of memory and computation. And most of this information is rendered redundant when all the objects stored in the list are of the same type!
To overcome this problem, we use NumPy arrays that contain only homogeneous elements, i.e. elements having the same data type. This makes it more efficient at storing and manipulating the array. This difference becomes apparent when the array has a large number of elements, say thousands or millions. Also, with NumPy arrays, you can perform element-wise operations, something which is not possible using Python lists!
This is the reason why NumPy arrays are preferred over Python lists when performing mathematical operations on a large amount of data.
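A quick sketch of the difference in behavior: multiplying a NumPy array by 2 works element-wise, while the same operator on a Python list simply repeats the list:

```python
import numpy as np

nums_list = [1, 2, 3, 4]
nums_array = np.array([1, 2, 3, 4])

# Element-wise multiplication on the ndarray
doubled_array = nums_array * 2   # each element is doubled

# The same operator on a list repeats it instead
repeated_list = nums_list * 2

print(doubled_array)   # [2 4 6 8]
print(repeated_list)   # [1, 2, 3, 4, 1, 2, 3, 4]
```

To get element-wise behavior with a list you would need an explicit loop or comprehension, which is exactly the overhead NumPy avoids.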
Creating a NumPy Array
Basic ndarray
NumPy arrays are very easy to create given the complex problems they solve. To create a very basic ndarray, you use the np.array() method. All you have to pass are the values of the array as a list:
np.array([1,2,3,4])
Output:
array([1, 2, 3, 4])
This array contains integer values. You can specify the type of data in the dtype argument:
np.array([1,2,3,4],dtype=np.float32)
Output:
array([1., 2., 3., 4.], dtype=float32)
Since NumPy arrays can contain only homogeneous datatypes, values will be upcast if the types do not match:
np.array([1,2.0,3,4])
Output:
array([1., 2., 3., 4.])
Here, NumPy has upcast integer values to float values.
NumPy arrays can be multi-dimensional too.
np.array([[1,2,3,4],[5,6,7,8]])
array([[1, 2, 3, 4], [5, 6, 7, 8]])
Here, we created a 2-dimensional array of values.
Note: A matrix is just a rectangular array of numbers with shape N x M where N is the number of rows and M is the number of columns in the matrix. The one you just saw above is a 2 x 4 matrix.
Array of zeros
NumPy lets you create an array of all zeros using the np.zeros() method. All you have to do is pass the shape of the desired array:
np.zeros(5)
array([0., 0., 0., 0., 0.])
The one above is a 1-D array while the one below is a 2-D array:
np.zeros((2,3))
array([[0., 0., 0.], [0., 0., 0.]])
Array of ones
Similarly, you can create an array of all ones using the np.ones() method. You can also specify the data type:
np.ones(5,dtype=np.int32)
array([1, 1, 1, 1, 1])
Random numbers in ndarrays
Another very commonly used method to create ndarrays is the np.random.rand() method. It creates an array of a given shape with random values from [0,1):
# random
np.random.rand(2,3)
array([[0.95580785, 0.98378873, 0.65133872], [0.38330437, 0.16033608, 0.13826526]])
An array of your choice
Or, in fact, you can create an array filled with any given value using the np.full() method. Just pass in the shape of the desired array and the value you want:
np.full((2,2),7)
array([[7, 7], [7, 7]])
Identity matrix in NumPy
Another great method is np.eye(), which returns an array with 1s along its diagonal and 0s everywhere else.
An Identity matrix is a square matrix that has 1s along its main diagonal and 0s everywhere else. Below is an Identity matrix of shape 3 x 3.
Note: A square matrix has an N x N shape. This means it has the same number of rows and columns.
# identity matrix
np.eye(3)
array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
However, NumPy gives you the flexibility to change the diagonal along which the values have to be 1s. You can either move it above the main diagonal:
# not an identity matrix
np.eye(3,k=1)
array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
Or move it below the main diagonal:
np.eye(3,k=-2)
array([[0., 0., 0.], [0., 0., 0.], [1., 0., 0.]])
Note: A matrix is called the Identity matrix only when the 1s are along the main diagonal and not any other diagonal!
Evenly spaced ndarray
You can quickly get an evenly spaced array of numbers using the np.arange() method:
np.arange(5)
array([0, 1, 2, 3, 4])
The start, end and step size of the interval of values can be explicitly defined by passing in three numbers as arguments for these values respectively. A point to be noted here is that the interval is defined as [start,end) where the last number will not be included in the array:
np.arange(2,10,2)
array([2, 4, 6, 8])
Alternate elements were printed because the step-size was defined as 2. Notice that 10 was not printed because the end of the interval is excluded.
Another similar function is np.linspace(), but instead of step size, it takes in the number of samples that need to be retrieved from the interval. A point to note here is that the last number is included in the values returned unlike in the case of np.arange().
np.linspace(0,1,5)
array([0. , 0.25, 0.5 , 0.75, 1. ])
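np.linspace() also accepts an endpoint parameter if you prefer arange-style behavior where the end value is excluded. A quick sketch:

```python
import numpy as np

# Default: the end value 1 is included in the samples
with_end = np.linspace(0, 1, 5)                      # [0.   0.25 0.5  0.75 1.  ]

# endpoint=False: the interval becomes [0, 1), like np.arange
without_end = np.linspace(0, 1, 5, endpoint=False)   # [0.  0.2 0.4 0.6 0.8]

print(with_end)
print(without_end)
```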
Great! Now you know how to create arrays using NumPy. But it's also important to know the shape of the array.
The Shape and Reshaping of NumPy Arrays
Once you have created your ndarray, the next thing you would want to do is check the number of axes, shape, and the size of the ndarray.
Dimensions of NumPy arrays
You can easily determine the number of dimensions or axes of a NumPy array using the ndim attribute:
# number of axes
a = np.array([[5,10,15],[20,25,20]])
print('Array :','\n',a)
print('Dimensions :','\n',a.ndim)
Array : [[ 5 10 15] [20 25 20]] Dimensions : 2
This array has two dimensions: 2 rows and 3 columns.
Shape of NumPy array
The shape is an attribute of the NumPy array that shows how many elements there are along each dimension. You can further index the shape tuple returned by the ndarray to get the value along each dimension:
a = np.array([[1,2,3],[4,5,6]])
print('Array :','\n',a)
print('Shape :','\n',a.shape)
print('Rows = ',a.shape[0])
print('Columns = ',a.shape[1])
Array : [[1 2 3] [4 5 6]] Shape : (2, 3) Rows = 2 Columns = 3
Size of NumPy array
You can determine how many values there are in the array using the size attribute. It just multiplies the number of rows by the number of columns in the ndarray:
# size of array
a = np.array([[5,10,15],[20,25,20]])
print('Size of array :',a.size)
print('Manual determination of size of array :',a.shape[0]*a.shape[1])
Size of array : 6 Manual determination of size of array : 6
Reshaping a NumPy array
You can change the shape of an ndarray without changing its data using the reshape() method:
# reshape
a = np.array([3,6,9,12])
np.reshape(a,(2,2))
array([[ 3, 6], [ 9, 12]])
Here, I reshaped the ndarray from a 1-D to a 2-D ndarray.
While reshaping, if you are unsure about the shape of any of the axis, just input -1. NumPy automatically calculates the shape when it sees a -1:
a = np.array([3,6,9,12,18,24])
print('Three rows :','\n',np.reshape(a,(3,-1)))
print('Three columns :','\n',np.reshape(a,(-1,3)))
Three rows : [[ 3 6] [ 9 12] [18 24]] Three columns : [[ 3 6 9] [12 18 24]]
Flattening a NumPy array
Sometimes when you have a multidimensional array and want to collapse it to a single-dimensional array, you can either use the flatten() method or the ravel() method:
a = np.ones((2,2))
b = a.flatten()
c = a.ravel()
print('Original shape :', a.shape)
print('Array :','\n', a)
print('Shape after flatten :',b.shape)
print('Array :','\n', b)
print('Shape after ravel :',c.shape)
print('Array :','\n', c)
Original shape : (2, 2) Array : [[1. 1.] [1. 1.]] Shape after flatten : (4,) Array : [1. 1. 1. 1.] Shape after ravel : (4,) Array : [1. 1. 1. 1.]
But an important difference between flatten() and ravel() is that the former returns a copy of the original array while the latter returns a reference to the original array. This means any changes made to the array returned from ravel() will also be reflected in the original array while this will not be the case with flatten().
b[0] = 0
print(a)
[[1. 1.] [1. 1.]]
The change made was not reflected in the original array.
c[0] = 0
print(a)
[[0. 1.] [1. 1.]]
But here, the changed value is also reflected in the original ndarray.
What is happening here is that flatten() creates a Deep copy of the ndarray while ravel() creates a Shallow copy of the ndarray.
Deep copy means that a completely new ndarray is created in memory and the ndarray object returned by flatten() is now pointing to this memory location. Therefore, any changes made here will not be reflected in the original ndarray.
A Shallow copy, on the other hand, returns a reference to the original memory location. Meaning the object returned by ravel() is pointing to the same memory location as the original ndarray object. So, definitely, any changes made to this ndarray will also be reflected in the original ndarray too.
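A quick way to verify this copy-versus-view distinction is the ndarray's base attribute: a view keeps a reference to the original array in base, while a true copy has base set to None. A small sketch:

```python
import numpy as np

a = np.ones((2, 2))
b = a.flatten()   # deep copy: owns its own data
c = a.ravel()     # shallow copy: a view on a's data (when possible)

# flatten() made a copy, so b has no base array
print(b.base is None)   # True

# ravel() returned a view, so c's base is the original array a
print(c.base is a)      # True
```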
Transpose of a NumPy array
Another very interesting reshaping method of NumPy is the transpose() method. It takes the input array and swaps its rows with its columns:
a = np.array([[1,2,3], [4,5,6]])
b = np.transpose(a)
print('Original','\n','Shape',a.shape,'\n',a)
print('Transpose :','\n','Shape',b.shape,'\n',b)
Original Shape (2, 3) [[1 2 3] [4 5 6]] Transpose : Shape (3, 2) [[1 4] [2 5] [3 6]]
On transposing a 2 x 3 array, we got a 3 x 2 array. Transpose has a lot of significance in linear algebra.
Expanding and Squeezing a NumPy array
Expanding a NumPy array
You can add a new axis to an array using the expand_dims() method by providing the array and the axis along which to expand:
# expand dimensions
a = np.array([1,2,3])
b = np.expand_dims(a,axis=0)
c = np.expand_dims(a,axis=1)
print('Original:','\n','Shape',a.shape,'\n',a)
print('Expand along columns:','\n','Shape',b.shape,'\n',b)
print('Expand along rows:','\n','Shape',c.shape,'\n',c)
Original: Shape (3,) [1 2 3] Expand along columns: Shape (1, 3) [[1 2 3]] Expand along rows: Shape (3, 1) [[1] [2] [3]]
Squeezing a NumPy array
On the other hand, if you instead want to reduce the axis of the array, use the squeeze() method. It removes the axis that has a single entry. This means if you have created a 2 x 2 x 1 matrix, squeeze() will remove the third dimension from the matrix:
# squeeze
a = np.array([[[1,2,3], [4,5,6]]])
b = np.squeeze(a, axis=0)
print('Original','\n','Shape',a.shape,'\n',a)
print('Squeeze array:','\n','Shape',b.shape,'\n',b)
Original Shape (1, 2, 3) [[[1 2 3] [4 5 6]]] Squeeze array: Shape (2, 3) [[1 2 3] [4 5 6]]
However, if you already had a 2 x 3 matrix, with no axis of length 1, using squeeze() in that case would give you an error:
# squeeze
a = np.array([[1,2,3], [4,5,6]])
b = np.squeeze(a, axis=0)
print('Original','\n','Shape',a.shape,'\n',a)
print('Squeeze array:','\n','Shape',b.shape,'\n',b)
Indexing and Slicing of NumPy array
So far, we have seen how to create a NumPy array and how to play around with its shape. In this section, we will see how to extract specific values from the array using indexing and slicing.
Slicing 1-D NumPy arrays
Slicing means retrieving elements from one index to another index. All we have to do is to pass the starting and ending point in the index like this: [start: end].
However, you can even take it up a notch by passing the step-size. What is that? Well, suppose you wanted to print every other element from the array, you would define your step-size as 2, meaning get the element 2 places away from the present index.
Incorporating all this into a single index would look something like this: [start:end:step-size].
a = np.array([1,2,3,4,5,6])
print(a[1:5:2])
[2 4]
Notice that the last element did not get considered. This is because slicing includes the start index but excludes the end index.
A way around this is to write the next higher index to the final index value you want to retrieve:
a = np.array([1,2,3,4,5,6])
print(a[1:6:2])
[2 4 6]
If you don’t specify the start or end index, it is taken as 0 or array size, respectively, as default. And the step-size by default is 1.
a = np.array([1,2,3,4,5,6])
print(a[:6:2])
print(a[1::2])
print(a[1:6:])
[1 3 5] [2 4 6] [2 3 4 5 6]
Slicing 2-D NumPy arrays
Now, a 2-D array has rows and columns so it can get a little tricky to slice 2-D arrays. But once you understand it, you can slice any dimension array!
Before learning how to slice a 2-D array, let’s have a look at how to retrieve an element from a 2-D array:
a = np.array([[1,2,3], [4,5,6]])
print(a[0,0])
print(a[1,2])
print(a[1,0])
1 6 4
Here, we provided the row value and column value to identify the element we wanted to extract. While in a 1-D array, we were only providing the column value since there was only 1 row.
So, to slice a 2-D array, you need to mention the slices for both, the row and the column:
a = np.array([[1,2,3],[4,5,6]])
# print first row values
print('First row values :','\n',a[0:1,:])
# with step-size for columns
print('Alternate values from first row:','\n',a[0:1,::2])
# print second column values
print('Second column values :','\n',a[:,1::2])
print('Arbitrary values :','\n',a[0:1,1:3])
First row values : [[1 2 3]] Alternate values from first row: [[1 3]] Second column values : [[2] [5]] Arbitrary values : [[2 3]]
Slicing 3-D NumPy arrays
So far we haven't seen a 3-D array. Let's first visualize what a 3-D array looks like:
# 3-D array
a = np.array([[[1,2],[3,4],[5,6]],          # first axis array
              [[7,8],[9,10],[11,12]],       # second axis array
              [[13,14],[15,16],[17,18]]])   # third axis array
print(a)
[[[ 1 2] [ 3 4] [ 5 6]] [[ 7 8] [ 9 10] [11 12]] [[13 14] [15 16] [17 18]]]
In addition to the rows and columns, as in a 2-D array, a 3-D array also has a depth axis where it stacks one 2-D array behind the other. So, when you are slicing a 3-D array, you also need to mention which 2-D array you are slicing. This usually comes as the first value in the index:
# value
print('First array, first row, first column value :','\n',a[0,0,0])
print('First array last column :','\n',a[0,:,1])
print('First two rows for second and third arrays :','\n',a[1:,0:2,0:2])
First array, first row, first column value : 1 First array last column : [2 4 6] First two rows for second and third arrays : [[[ 7 8] [ 9 10]] [[13 14] [15 16]]]
If in case you wanted the values as a single dimension array, you can always use the flatten() method to do the job!
print('Printing as a single array :','\n',a[1:,0:2,0:2].flatten())
Printing as a single array : [ 7 8 9 10 13 14 15 16]
Negative slicing of NumPy arrays
An interesting way to slice your array is to use negative slicing. Negative slicing prints elements from the end rather than the beginning. Have a look below:
a = np.array([[1,2,3,4,5], [6,7,8,9,10]])
print(a[:,-1])
[ 5 10]
Here, the last values for each row were printed. If, however, we wanted to extract from the end, we would have to explicitly provide a negative step-size otherwise the result would be an empty list.
print(a[:,-1:-3:-1])
[[ 5 4] [10 9]]
Having said that, the basic logic of slicing remains the same, i.e. the end index is never included in the output.
An interesting use of negative slicing is to reverse the original array.
a = np.array([[1,2,3,4,5], [6,7,8,9,10]])
print('Original array :','\n',a)
print('Reversed array :','\n',a[::-1,::-1])
Original array : [[ 1 2 3 4 5] [ 6 7 8 9 10]] Reversed array : [[10 9 8 7 6] [ 5 4 3 2 1]]
You can also use the flip() method to reverse an ndarray.
a = np.array([[1,2,3,4,5], [6,7,8,9,10]])
print('Original array :','\n',a)
print('Reversed array vertically :','\n',np.flip(a,axis=1))
print('Reversed array horizontally :','\n',np.flip(a,axis=0))
Original array : [[ 1 2 3 4 5] [ 6 7 8 9 10]] Reversed array vertically : [[ 5 4 3 2 1] [10 9 8 7 6]] Reversed array horizontally : [[ 6 7 8 9 10] [ 1 2 3 4 5]]
Stacking and Concatenating NumPy arrays
Stacking ndarrays
You can create a new array by combining existing arrays. This you can do in two ways:
- Either combine the arrays vertically (i.e. along the rows) using the vstack() method, thereby increasing the number of rows in the resulting array
- Or combine the arrays in a horizontal fashion (i.e. along the columns) using the hstack() method, thereby increasing the number of columns in the resultant array
a = np.arange(0,5)
b = np.arange(5,10)
print('Array 1 :','\n',a)
print('Array 2 :','\n',b)
print('Vertical stacking :','\n',np.vstack((a,b)))
print('Horizontal stacking :','\n',np.hstack((a,b)))
Array 1 : [0 1 2 3 4] Array 2 : [5 6 7 8 9] Vertical stacking : [[0 1 2 3 4] [5 6 7 8 9]] Horizontal stacking : [0 1 2 3 4 5 6 7 8 9]
A point to note here is that the axis along which you are combining the array should have the same size otherwise you are bound to get an error!
a = np.arange(0,5)
b = np.arange(5,9)
print('Array 1 :','\n',a)
print('Array 2 :','\n',b)
print('Vertical stacking :','\n',np.vstack((a,b)))
print('Horizontal stacking :','\n',np.hstack((a,b)))
Another interesting way to combine arrays is using the dstack() method. It combines array elements index by index and stacks them along the depth axis:
a = [[1,2],[3,4]]
b = [[5,6],[7,8]]
c = np.dstack((a,b))
print('Array 1 :','\n',a)
print('Array 2 :','\n',b)
print('Dstack :','\n',c)
print(c.shape)
Array 1 : [[1, 2], [3, 4]] Array 2 : [[5, 6], [7, 8]] Dstack : [[[1 5] [2 6]] [[3 7] [4 8]]] (2, 2, 2)
Concatenating ndarrays
While stacking arrays is one way of combining old arrays to get a new one, you could also use the concatenate() method where the passed arrays are joined along an existing axis:
a = np.arange(0,5).reshape(1,5)
b = np.arange(5,10).reshape(1,5)
print('Array 1 :','\n',a)
print('Array 2 :','\n',b)
print('Concatenate along rows :','\n',np.concatenate((a,b),axis=0))
print('Concatenate along columns :','\n',np.concatenate((a,b),axis=1))
Array 1 : [[0 1 2 3 4]] Array 2 : [[5 6 7 8 9]] Concatenate along rows : [[0 1 2 3 4] [5 6 7 8 9]] Concatenate along columns : [[0 1 2 3 4 5 6 7 8 9]]
The drawback of this method is that the original array must have the axis along which you want to combine. Otherwise, get ready to be greeted by an error.
Another very useful function is the append() method that adds new elements to the end of an ndarray. This is obviously useful when you already have an existing ndarray but want to add new values to it.
# append values to ndarray
a = np.array([[1,2], [3,4]])
np.append(a,[[5,6]], axis=0)
array([[1, 2], [3, 4], [5, 6]])
Broadcasting in NumPy arrays – A class apart!
Broadcasting is one of the best features of ndarrays. It lets you perform arithmetic operations between ndarrays of different sizes, or between an ndarray and a simple number!
Broadcasting essentially stretches the smaller ndarray so that it matches the shape of the larger ndarray:
a = np.arange(10,20,2)
b = np.array([[2],[2]])
print('Adding two different size arrays :','\n',a+b)
print('Multiplying an ndarray and a number :',a*2)
Adding two different size arrays : [[12 14 16 18 20] [12 14 16 18 20]] Multiplying an ndarray and a number : [20 24 28 32 36]
Its working can be thought of as stretching or making copies of the smaller operand (for example, copying the scalar into [2, 2, 2]) to match the shape of the larger ndarray, and then performing the operation element-wise. But no such copies are actually made; this is just a way of thinking about how broadcasting works.
This is very useful because it is more efficient to multiply an array with a scalar value rather than another array! It is important to note that two ndarrays can broadcast together only when they are compatible.
Ndarrays are compatible when:
- Both have the same dimensions
- Either of the ndarrays has a dimension of 1. The one having a dimension of 1 is broadcast to meet the size requirements of the larger ndarray
In case the arrays are not compatible, you will get a ValueError.
a = np.ones((3,3))
b = np.array([2])
a+b
array([[3., 3., 3.], [3., 3., 3.], [3., 3., 3.]])
Here, the second ndarray was stretched, hypothetically, to a 3 x 3 shape, and then the result was calculated.
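To see what happens when the shapes are not compatible, here is a small sketch that triggers the ValueError mentioned above:

```python
import numpy as np

a = np.ones((3, 3))
b = np.array([1, 2])   # shape (2,): neither 3 nor 1, so incompatible with (3, 3)

try:
    result = a + b
except ValueError as e:
    # NumPy reports that the operands could not be broadcast together
    error_message = str(e)

print(error_message)
```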
NumPy Ufuncs – The secret of its success!
Python is a dynamically typed language. This means the data type of a variable does not need to be known at the time of the assignment. Python will automatically determine it at run-time. While this means a cleaner and easier code to write, it also makes Python sluggish.
This problem manifests itself when Python has to do many operations repeatedly, like the addition of two arrays. This is because each time an operation needs to be performed, Python has to check the data type of the element. NumPy overcomes this problem using ufuncs.
The way NumPy makes this work faster is by using vectorization. Vectorization performs the same operation on ndarray in an element-by-element fashion in a compiled code. So the data types of the elements do not need to be determined every time, thereby performing faster operations.
ufuncs, or Universal functions, are simply fast element-wise mathematical functions in NumPy. They are called automatically when you perform simple arithmetic operations on NumPy arrays, because the standard operators act as wrappers for the corresponding NumPy ufuncs.
For example, when adding two NumPy arrays using ‘+’, the NumPy ufunc add() is automatically called behind the scene and quietly does its magic:
a = [1,2,3,4,5]
b = [6,7,8,9,10]
%timeit a+b
a = np.arange(1,6)
b = np.arange(6,11)
%timeit a+b
You can see how the same addition of two arrays has been done in significantly less time with the help of NumPy ufuncs!
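If you want to call the ufunc explicitly, np.add() produces exactly the same result as the '+' operator. A quick sketch:

```python
import numpy as np

a = np.arange(1, 6)
b = np.arange(6, 11)

# '+' on ndarrays dispatches to the np.add ufunc behind the scenes
via_operator = a + b
via_ufunc = np.add(a, b)

print(via_operator)                              # [ 7  9 11 13 15]
print(np.array_equal(via_operator, via_ufunc))   # True
```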
Maths with NumPy arrays
Here are some of the most important and useful operations that you will need to perform on your NumPy array.
Basic arithmetic operations on NumPy arrays
The basic arithmetic operations can easily be performed on NumPy arrays. The important thing to remember is that these arithmetic operator symbols just act as wrappers for NumPy ufuncs.
a = np.arange(1,6)
print('Subtract :',a-5)
print('Multiply :',a*5)
print('Divide :',a/5)
print('Power :',a**2)
print('Remainder :',a%5)
Subtract : [-4 -3 -2 -1 0] Multiply : [ 5 10 15 20 25] Divide : [0.2 0.4 0.6 0.8 1. ] Power : [ 1 4 9 16 25] Remainder : [1 2 3 4 0]
Mean, Median and Standard deviation
To find the mean, median, and standard deviation of a NumPy array, use the mean(), median() and std() methods:
a = np.arange(5,15,2)
print('Mean :',np.mean(a))
print('Standard deviation :',np.std(a))
print('Median :',np.median(a))
Mean : 9.0 Standard deviation : 2.8284271247461903 Median : 9.0
Min-Max values and their indexes
Min and Max values in an ndarray can be easily found using the min() and max() methods:
a = np.array([[1,6], [4,3]])
# minimum along a column
print('Min :',np.min(a,axis=0))
# maximum along a row
print('Max :',np.max(a,axis=1))
Min : [1 3] Max : [6 4]
You can also easily determine the index of the minimum or maximum value in the ndarray along a particular axis using the argmin() and argmax() methods:
a = np.array([[1,6,5], [4,3,7]])
# minimum along a column
print('Min :',np.argmin(a,axis=0))
# maximum along a row
print('Max :',np.argmax(a,axis=1))
Min : [0 1 0] Max : [1 2]
Let me break down the output for you. The minimum value for the first column is the first element along the column. For the second column, it is the second element. And for the third column, it is the first element.
You can similarly determine what the output for maximum values indicates.
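As an aside not covered above: calling argmax() without an axis argument returns an index into the flattened array, and np.unravel_index() converts it back to (row, column) coordinates. A small sketch:

```python
import numpy as np

a = np.array([[1, 6, 5],
              [4, 3, 7]])

# Without an axis, argmax() indexes the flattened array
flat_index = np.argmax(a)                         # 5, pointing at the value 7

# Convert the flat index back into coordinates for the original shape
row, col = np.unravel_index(flat_index, a.shape)
print(flat_index, (row, col))                     # 5 (1, 2)
```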
Sorting in NumPy arrays
For any programmer, the time complexity of any algorithm is of prime importance. Sorting is an important and very basic operation that you might well use on a daily basis as a data scientist. So, it is important to use a good sorting algorithm with minimal time complexity.
The NumPy library is a legend when it comes to sorting elements of an array. It has a range of sorting functions that you can use to sort your array elements. It has implemented quicksort, heapsort, mergesort, and timsort for you under the hood when you use the sort() method:
a = np.array([1,4,2,5,3,6,8,7,9])
np.sort(a, kind='quicksort')
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
You can even sort the array along any axis you desire:
a = np.array([[5,6,7,4], [9,2,3,7]])
# sort along the column
print('Sort along column :','\n',np.sort(a, kind='mergesort',axis=1))
# sort along the row
print('Sort along row :','\n',np.sort(a, kind='mergesort',axis=0))
Sort along column : [[4 5 6 7] [2 3 7 9]] Sort along row : [[5 2 3 4] [9 6 7 7]]
NumPy arrays and Images
NumPy arrays find wide use in storing and manipulating image data. But what is image data really?
Images are made up of pixels that are stored in the form of an array. Each pixel has a value ranging between 0 to 255 – 0 indicating a black pixel and 255 indicating a white pixel. A colored image consists of three 2-D arrays, one for each of the color channels: Red, Green, and Blue, placed back-to-back thus making a 3-D array. Each value in the array constitutes a pixel value. So, the size of the array depends on the number of pixels along each dimension.
Have a look at the image below:
Python can read the image as an array using the scipy.misc.imread() method in the SciPy library (note that this method is deprecated in newer SciPy releases; imageio.imread() is the modern replacement). And when we output it, it is simply a 3-D array containing the pixel values:
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc

# read image
im = misc.imread('./original.jpg')
# image
im
array([[[115, 106, 67], [113, 104, 65], [112, 103, 64], ..., [160, 138, 37], [160, 138, 37], [160, 138, 37]], [[117, 108, 69], [115, 106, 67], [114, 105, 66], ..., [157, 135, 36], [157, 135, 34], [158, 136, 37]], [[120, 110, 74], [118, 108, 72], [117, 107, 71], ...,
We can check the shape and type of this NumPy array:
print(im.shape)
print(type(im))
(561, 997, 3) numpy.ndarray
Now, since an image is just an array, we can easily manipulate it using the array functions that we have looked at in this article. For example, we could flip the image horizontally using the np.flip() method:
# flip
plt.imshow(np.flip(im, axis=1))
Or you could normalize or change the range of values of the pixels. This is sometimes useful for faster computations.
im/255
array([[[0.45098039, 0.41568627, 0.2627451 ], [0.44313725, 0.40784314, 0.25490196], [0.43921569, 0.40392157, 0.25098039], ..., [0.62745098, 0.54117647, 0.14509804], [0.62745098, 0.54117647, 0.14509804], [0.62745098, 0.54117647, 0.14509804]], [[0.45882353, 0.42352941, 0.27058824], [0.45098039, 0.41568627, 0.2627451 ], [0.44705882, 0.41176471, 0.25882353], ..., [0.61568627, 0.52941176, 0.14117647], [0.61568627, 0.52941176, 0.13333333], [0.61960784, 0.53333333, 0.14509804]], [[0.47058824, 0.43137255, 0.29019608], [0.4627451 , 0.42352941, 0.28235294], [0.45882353, 0.41960784, 0.27843137], ..., [0.6 , 0.52156863, 0.14117647], [0.6 , 0.52156863, 0.13333333], [0.6 , 0.52156863, 0.14117647]], ...,
Remember this is using the same concept of ufuncs and broadcasting that we saw in the article!
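If you don't have an image file handy, the same ideas can be tried on a tiny synthetic array. The pixel values below are made up for illustration; they stand in for the JPEG data read above:

```python
import numpy as np

# A tiny synthetic 2 x 2 'image' with 3 color channels (RGB),
# standing in for the image array read from disk above
im = np.array([[[255, 0, 0], [0, 255, 0]],
               [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Normalizing broadcasts the scalar 255 over every pixel value
normalized = im / 255

# Mirror the image horizontally, as in the flip example above
flipped = np.flip(im, axis=1)

print(normalized.max())        # 1.0
print(flipped[0, 0].tolist())  # the green pixel moved to the left: [0, 255, 0]
```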
There are a lot more things that you could do to manipulate your images that would be useful when you are classifying images using Neural Networks. If you are interested in building your own image classifier, you could head here for an amazing tutorial on the topic!
End Notes
Phew – take a deep breath. We’ve covered a LOT of ground in this article. You are well acquainted with the use of NumPy arrays and are all guns blazing to incorporate it into your daily analysis tasks.
To get to know more about any NumPy function, check out their official documentation where you will find a detailed description of each and every function.
Going forward, I recommend you to explore the following courses on Data Science to help in your journey to becoming an awesome Data Scientist!
Using your project docs inside the application
The applications I work on have markdown docs. These can be in the docs/ folder for example as
docs/webhooks.md
But some of these docs have value to the user of the UI not just the developer, and when we include these docs inside the application repo it is a TON easier to just update them as you fix and make new features in the codebase.
You can have the best of both worlds with a simple-to-use Markdown library.
The Controller
This then allows me, in my controllers, to get some content from these docs. For example:
<?php

namespace App\Http\Controllers;

use App\Http\Requests;
use Michelf\MarkdownExtra;

class HelpController extends Controller
{
    public function api()
    {
        $text = file_get_contents(base_path('docs/webhooks.md'));
        $webhooks = MarkdownExtra::defaultTransform($text);
        return view('help.api', compact('webhooks'));
    }
}
The Blade Template File
Then in the blade template, all I need to do to show those docs is:
@extends('layouts.default')

@section('content')
<div class="page-header">
    <h1>API Help</h1>
</div>
<div class="row">
    <div class="col-lg-12">
        <div class="wrapper wrapper-content animated fadeInRight">
            <div class="ibox-content">
                {!! $webhooks !!}
            </div>
        </div>
    </div>
</div>
@endsection
Since this is a private repo and we review the code, using "{!!" is not so bad. But keep in mind you are trusting what is in these files! Of course a simple
$webhooks = strip_tags($webhooks, "tags you allow here");
Will help out there.
The Markdown
Then just write your file as normal in markdown!
You could say that only the language changes; the classes are largely unaltered.
Start a new C# Windows forms project and add a reference to:
dtSearchNetApi4.dll
which you will generally find in
C:\Program Files\dtSearch Developer\bin
or
C:\Program Files (x86)\dtSearch Developer\bin
(there are versions for earlier .NET assemblies but in most cases version 4 is what you should be using).
Also add:
using dtSearch.Engine;
to save having to type out fully qualified names.
To get started, all you really need to know is that the key class doing most of the search-related work is SearchJob. Whenever you are trying to get to grips with a new API, finding the class (or small number of classes) where it all starts is usually the way to get on top of it fast. In this case, once you know that SearchJob is what you need to set up a search of an index, it is all remarkably easy.
Place a button on the form and in its click event handler we first create an instance of SearchJob:
SearchJob SJob1 = new SearchJob();
Before we can perform the search we need to set up some details. First we need to specify the location of the index:
SJob1.IndexesToSearch.Add(@"C:\Users\ ian\AppData\Local\dtSearch\test");
Of course you have to replace the string with the full path to the index you are using. Notice that you can specify multiple indexes because the property is a collection.
Next we need to specify what we are searching for. This can be done in two ways. Using the Request property to specify search terms or using the BooleanConditions property to specify a logical expression involving search terms. For example:
SJob1.BooleanConditions = "Hello and World";
will search for documents containing "Hello" and "World" in the index and
SJob1.BooleanConditions = "Hello or World";
will search for documents containing "Hello" or "World" in the index. Following this there are a range of optional parameters you can set. For example:
SJob1.MaxFilesToRetrieve = 10;
You can set all of the more sophisticated search options at this point - filters, stemming, fuzzy search, exclusions etc.
Now we are ready to perform the search. You can do it as a blocking call or you can use an event to work asynchronously. The simplest option is to use a blocking call:
SJob1.Execute();
but note that once you call this method your entire application is frozen until the search is complete or an error occurs. This isn't too bad with a small index but of course it quickly becomes unacceptable. As well as using an event to process the data asynchronously you could also use a worker thread to run the search - again not difficult but not specific to using dtSearch.
When the call to Execute completes, the SearchJob instance has properties which return the results of the search. For example, the HitCount property gives an integer holding the number of hits the search returned:
MessageBox.Show(SJob1.HitCount.ToString());
More importantly, SearchJob returns a SearchResults object via its Results property. This provides a collection of documents that the search found. To use this collection you call the GetNthDoc method to make the nth document the current document, and then you can use various properties to return its details. For example:
SearchResults results = SJob1.Results;
for (int i = 0; i < results.Count; ++i)
{
    results.GetNthDoc(i);
    listBox1.Items.Add(results.DocName);
}
This simply adds the document names to a ListBox placed on the form.
Yes it really is this easy.
Of course I've left out the usual error handling to make it easier to follow, but this isn't difficult to add - there is an Error property that you can test. It also doesn't take into account that the results object could be very large indeed. In this case garbage collection might be a problem, so you should use the "using" construct to ensure that the results object is disposed of when you are finished with it. Again, not difficult.
The next step in most uses of an index search is converting the results to something more suitable. See the FileConverter Class for an easy way to convert to HTML, RTF or text. You can also export the results as XML. If you also want to control the construction and maintenance of the index itself then you need to look up the IndexJob object which is very similar to the SearchJob object.
Building an application around dtSearch is more a matter of what you do with the search results and in many cases how you allow the search to be specified by the user.
Then there are many other features that we haven't even mentioned - CDsearch, Websearch and setting up the web Spider to name just three, but these are other stories.
To try dtSearch for yourself download the 30-day evaluation from dtsearch.com.
https://www.i-programmer.info/programming/database/2701-getting-started-with-dtsearch.html?start=1
#include <Wire.h>
#include <RTClib.h>             // Real time clock
#include <Adafruit_Sensor.h>    // Barometer
#include <Adafruit_BMP085_U.h>  // Barometer
#include <SPI.h>                // SD card
#include <SD.h>                 // SD card
// Libraries from LCD example
//#include <OneWire.h>
//#include <LiquidCrystal.h>
I need a way to identify what String() function or what piece of library code is messing up my development projects.
So the question is: how do I trace the program execution,
LiquidCrystal lcd(2,3,6,7,8,9); // lcd(RS,EN,D4,D5,D6,D7)
The compiler message doesn't seem to say I am running out of memory.
can you show me a code example of printing out the address of a string variable?
char string [20];
// ...
Serial.println ((unsigned int)string, HEX);
// OR
Serial.println ((unsigned int)&string[0], HEX);
is there any way I can create a couple of pointers that will tell me the exact memory address of some of my data values?
If I am out of memory well how do I print out the memory addresses of the last data items?
http://forum.arduino.cc/index.php?topic=216710.0
Data::Type::Filter - cleans values before they are subjected to facets
package Data::Type::Object::std_langcode;

...

sub _filters : method
{
    return ( [ 'strip', '\s' ], [ 'chomp' ], [ 'lc' ] );
}
package Data::Type::Filter::chomp;

our @ISA = ( 'Data::Type::Filter::Interface' );

our $VERSION = '0.01.25';

sub desc : method { 'chomps' }

sub info : method { 'chomps' }

sub filter : method
{
    my $this = shift;

    chomp $Data::Type::value;
}
Chomps (as perl chomp()).
Lower cases (as perl lc()).
Upper cases (as perl uc()).
A simple s/what// operation as
$Data::Type::value =~ s/$what//go;
Collapses any arbitrary repeats of what to a single occurrence.
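As an illustration only (the module itself is Perl), the effect of a filter chain such as ('strip', '\s'), ('chomp'), ('lc') can be sketched in Python; the function names below mirror the filter names but are otherwise my own.

```python
import re

def strip(value, what):
    """Remove every occurrence of the pattern, like s/$what//g."""
    return re.sub(what, "", value)

def chomp(value):
    """Drop a single trailing newline, like Perl's chomp()."""
    return value[:-1] if value.endswith("\n") else value

def collapse(value, what):
    """Collapse arbitrary repeats of `what` to a single occurrence."""
    return re.sub(f"(?:{what})+", what, value)

# The std_langcode chain: strip whitespace, chomp, then lower-case.
value = " EN-us \n"
for f in (lambda v: strip(v, r"\s"), chomp, str.lower):
    value = f(value)
print(value)  # en-us
```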
Sourceforge is hosting a project dedicated to this module, and I enjoy receiving your comments, suggestions and reports.
Murat Uenalan, <muenalan@cpan.org>
http://search.cpan.org/~muenalan/Data-Type-0.02.02/lib/Data/Type/Filter.pm
I had an app that was working properly with old versions of wxPython.
Now with wxPython 3.0, when trying to run the app, I get the following error:
File "C:\Python27\lib\site-packages\wx-3.0-msw\wx\_controls.py", line 6523, in __init__
_controls_.DatePickerCtrl_swiginit(self,_controls_.new_DatePickerCtrl(*args, **kwargs))
wx._core!
File "C:\Users\hadi\Dropbox\Projects\Python\dialysis\profile.py", line 159, in __init__
style=wx.DP_DROPDOWN)
I know it's been a while since this question was asked, but I just had the same issue and thought I'd add my solution in case someone else finds this thread. Basically what's happening is that the locale of your script is somehow conflicting with the locale of the machine, although I'm not sure how or why. Maybe someone else with more specific knowledge on this can fill that in. Try manually setting the locale using the wxPython object wx.Locale:
locale = wx.Locale(wx.LANGUAGE_ENGLISH)
However, make sure that you assign the output to a non-local variable. As soon as the variable goes out of scope, the Locale object is destructed. So if it's in a class:
class MyApp(wx.App):
...
def OnInit(self):
self.locale = wx.Locale(wx.LANGUAGE_ENGLISH)
...
https://codedump.io/share/dd9wAWmKCkes/1/wxpython-30-breaks-older-apps-locale-error
IETF Draft Sets up Public Namespaces!!
Re:Some further possibilities (Score:3, Informative)
But my major point is that metadata without trust is not very useful in today's world. Any reference I made to links was only incidental (describing the current search engine situation). The draft covers, for example, formatting for a Library of Congress control number URI. And so on. Any organization which wishes to standardize its namespace can apply to NISO to Make It So (tm). NISO assumes the responsibility of making sure that if the Library of Congress is using "lccn", then the Literary Clubs of Congo Nationalists cannot. And that's it. That's all this does.
https://developers.slashdot.org/story/03/09/30/164210/ietf-draft-sets-up-public-namespaces/informative-comments
Library tutorials & articles
Retrieving HTTP content in .NET
- Introduction
- New HTTP tools in .NET
- HTTP Cookies
- Wrapping it up
- POSTing data
- Firing events
- We can walk and chew gum at the same time!
New HTTP tools in .NET
The .NET Framework provides new tools for retrieving HTTP content that are powerful and scalable in a single package. If you ever worked in pre-.NET applications and tried to retrieve HTTP content, you probably know that there were a number of different tools available: WinInet (Win32 API), XMLHTTP (part of MSXML) and, more recently, the WinHTTP COM library. Each of these tools worked in some situations, but none of them really fit the bill for all instances. For example, WinInet can't scale on the server because it has no multi-threading support. XMLHTTP was too simple and didn't support all aspects of the HTTP model. WinHTTP, the latest Microsoft COM tool, solves many of these problems, but it doesn't work at all on Win9x, which makes it a bad choice for a client tool integrated into broad-distribution apps, at least until XP takes a strong hold.
The .NET framework greatly simplifies HTTP access with a pair of classes
HttpWebRequest and
HttpWebResponse. These classes provide just about all of the functionality provided through the HTTP protocol in a straightforward manner. The basics of returning content from the Web requires very little code (see Listing 1).
Listing 1: Simple retrieval of Web data over HTTP.
string lcUrl = "";
// *** Establish the request
HttpWebRequest loHttp =
(HttpWebRequest) WebRequest.Create(lcUrl);
// *** Set properties
loHttp.Timeout = 10000; // 10 secs
loHttp.UserAgent = "Code Sample Web Client";
// *** Retrieve request info headers
HttpWebResponse loWebResponse = (HttpWebResponse) loHttp.GetResponse();
Encoding enc = Encoding.GetEncoding(1252); // Windows default Code Page
StreamReader loResponseStream =
new StreamReader(loWebResponse.GetResponseStream(),enc);
string lcHtml = loResponseStream.ReadToEnd();
loWebResponse.Close();
loResponseStream.Close();
Pretty simple, right? But beneath this simplicity lies a lot of power too. Let's start by looking at how this works.
Start by creating the
HttpWebRequest object, which is the base object used to initiate a Web request. A call to the static
WebRequest.Create() method is used to parse the URL and pass the resolved URL into the request object. This call will throw an exception if the URL passed has invalid URL syntax. The request object exposes properties which map directly to header values that get sent with the request.
In the example, I do nothing much with the request other than setting a couple of the optional properties – the UserAgent (the client 'browser' which is blank otherwise) and the Timeout for the request. If you need to POST data to the server you'll need to do a little more work – I'll talk about this a little later.
Streaming good deals
Once the HTTP Request is configured for sending the data, a call to
GetResponse() actually goes out and sends the HTTP request to the Web Server. At this point the request sends the headers and retrieves the first HTTP result buffer from the Web Server.
The GetResponse() call returns an HttpWebResponse object, and calling GetResponseStream() on it returns a stream. The stream points at the actual binary HTTP response from the Web server. Streams give you a lot of flexibility in handling how data is retrieved from the Web server.
As mentioned, the call to
GetResponse() only returned an initial internal buffer – to retrieve the actual data and read the rest of the result document from the Web server you have to read the stream.
In the example above I use a StreamReader object to return a string from the data in a single operation. But realize that because a stream is returned I could access the stream directly and read smaller chunks to, say, provide status information on the progress of the HTTP download.
Notice also that when the StreamReader is created I had to explicitly provide an encoding type – in this case CodePage 1252 which is the Windows default codepage. This is important because the data is transferred as a byte stream and without the encoding it would result in invalid character translations for any extended characters. CodePage 1252 works fairly well for English or European language content, as well as binary content. Ideally though you will need to decide at runtime which encoding to use – for example a binary file probably should write a stream out to file or other location rather than converting to a string, while a page from Japan should use the appropriate Unicode encoding for that language.
StreamReader also exposes the underlying raw stream using the BaseStream property, so StreamReader is a good object to use to pass streamed data around.
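The encoding caveat is not specific to .NET. As a quick illustration (in Python, since the point is language-independent), the same bytes decode very differently under Windows-1252, Latin-1 and UTF-8:

```python
# The byte 0x93 is a curly left quote in Windows-1252, an obscure control
# character in Latin-1, and an invalid start byte in UTF-8.
data = b"\x93quoted\x94"

print(data.decode("cp1252"))   # “quoted”
print(data.decode("latin-1"))  # control characters: visually garbage
try:
    data.decode("utf-8")
except UnicodeDecodeError as e:
    print("utf-8 failed:", e.reason)  # invalid start byte
```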
POSTing data
The example above only retrieves data which is essentially an HTTP GET request. If you want to send data to the server you can use an HTTP POST operation. POSTing data refers to the process of taking data and sending it to the Web server as part of the request payload. A POST operation both sends data to the server and retrieves a response.
Posting uses a stream to send the data to the server, so the process of posting data is pretty much the reverse of retrieving the data (see listing 2).
Listing 2: POSTing data to the Web Server
string lcUrl = "";
HttpWebRequest loHttp =
(HttpWebRequest) WebRequest.Create(lcUrl);
// *** Send any POST data
string lcPostData =
"Name=" + HttpUtility.UrlEncode("Rick Strahl") +
"&Company=" + HttpUtility.UrlEncode("West Wind ");
loHttp.Method="POST";
byte [] lbPostBuffer = System.Text.
Encoding.GetEncoding(1252).GetBytes(lcPostData);
loHttp.ContentLength = lbPostBuffer.Length;
Stream loPostData = loHttp.GetRequestStream();
loPostData.Write(lbPostBuffer,0,lbPostBuffer.Length);
loPostData.Close();
HttpWebResponse loWebResponse = (HttpWebResponse) loHttp.GetResponse();
Encoding enc = System.Text.Encoding.GetEncoding(1252);
StreamReader loResponseStream =
new StreamReader(loWebResponse.GetResponseStream(),enc);
string lcHtml = loResponseStream.ReadToEnd();
loWebResponse.Close();
loResponseStream.Close();
Make sure you use this POST code immediately before the
HttpWebRequest.GetResponse() call. All other manipulation of the Request object has no effect, as the headers get sent with the POST buffer. The rest of the code is identical to what was shown before – you retrieve the Response and then read the stream to grab the result data.
POST data needs to be properly encoded when sent to the server. If you're posting information to a Web page you'll have to make sure to properly encode your POST buffer into key value pairs and using
URLEncoding for the values. You can utilize the static method
System.Web.HttpUtility.UrlEncode() to encode the data. In this case make sure to include the System.Web namespace in your project. Note this is necessary only if you're posting to a typical HTML page – if you're posting XML or other application content you can just post the raw data as is. This is all much easier to do using a custom class like the one included with this article. This class has an
AddPostKey method and, depending on the POST mode, it will take any parameters and properly encode them into an internally managed stream which is then POSTed to the server.
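The key=value wire format being built here is the standard one; as a cross-check (an illustration in Python, not part of the article's class), the standard library produces the same encoding:

```python
from urllib.parse import urlencode, quote_plus

# Build the same POST body as the C# sample above.
body = urlencode({"Name": "Rick Strahl", "Company": "West Wind"})
print(body)                       # Name=Rick+Strahl&Company=West+Wind

# Individual values are escaped the same way:
print(quote_plus("Rick Strahl"))  # Rick+Strahl
```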
To send the actual data in the POST buffer, the data has to be converted to a byte array first. Again we need to properly encode the string, using Encoding.GetEncoding(1252) with the
GetBytes() method, which returns a byte array in the Windows standard ANSI code page. You should then set the
ContentLength property so the server can know the size of the data stream coming in. Finally you can write the POST data to the server using an output stream returned from
HttpWebRequest.GetRequestStream(). Simply write the entire byte array out to the stream in one
Write() method call with the appropriate size of the byte array. This writes the data and waits for completion. As with the retrieval operation, the stream operations are what actually cause data to be sent to the server, so if you want to provide progress information you can send smaller chunks and give feedback to the user as needed.
http://www.developerfusion.com/article/4637/retrieving-http-content-in-net/2/
how are the objects passed by value or by reference
Chandra Bairi
Ranch Hand
Joined: Sep 12, 2003
Posts: 152
posted
Nov 12, 2003 02:48:00
0
hello, can anyone tell me how objects in
java
are passed - by value or by reference? some books say they are passed by reference and some books say they are passed by value. what is true exactly? are there any cases where they are passed by value or by reference?
normally i feel objects in java are passed by reference. can anyone help me out on this matter.
thanks.
Thanks,
Shekar
Jonathan Zaleski
Greenhorn
Joined: Nov 11, 2003
Posts: 2
posted
Nov 12, 2003 03:44:00
0
Java technically passes all parameters to a method 'by value.' That means the current value of the actual parameter is copied into the formal parameter in the method header. Essentially, parameter passing is like an assignment statement, assigning to the formal parameter a copy of the value stored in the actual parameter.
This issue ultimately must be considered when making changes to a formal parameter inside a method. The formal parameter is a separate copy of the value that was passed in, so any changes made to it have no effect on the actual parameter. After control returns to the calling method, the actual parameter will have the same value as it did prior to the method being called.
Calling an object 'by reference' calls the actual object by its memory location.
If at some time an object has no references to it, a program cannot use it, and thus the 'garbage-collector' takes heed. The 'garbage-collector' is done periodically or can be called, but what it does is reclaim the memory that was used by the now 'unreferenced' object.
Hopefully this can be of some aid to you..
Regards, and good luck,
Jonathan W. Zaleski
fred rosenberger
lowercase baba
Bartender
Joined: Oct 02, 2003
Posts: 10798
12
I like...
posted
Nov 12, 2003 07:03:00
0
it gets very confusing... as i think about it, everything is passed by value, but what i really have is a reference to an object. so, the REFERENCE is passed by value.
if i have
myOjbect objRef = new myObject();,
objRef is like an envelope i can use to send mail to the "real" object. i can't touch the real object itself, but i can send and receive messages from it.
when i pass objRef into a function, it's like making a NEW envelope with the same address on it. so, i have passed the VALUE of the original envelope into my function.
this new envelope will now send mail to the same place, can effect the same changes, etc. to the object down below. note that if i change the address on this new envelope, the address on the old one doesn't change.
this is a subtle point. if i use this envelope to change the object, those changes are reflected back in the calling routine when i refer to the object again. but if i change the address on the inner envelope, that does NOT change the address on the outer one - it still points to the same object as before (which may have been changed).
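to make the envelope analogy concrete, here is a small self-contained example (my own, not from the thread): reassigning the parameter inside a method leaves the caller's reference untouched, while using the parameter to modify the object is visible to the caller.

```java
// Minimal sketch of the "new envelope" idea: the method gets a copy of
// the reference, so reassigning it has no effect on the caller's
// variable, but mutating the shared object through it does.
public class EnvelopeDemo {
    static void readdress(StringBuilder sb) {
        sb = new StringBuilder("changed inside"); // only the local copy changes
    }

    static void scribble(StringBuilder sb) {
        sb.append(" world"); // uses the copy to modify the shared object
    }

    public static void main(String[] args) {
        StringBuilder message = new StringBuilder("hello");
        readdress(message);
        System.out.println(message); // hello
        scribble(message);
        System.out.println(message); // hello world
    }
}
```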
i still get confused on this. i know there are some campfire stories from the main page that help explain this.
hope i didn't confuse you too much...
f
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
KunkalaGuntala Samba Siva Rao
Greenhorn
Joined: Nov 06, 2003
Posts: 20
posted
Nov 12, 2003 07:19:00
0
i guess it's just the wording that is confusing..
"reference is passed by value" is the correct answer..
this was an old example here..
public class Dog {
    protected String name;

    public Dog(String name) {
        this.name = name;
    }

    public static void main(String[] args) {
        Dog dog = new Dog("Sparky");
        dog.printName(); // prints 'Sparky' for obvious reasons

        // Think of the reference 'dog' as a "remote control" for the object. It
        // "points" to the object and allows you to interact with the object; in
        // this case, a Dog.
        changeNameToBruce(dog);
        dog.printName(); // prints 'Bruce'; see description in the method...

        changeReference(dog);
        dog.printName(); // prints 'Bruce' because we passed a copy
                         // of the reference (pass-by-value) and NOT
                         // the reference itself (pass-by-reference)
    }

    public static void changeNameToBruce(Dog dog) {
        // gets a copy of the reference 'dog', which still points
        // to the original Dog object, therefore, changes made to the
        // original object by using this copy of the reference will
        // "show up" when using the original reference.
        dog.name = "Bruce";
    }

    public static void changeReference(Dog dog) {
        // gets a copy of the reference dog, which still points
        // to the original Dog object, BUT...
        // we take this copy of the reference and reassign it to
        // a different Dog object.
        // This DOES NOT affect the original (non-copy) reference.
        dog = new Dog("Magic");
    }

    public void printName() {
        System.out.println(name);
    }
}
http://www.coderanch.com/t/394844/java/java/objects-passed-reference
03 January 2012 07:53 [Source: ICIS news]
SINGAPORE (ICIS)--BP has asked its contractor Halliburton to pay for all the costs and expenses it incurred from the 2010 Gulf of Mexico oil spill, media reports said, citing a court filing.
The UK-based oil and gas giant has set aside $20bn (€15.4bn) for economic claims and natural resource restoration and had spent $14bn in the Gulf Coast region in response to the oil spill as of 1 December last year, the company said on its website.
Halliburton was BP's cement contractor for the Macondo well that suffered a blowout, Reuters quoted the court filing as saying.
The filing, made by BP's lawyer Don Haycraft in a US Federal court, did not specify a figure on the amount of damages BP is seeking from Halliburton, the Reuters report added.
BP and Halliburton are locked in a legal battle with a trial expected in 2012 to settle damages claims, according to the BBC.
http://www.icis.com/Articles/2012/01/03/9519594/bp-seeks-gulf-of-mexico-spill-costs-from-halliburton-reports.html
|
My intention for this blog post was to see how fast I could, with basically zero practical cloud experience, deploy a Java application in just that: the cloud. For this purpose I decided to go for Azure Cloud Services. Additionally, I made up my mind to containerize my Java application using Docker, which promises an easier deployment by including all the binaries and libraries needed.
This post is not meant for Docker or Microsoft Azure experts, but for people like me: those who have already written some Java code and may have peeked a little into the Docker topic, who have never touched "the cloud", but are interested to see whether this is possible without much background, or even without spending any money in the first place. To see how you can achieve this in just a few hours of work…
To put it bluntly, it was much easier than I thought it would be. Just to let you know from the start, I did not check what’s the best way to achieve my goal, what the drawbacks are and how to make it quicker. I will just try to give you a summary of what worked well for me. I hope that you can follow along and also deploy your first code in the cloud.
- If you already have a suitable Java project, start with step 2.
- If you already have a dockerized Java application, you can start with step 3.
- If it is crashing for you in step 3, you can give it a try or you may go back to step 1.
- If you already have some Docker container running in Azure, you are in the wrong place here ;-)
Step 1: Let’s have a small and simple Java application
Because I wanted to start quickly with the actual deployment, I decided to go for a new SpringBoot project. However, it should also work with most of the “basic” Java projects you can find out there.
If you have a project yourself, just try it out. If you just don’t feel like making things on your own, here is what I chose to do.
I decided to build a tiny RESTful web service and added the following controller class to the project.
package com.deployJavaOnAzure.playwithazure;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/azure/docker/hello")
public class sayHello {

    @GetMapping
    public String hello() {
        return "I'm containerized Java code running in Azure";
    }
}
To check if this works I also added the following line to the
application.properties file.
server.port=8085
You should now be able to build and run your application. To test this in a browser go to: localhost:8085/azure/docker/hello
Your browser should lie to you now, but hopefully not for long.
If you found this boring, you should have jumped directly to step 2. If this was new for you: congratulations, you ran your first SpringBoot RESTful service. If you feel ready, continue to the next level.
Step 2: Put it in a Docker container
Now we want to deploy our application inside a Docker container. “Why one should do such a thing” and how to install Docker on your machine will not be covered here, so I jump right into what I did next. To build a new Docker image we create a
Dockerfile within the project repository.
Along with our application we have to put all needed dependencies and libraries as well. Fortunately, there are plenty of images out there you can build on. All we need is a version of the openjdk image provided by the public repository on DockerHub. I decided to go for the latest alpine release, because it leads to smaller images which will be beneficial especially in the cloud.
Adding our recently created application and opening the port 8085 the
Dockerfile will look as follows:
FROM openjdk:8-alpine
ADD build/libs/dockerizeMe.jar dockerizeMe.jar
EXPOSE 8085
ENTRYPOINT ["java","-jar","dockerizeMe.jar"]
To get a better name for the build artifact I just add the following lines to the
build.gradle file.
bootJar { archiveName = 'dockerizeMe.jar' }
To build the new image, run the following command in the Powershell (inside your project’s repository):
IN:  docker build -t javaimage .
OUT: ... some lines skipped here ...
     Something saying "Successfully built ..." should be fine.
To check if we didn’t mess things up, we try to create a container instance from the image we just created.
IN:  docker run -p 8085:8085 javaimage
OUT: [the usual Spring Boot ASCII-art banner]
     :: Spring Boot ::        (v2.2.6.RELEASE)
If you see this output, you did it right.
Again, we can simply test it: localhost:8085/azure/docker/hello
Congratulations! If you still see the message in your browser, now a bit less of an exaggeration than before, you have a running Java application, this time deployed inside a Docker container.
If you find it tedious to build a local Docker image as in the intermediate step when we only want to run the container in Azure, you should have a look at this alternative way.
Step 3: Finally, deploy your container to the cloud
Last but not least, we can finally take a look at the cloud provider, namely Microsoft Azure. When you visit the website it (hopefully still) tells you that you can have a free Azure account for at least some time. Please create yourself an account by following the instructions there.
Hopefully, you could successfully create an account for yourself. When you now head to the portal home you are greeted by quite an overloaded but simple to use GUI overview. I will show you how to use it to run your first Java code inside the cloud in no time.
Add all the following components to your account by selecting “Create a resource …”.
First you need to create a “Resource group”, where you’ll put all the components that belong together later on.
You only need to select a name and region. Which values you choose doesn’t matter too much for now. For simplicity just keep the default selection. Then add a “Container Registry”, where we can push our recently created Docker image.
It is also possible to skip this step and work without your own "Container Registry" on Azure. You can use a local repository or some other repository to which you have access, for example your DockerHub repository. But then you need to upload the image each time you want to redeploy it.
Here is my selection, which is sufficient for a small example like ours. Please make sure to enable “Admin user” if you want to use the
docker login option later.
To push our image to the right place we first have to tag it accordingly. In the Powershell do:
docker tag javaimage deployjavaregistry.azurecr.io/javaimage
To actually push the image to this registry you need to first login with your account credentials. There are several ways to do this, and some of them seem quite troublesome. I decided to use Azure CLI, which you can either run in the Cloud Shell or install on your machine (that’s what I did). Now you simply log in with your account credentials. This generates a token, which is automatically reused in the subsequent steps. In the shell do:
az login
You can now log in to your “Container Registry” without further authentication. All further Docker commands are now also provided with the access token.
az acr login --name deployjavaregistry
If you don’t want to go for the Azure CLI option, you can also log in with Docker. Having enabled “Admin user” in the “Container Registry” you can get the credentials from “deployjavaregistry/Access keys”.
docker login deployjavaregistry.azurecr.io
Hopefully, you have overcome the authentication step and we can push our image to the cloud. In case you have problems with authentication, this page may help you.
docker push deployjavaregistry.azurecr.io/javaimage
This may take some time depending on your connection. Check in "deployjavaregistry/Repositories" whether the image arrived safely. If so, we can finally deploy it by choosing the image and selecting "Run instance".
Give it a name and select port 8085. After the process finished you should find a new “Container instance” in your “Resource group”. By selecting it, you should be able to get its public IP address.
Let’s see if our code stopped spreading fake news: {IPAddress}:8085/azure/docker/hello
Well done! You have deployed a containerized Java application in Azure!
Now, feel free to play around with your account and image. If you care about the cost, make sure you delete the services you don’t need anymore.
I hope you could follow along and enjoyed the little trip to the cloud. Stay tuned for further posts to come.
https://blog.oio.de/2020/05/11/how-to-run-a-containerized-java-application-in-the-cloud-on-microsoft-azure/
|
Example of Reflexive composition relation.
Devesh H Rao
Ranch Hand
Joined: Feb 09, 2002
Posts: 687
I like...
posted
Jul 03, 2003 05:05:00
0
Hi ppl,
i just was having a talk with my prof regarding the various types of relationships between objects, such as aggregation, association and composition, and their practical examples.
we became stuck for an example of reflexive composition ..
can anybody help out with a practical example of the same
just curious ..!!!
Frank Carver
Sheriff
Joined: Jan 07, 1999
Posts: 6920
posted
Jul 03, 2003 07:29:00
0
Reflexive composition is (as far as I know) used in a class diagram when one object refers to one or more other objects of the same type.
We might design a tree in which each Node holds a reference to a "parent" Node object and zero or more "child" Node objects.
We might design a list in which each Element object holds a reference to the "next" Element object.
We might design a system model in which each Person object holds a reference to a "next of kin" Person object.
And so on. Has that helped?
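A minimal sketch of the tree case (my own code, not from the thread): each Node creates and owns child Nodes of its own class, giving a reflexive composition.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the tree example: each Node is composed of
// child Nodes of the same class, and the children have no life of
// their own outside the tree -- a reflexive composition.
public class Node {
    private final String name;
    private Node parent;                        // reference back up the tree
    private final List<Node> children = new ArrayList<>();

    public Node(String name) {
        this.name = name;
    }

    // The parent creates and owns its children; they are never built
    // independently, which is what makes this composition rather than
    // mere association.
    public Node addChild(String childName) {
        Node child = new Node(childName);
        child.parent = this;
        children.add(child);
        return child;
    }

    public Node getParent() { return parent; }
    public int childCount() { return children.size(); }
    public String getName() { return name; }
}
```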
Read about me at frankcarver.me
~
Raspberry Alpha Omega
~
Frank's Punchbarrel Blog
Devesh H Rao
Ranch Hand
Joined: Feb 09, 2002
Posts: 687
I like...
posted
Jul 03, 2003 23:29:00
0
Hi frank,
thanx for the input
Reflexive Composition
1> The parent.
An example of a composition would be a Thread and a process, where the process has no existence outside the thread....
i would like an example of reflexive composition where the parent and child are basically from the same class..
The examples you have provided, i think, do not fulfill the encapsulation requirement, though i may be wrong....
Frank Carver
Sheriff
Joined: Jan 07, 1999
Posts: 6920
posted
Jul 06, 2003 13:32:00
0
1> The parent.
...
i would like an example of reflexive composition
where the parent and child are basically from the
same class..
Ah. I'd missed the nuances of the "composition" part. Strangely enough I was actually working with one of these a couple weeks ago, although there was no UML to show that it was.

In the system I am working on, there are several "entity" style objects which hold data loaded from a database (each aggregating the results of several distinct queries and stored procedure calls, and providing methods to update the database from changed information). The original (horrible) code had a lot of member variables such as:

String networkId;
String controllerId;
int activeInterfaces;
Interface[] interfaces;

It also had an identical list of variables

String local_networkId;
String local_controllerId;
int local_activeInterfaces;
Interface[] local_interfaces;

and some clumsy code to copy from one group to another, initialise the two groups, etc. The original intention was to separate the original values (loaded from the database) from the ones modified by the user, so that when the object is sent back to the database, only changed values need be updated.

After several irritating bugs due to mistakes and misunderstandings in updating this code, we refactored it so that each class in question had just one list of member variables, but also had a reference to another instance of the same class, created in the constructor. This enabled the "secondary" object to hold the initial database values for reference, but not to "get in the way" of the functioning of the "primary" object. Making the two objects instances of the same class ensured that they would always have exactly the same list of member variables.

I'm not holding this up as a particularly good design, but given the original state of the code it is certainly an improvement, and helps prepare the ground for refactoring to more intelligent designs later.

Has that helped?
Frank Carver
Sheriff
Joined: Jan 07, 1999
Posts: 6920
posted
Jul 07, 2003 02:04:00
0
Sorry for the cramped format of that last post, I had to use a text-mode browser
Devesh H Rao
Ranch Hand
Joined: Feb 09, 2002
Posts: 687
I like...
posted
Jul 08, 2003 02:45:00
0
//The class and its instance share a parent/child relationship
//The class itself is the parent and it contains a static variable
//of the type Singleton itself which is private hence is
//completely hidden from the client which forms its child

//The parent according to the said theory
public class Singleton {
    //The child
    private static Singleton _oSingleton = null;

    //The constructor being private it cannot be accessed from anywhere but the class method
    private Singleton() {
    }

    // The only method which can create the instance of the object for u.
    private static void createSingleton() {
        if (_oSingleton == null) {
            _oSingleton = new Singleton();
        }
    }

    // The method which is exposed to the client and which accesses the parent ie
    // the Class method and the client is completely unaware of the child.
    public static String getSomething() {
        createSingleton();
        return _oSingleton.doSomeThing();
    }

    //The handshake method of the instance of object for the method exposed by the parent ie the class
    private String doSomeThing() {
        return "";
    }
}
[ July 08, 2003: Message edited by: Devesh H Rao ]
I agree. Here's the link:
http://www.coderanch.com/t/98313/patterns/Reflexive-composition-relation
default resource strings now provided by InitializerStringResourceLoader
i think this is an ok compromise but not an ideal implementation. imho, the proper way would be to change IResourceStreamLocator methods that return IResourceStream to return IResourceStream[] properly representing the fact that there can be multiple resources on the classpath. but this is a much bigger refactor...
the initializer solution suffers the same problem if two jars contain an initializer with the same fully-qualified name, e.g. two Initializer classes that happen to be in the same package
not likely, but still possible
also, i think its ok to backport this into 1.5.x and deprecate the old Jar variant.
I had a look at your patch and it looks good.
One observation however: with the previous (flawed) scheme, a resource key namespace was created for each jar. With your modifications, this is no longer the case (i.e. a resource key is looked up in all initializers in whatever order they were added to the application). I don't think this is a problem presently, but some resource keys could be problematic (e.g. NavigatorLabel, UploadStatusResource.status, etc.) because they are somewhat generically named. Should the Wicket built-in resource keys be changed to something prefixed by the name of the module like "wicket-extensions.whatever"?
As for porting the modifications to 1.5, I don't think anybody had time to depend on the old resource loader class your patch replaces, so I'd vote for porting to 1.5.
We have the following problem with the current solution:
All jars in a web application are loaded via the same classloader, thus resulting in naming conflicts, if multiple jars want to provide "wicket-jar.properties". Wicket's resource loading isn't aware of such a situation and I don't think we should change this.
I've replaced the implementation for default resources instead in trunk:
Now IInitalizers can provide resources (see new InitializerStringResourceLoader), working similar as applications specific resources. Furthermore this solution supports resources for non-components too, see UploadStatusResource.
If no one objects, I'll backport this fix to 1.5.x
This logic is located in ResourceStreamLocator. I don't want to change this class (I'm not sure it is even possible to change it this deep into Wicket's resource loading) but I don't want to reimplement resource loading in JarStringResourceLoader neither.
use getResources() instead..
Seems we still don't have a solution.
ResourceStreamLocator tries to locate "wicket-jar.properties" from classloader:
URL url = classLoader.getResource(path);
But if there are multiple "wicket-jar.properties" files on the class path via the same class loader, it will consider the first found resource only.
Note that this is the case currently only if you run wicket-examples inside Eclipse:
/wicket-core/src/test/java/wicket-jar.properties
/wicket-extensions/src/main/java/wicket-jar.properties
Both resources have the same name and are loaded via the same class loader.
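The difference Igor alludes to with "use getResources() instead" is that ClassLoader.getResource() stops at the first match, while getResources() enumerates every match on the class path. A minimal sketch (a hypothetical helper, not Wicket code):

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

// Hypothetical helper: collect *all* class-path matches for a resource
// name instead of only the first one that getResource() would return.
class ResourceScan {
    static List<URL> findAll(ClassLoader cl, String path) {
        try {
            return Collections.list(cl.getResources(path)); // every match
        } catch (IOException e) {
            return Collections.emptyList();
        }
    }
}
```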
resource loading from wicket-jar is not working. start examples and go to:
get
Caused by: java.util.MissingResourceException: Unable to find property: 'NavigatorLabel' for component: table:topToolbars:toolbars:0:span:navigatorLabel [class=org.apache.wicket.extensions.markup.html.repeater.data.table.NavigatorLabel]
Let's see how many users will be confused
package.properties is not so well known, imo.
meh, now it's inconsistent between package.properties and wicket-jar.properties
now i will be trying wicket-package.properties or jar.properties and wondering why it doesn't work...
Added new JarStringResourceLoader to ResourceSettings on last position.
"wicket-jar.properties", fixed tests, use new string resource loader as last.
+1 for wicket-jar.properties in 1.5.x
Please create a task ticket to prefix package.properties (and others if there are more) for Wicket.next.
wicket-jar.properties and all tests working
Sorry, but I need your opinions on this once again:
After pondering about this issue a few days, I now prefer Bertrand's initial suggestion "wicket-jar.properties", as "jar" matches "package" much better.
The fact that these properties are used as defaults isn't inherent with these files, it's a matter of configuration in ResourceSettings.
Additionally I don't think we should defer prefixing these files with "wicket-" namespace until package properties have been migrated. Why not doing it right now for this new feature?
i think simply default.properties will do, at least it matches package.properties.
if we want to namespace these with wicket- we should do it for both, maybe in wicket.next.
No need for another patch, thank you.
Let's wait a little for the opinion of other devs.
Sure "wicket-default.properties" is fine and I agree with your point. wicket-jar emphasizes the "jar" aspect of the feature while wicket-default does as you described. In my opinion both are good candidates, so if you think wicket-default is better, let's settle on that!
Should I send another patch for this change?
I've taken another look at your patch and I think it's working exactly like it should.
For WICKET-3911 we need another solution: Wicket's string loading is optimized for components, not resources.
One question remains: the file name. IMHO "wicket-default.properties" would stress the fact that these properties are used as defaults only (similar to the term "defaultValue" in Localizer).
WDYT?
In response to the first problem:
In fact this is an intentional feature. Consider the case where I subclass DataTable in my app's jar. I still want to benefit from the strings in wicket-extensions for it.
In response to the second problem:
You are right. Here is what UploadStatusResource uses to lookup a string (simplified):
new StringResourceModel(
"UploadStatusResource.status",
(Component)null,
Model.of(info),
"default value").getString();
Down the line, when the JarStringResourceLoader is called, it doesn't have any practical means of finding which jar should be inspected because the component is null.
I see 2 options, neither of which I like:
1-Walk up the call stack before Localizer and StringResourceModel to find in which jar the getString() call originated. I haven't researched this one much so I am not sure it is even feasible.
2-When the component is null, infer the class whose jar must be inspected based on the resource key. For example, the key from the example above would need to be changed to "org.apache.wicket.extensions.ajax.markup.html.form.upload.UploadStatusResource.status". To allow for "." in the resource key (e.g. status.success), the inferring process would need to walk up the key string along '.' character boundaries.
Is there another way?
Thanks for the patch.
I see two problems at the moment:
- it seems string loaded by JarStringResourceLoader are not isolated, e.g. Component A from jar aa would be able to get strings from jar bb (I'm not sure this is something we want to prevent).
- non-component strings are not supported, i.e. we won't be able to use this enhancement for WICKET-3911 (UploadStatusResource is not a component)
I settled on wicket-jar.properties. Here's why I didn't use wicket.jar.properties :
The ResourceNameIterator did not cooperate with the '.' in the "wicket.jar" path. Because of the special interpretation of ".jar" as an extension, it did not provide a '.' character at the end of "wicket.jar". So instead of looking up "wicket.jar.properties", "wicket.jarproperties" was used.
The test case for a component located in another jar is missing. Short of including a binary jar as a test resource, I can't see how to build a test case for it. I did however test it with wicket-extensions and it works just fine.
proposed modifications
wicket.jar.properties ?
Use wicket as namespace to avoid collisions with someone else's code.
I agree that this is the way to go: "fallback.properties" or "jar.properties" are my suggestions for file name alternatives.
@Bertrand
Yes, resource keys are no longer isolated. I'm aware of this, but this holds for other resource loaders too.
@Igor
I'd expect each initializer to be in its own package. But you have a point here, this solution could break. It's a compromise, but at least it fits nicely into the current resource loading.
https://issues.apache.org/jira/browse/WICKET-4162?focusedCommentId=13136386&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Both Tcl Blend and Jacl support evaluation of events in the Tcl event loop. It is important that developers understand how the Tcl event loop works and how it should be used. A number of Tcl features depend on the Tcl event loop, for example vwait will not work properly if Tcl events are not being processed. The event loop also implements thread synchronization in Tcl, so it must be used correctly in a multi-threaded environment like Java.
The tcl.lang.Notifier class implements event loop processing functionality in both Jacl and Tcl Blend. There is one Notifier object per thread. A thread can contain 1 to N Tcl interpreters; each interpreter in a given thread shares the same Notifier.
In the most simple case, a Thread would contain a single Tcl interpreter and a Notifier object.
The Notifier manages the Tcl event queue, so if the interpreter were to invoke
after commands
like the following:
after 100 {puts hi}
after 200 {puts bye}
The result would look like:
hi
bye
with each line printed after the corresponding delay.
Tcl events are processed using the
Notifier.doOneEvent()
API. Typically, event processing is done in a loop and the
doOneEvent() method is invoked over and over again.
import tcl.lang.*;

public class EventProcessingThread implements Runnable {
    Interp interp;

    public void run() {
        interp = new Interp();
        try {
            while (true) {
                interp.getNotifier().doOneEvent(TCL.ALL_EVENTS);
            }
        } finally {
            interp.dispose();
        }
    }
}
The example above does not queue any events, so when the
doOneEvent() method is invoked the thread
will block waiting for an event to process. A developer
might want to setup an interp like this and then queue
up events to be processed from another thread. The
following example shows how a developer might source a
Tcl file and invoke a Tcl proc defined in that file.
import tcl.lang.*;

public class InterpThreadSetup {
    public static String evalInOtherThread() {
        EventProcessingThread r = new EventProcessingThread();
        Thread t = new Thread(r);
        t.start();

        // Wait for other Thread to get ready
        while (r.interp == null) {
            try { Thread.sleep(10); } catch (InterruptedException e) {}
        }
        final Interp interp = r.interp;
        final StringBuffer result = new StringBuffer();

        TclEvent event = new TclEvent() {
            public int processEvent(int flags) {
                try {
                    interp.eval("source somefile.tcl");
                    interp.eval("cmd");
                    result.append(interp.getResult().toString());
                } catch (TclException ex) {
                    // Handle Tcl exceptions here
                }
                return 1;
            }
        };

        // Add event to Tcl Event Queue in other thread
        interp.getNotifier().queueEvent(event, TCL.QUEUE_TAIL);

        // Wait for event to be processed by the other thread.
        event.sync();
        return result.toString();
    }
}
The example above creates an EventProcessingThread object and then waits for it to get ready. The EventProcessingThread will start and then block inside the doOneEvent() method. A new inner class that extends TclEvent is then created and added to the Tcl event queue via interp.getNotifier().queueEvent(). Finally, the current thread is blocked waiting for the EventProcessingThread to process the event. When the processEvent method is invoked by the EventProcessingThread, a Tcl file will be sourced and a Tcl proc named cmd will be invoked.
These two threads would look like the following:
While the example above may seem complex, each step is required to ensure that events are processed in the correct order and that Tcl commands are invoked in a thread safe way. Readers should note that it is not legal to invoke interp.eval() directly from a thread other than the one processing TclEvents via doOneEvent(). To do so will cause a crash or random exceptions. In the example above, the interp.eval() method is invoked inside the processEvent method, and that method is invoked by a call to doOneEvent() in EventProcessingThread. Using this approach, any number of threads can queue Tcl events and they will be processed in the correct order.
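The queue-and-sync pattern used above is not specific to Tcl. As a rough illustration (plain Java, no Tcl classes; all names here are made up), the same shape can be built from a BlockingQueue and a CountDownLatch:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// Rough plain-Java analogue of queueEvent() + TclEvent.sync():
// one worker thread drains a queue in order; submitters can block
// until their event has been processed.
class QueueAndSync {
    static final class Event {
        final Runnable body;
        final CountDownLatch done = new CountDownLatch(1);
        Event(Runnable body) { this.body = body; }
    }

    static final LinkedBlockingQueue<Event> queue = new LinkedBlockingQueue<>();

    // Analogous to the thread looping on doOneEvent().
    static void startWorker() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    Event e = queue.take();  // blocks waiting for an event
                    e.body.run();            // like processEvent()
                    e.done.countDown();      // releases a sync()-style wait
                }
            } catch (InterruptedException ignored) {
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // Analogous to queueEvent(event, QUEUE_TAIL) followed by event.sync().
    static void submitAndWait(Runnable body) {
        Event e = new Event(body);
        try {
            queue.put(e);
            e.done.await();
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Because a single worker drains the queue, events submitted from any number of threads are processed one at a time, in arrival order, just as the Notifier serializes Tcl events.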
A single thread can contain 1 to N interpreters. A developer could
create multiple Tcl interpreters in Java code via the
Interp()
constructor. A developer could also use the interp command inside Tcl to create
a "child" interp. The following example uses the interp create command to
show how events in two different interpreters are processed by the same Notifier.
set i2 [interp create i2]
$i2 eval {
    set name "i2"
    after 100 {puts "hi $name"}
    after 200 {puts "bye $name"}
}
set name "main"
after 100 {puts "hi $name"}
after 200 {puts "bye $name"}
The code above would result in a thread containing two Tcl interpreters, it would look like the following:
Both interpreters share the same
Notifier object, so
events in the Tcl Event queue for this thread would be
processed in the following order:
Tcl Event Queue
after 100 {puts "hi $name"} (in i2 interp)
after 100 {puts "hi $name"} (in main interp)
after 200 {puts "bye $name"} (in i2 interp)
after 200 {puts "bye $name"} (in main interp)
The output of the above code would look like:
hi i2
hi main
bye i2
bye main
http://fossies.org/linux/misc/jacl1.4.1.tar.gz:a/jacl1.4.1/docs/Topics/EventLoop.html
Deprecated. Please move to.
Hi there, I think I might have misunderstood some concepts of TX in Doobie. I have an aggregate with two services which both create a Kleisli. When the second returns the Left side of the Either, and thus returns an error, I was expecting the whole thing to end with an error and roll back. But, alas, this is not the case.
trait C1 { toF: ConnectionIO ~> F }

val k1: Kleisli[F, C1, Either[Error1, Value1]] = ???
val k2: Kleisli[F, C2, Either[Error2, Value2]] = ???

val r = for {
  r1 <- k1 // returns Right[Value1]; this is where DB write happens
  r2 <- k2 // returns Left[Error2]
} yield r2

r.run(context)
It was my assumption that Doobie would commit the transaction only at the end of the world, on success.
This is using Hikari TX Pool.
@jarek_rozanski:matrix.org The Discord server of Typelevel may be the best channel nowadays...
When second return Left side of either, and thus returning an error, I was expecting the whole thing to end with error and rollback
Well, in that case, if you are intending to use the
Either in a monadic-like action, you would need the
EitherT data type.
However, before you get into using more transformer types, it may help you to first write what is the behaviour you are looking for, as you would write it with plain functions and
flatMap on F.
def k1(c: C1): F[Either[Error1, Value1]] = ???
def k2(c: C2): F[Either[Error2, Value2]] = ???

def r(c: Context): F[Either[Error12, Value2]] =
  k1(c).flatMap {
    case Left(err1) => F.pure(Left(err1))
    case Right(r1) =>
      k2(c).flatMap {
        case Left(err2) => F.pure(Left(err2))
        case Right(r2)  => F.pure(Right(r2))
      }
  }
https://gitter.im/tpolecat/doobie?at=6163e41ff2cedf67f97afdbd
Recently started working with rails; I’m very impressed so far, but I
do have a question -
Say I’m running a blog-type application. I’ve got a Users table, and
a Posts table. Each post on the blog is made by a user. The Post
table has an FK user_id that keeps track of the author of the post.
This is all well and good, but I’ve found myself doing this a lot:
class Post
def get_author_name
user = User.find(user_id)
user.name
end
end
which definitely works for retrieving the author name when I’m
rendering my view, but I’m worried I’m going to be weaving my classes
too tightly, especially given how often this type of scenario can come
up.
Does anyone have a better approach?
Thanks!
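In Rails itself, the idiomatic answer is an association plus delegation: declare `belongs_to :user` on Post and then `delegate :name, to: :user, prefix: :author`, which gives you `post.author_name` without hand-writing the lookup. The same shape in plain Ruby (hypothetical classes, no ActiveRecord) looks like this:

```ruby
require 'forwardable'

# Plain-Ruby sketch of the delegation idea (not ActiveRecord models).
class User
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

class Post
  extend Forwardable
  attr_reader :author

  # post.author_name forwards to post.author.name, so Post never
  # needs to know how User stores its name.
  def_delegator :author, :name, :author_name

  def initialize(author)
    @author = author
  end
end
```

Delegating keeps the coupling in one declared line instead of scattering User lookups through Post's methods.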
https://www.ruby-forum.com/t/help-decoupling-models-with-a-foreign-key-relationship/183232
# Implementation of Linked List in PHP
A linked list is a linear data structure, which contains node structure and each node contains two elements. A data part that stores the value at that node and next part that stores the link to the next node as shown in the below image:

The first node also known as HEAD is usually used to traverse through the linked list. The last node (next part of the last node) points to NULL. The list can be visualized as a chain of nodes, where every node points to the next node.

[Implementation of Singly Linked List](https://www.alphacodingskills.com/cpp/ds/cpp-linked-list.php)
----------------------------------------------------------------------------------------------------
Representation:
---------------
In PHP, a singly linked list can be represented as a LinkedList class, with Node as a separate class. The LinkedList class contains a reference of Node class type.
```
//node structure
class Node {
public $data;
public $next;
}
class LinkedList {
public $head;
//constructor to create an empty LinkedList
public function __construct(){
$this->head = null;
}
};
```
[Create a Linked List](https://www.alphacodingskills.com/cpp/ds/cpp-linked-list.php)
------------------------------------------------------------------------------------
Let us create a simple linked list which contains three data nodes.
```
<?php
//node structure
class Node {
public $data;
public $next;
}
class LinkedList {
public $head;
//constructor to create an empty LinkedList
public function __construct(){
$this->head = null;
}
};
// test the code
//create an empty LinkedList
$MyList = new LinkedList();
//Add first node.
$first = new Node();
$first->data = 10;
$first->next = null;
//linking with head node
$MyList->head = $first;
//Add second node.
$second = new Node();
$second->data = 20;
$second->next = null;
//linking with first node
$first->next = $second;
//Add third node.
$third = new Node();
$third->data = 30;
$third->next = null;
//linking with second node
$second->next = $third;
?>
```
[Traverse a Linked List](https://www.alphacodingskills.com/java/ds/java-linked-list-traversal.php)
--------------------------------------------------------------------------------------------------
Traversing through a linked list is very easy. It requires creating a temp node pointing to the head of the list. If the temp node is not null, display its content and move to the next node using temp->next. Repeat the process till the temp node becomes null. If the temp node is null at the start, then the list contains no items.
The function *PrintList* is created for this purpose. It is a **3-step process**.
```
public function PrintList() {
//1. create a temp node pointing to head
$temp = new Node();
$temp = $this->head;
//2. if the temp node is not null continue
// displaying the content and move to the
// next node till the temp becomes null
if($temp != null) {
echo "\nThe list contains: ";
while($temp != null) {
echo $temp->data." ";
$temp = $temp->next;
}
} else {
//3. If the temp node is null at the start,
// the list is empty
echo "\nThe list is empty.";
}
}
```
[Add a new node at the end of the Linked List](https://www.alphacodingskills.com/cs/ds/cs-insert-a-new-node-at-the-end-of-the-linked-list.php)
----------------------------------------------------------------------------------------------------------------------------------------------
In this method, a new node is inserted at the end of the linked list. For example — if the given List is 10->20->30 and a new element 100 is added at the end, the Linked List becomes 10->20->30->100.
Inserting a new node at the end of the Linked List is very easy. First, a new node with given element is created. It is then added at the end of the list by linking the last node to the new node.

The function *push\_back* is created for this purpose. It is a **6-step process**.
```
public function push_back($newElement) {
//1. allocate node
$newNode = new Node();
//2. assign data element
$newNode->data = $newElement;
//3. assign null to the next of new node
$newNode->next = null;
//4. Check the Linked List is empty or not,
// if empty make the new node as head
if($this->head == null) {
$this->head = $newNode;
} else {
//5. Else, traverse to the last node
$temp = new Node();
$temp = $this->head;
while($temp->next != null) {
$temp = $temp->next;
}
//6. Change the next of last node to new node
$temp->next = $newNode;
}
}
```
The below is a complete program that uses above discussed all concepts of the linked list.
```
<?php
//node structure
class Node {
public $data;
public $next;
}
class LinkedList {
public $head;
public function __construct(){
$this->head = null;
}
//Add new element at the end of the list
public function push_back($newElement) {
$newNode = new Node();
$newNode->data = $newElement;
$newNode->next = null;
if($this->head == null) {
$this->head = $newNode;
} else {
$temp = new Node();
$temp = $this->head;
while($temp->next != null) {
$temp = $temp->next;
}
$temp->next = $newNode;
}
}
//display the content of the list
public function PrintList() {
$temp = new Node();
$temp = $this->head;
if($temp != null) {
echo "\nThe list contains: ";
while($temp != null) {
echo $temp->data." ";
$temp = $temp->next;
}
} else {
echo "\nThe list is empty.";
}
}
};
// test the code
$MyList = new LinkedList();
//Add three elements at the end of the list.
$MyList->push_back(10);
$MyList->push_back(20);
$MyList->push_back(30);
$MyList->PrintList();
?>
```
The output of the above code will be:
```
The list contains: 10 20 30
```
https://habr.com/ru/post/506660/
Feature #14666
nil.any?{} should return false
Description
Hi everyone at ruby/trunk
I encountered
nil.any?
undefined method `any?' for nil:NilClass (NoMethodError)
I fully agree with all of you that nil should be kept slim. But then, on the other hand, the existence quantors are well defined on nil. So nil.any? should always return false.

I know it might make more sense to return NoMethodError. But in the end nil is an object; it's not a null pointer exception any more. We can actually talk with nil. Back in the objc days, talking to nil would always return nil. I'm not sure what happens if nil answers false to any? (currently it throws an exception; code should not depend on that).

I believe nil is a deep concept, and ruby got far ahead with the NilClass. I'd like to suggest that nil.any?{} should return false.
History
Updated by Student (Nathan Zook) about 1 year ago
.any? only makes sense on
Enumerables. There is no end to the methods that we would need to define on
nil if we went this route.
-1
Updated by nobu (Nobuyoshi Nakada) about 1 year ago
any?,
all?, and the family can be defined only on container objects from the meanings.
nil is not a container object.
Updated by eike.rb (Eike Dierks) about 1 year ago
I fully agree with all the commenters.
It boils down to if nil should be Enumerable.
Obviously it should be not. because nil is special.
But then nil means "not in list" or the empty list or the not existing list?
I know this might break some code.
But the other way around, it might fix a lot more code.
Please think of what nil means.
It does not mean: just crash immediately whenever you encounter a nil.
Actually nil does have a very well defined semantics in ruby.
I agree, nil has never been Enumerable
because not in list is not a list
maybe I just want to provoke a discussion on this.
so please feel invited to discuss this topic
I'm looking forward to make Enumerables even more orthogonal
because this where ruby really shines
Please excuse if I came up with discussing nil,
all of you are absolutely right that nil should stay the way it is.
(I'm really happy with that)
So later in the end, we might come up with a better definition of what nil really is.
I'm looking forward for this for ruby 3,
to precisely define all aspects of nil
All of your comments are welcome
Updated by Eregon (Benoit Daloze) about 1 year ago
This could maybe be achieved in user code by adding some kind of Optional/Maybe construct, which could include Enumerable.
Then it would behave either as an empty Array or an Array of one element, based on whether it contains a value.
def Maybe(v)
  v.nil? ? [] : [v]
end

Maybe(method_which_might_return_nil()).any? # but also map, each, ...
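For what it's worth, core Ruby already ships something close to this Maybe helper: Kernel#Array treats nil as an empty collection, so Enumerable predicates can be applied without nil raising. A small sketch (the helper name is made up):

```ruby
# Kernel#Array converts nil to [], a single value to a one-element
# array, and passes arrays through unchanged.
def any_value?(maybe)
  Array(maybe).any?
end
```

So `Array(nil).any?` is false and `Array(42).any?` is true, giving the behaviour proposed in this ticket without touching NilClass.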
Updated by shevegen (Robert A. Heiler) about 1 year ago
Let's let nil remain nil rather than maybe become maybe-nil.
https://bugs.ruby-lang.org/issues/14666
Q3PtrQueue Class Reference
The Q3PtrQueue class is a template class that provides a queue. More...
#include <Q3PtrQueue>
This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.
Inherits: Q3PtrCollection.
Public Functions
Reimplemented Public Functions
- 4 public functions inherited from Q3PtrCollection
Reimplemented Protected Functions
- 2 protected functions inherited from Q3PtrCollection
Detailed Description
The Q3PtrQueue class is a template class that provides a queue.
Q3ValueVector can be used as an STL-compatible alternative to this class.
A template instance Q3PtrQueue<type> is a queue that operates on pointers to type. As in the other Q3PtrCollection classes, current() and remove() are provided; both operate on the head().
See also Q3PtrList and Q3PtrStack.
Member Function Documentation
Q3PtrQueue::Q3PtrQueue ()
Creates an empty queue with autoDelete() set to FALSE.
Q3PtrQueue::Q3PtrQueue ( const Q3PtrQueue<type> & queue )
Creates a queue from queue.
Only the pointers are copied; the items are not. The autoDelete() flag is set to FALSE.
Q3PtrQueue::~Q3PtrQueue ()
Destroys the queue. Items in the queue are deleted if autoDelete() is TRUE.
bool Q3PtrQueue::autoDelete () const
Returns the setting of the auto-delete option. The default is FALSE.
See also setAutoDelete().
void Q3PtrQueue::clear () [virtual]
Reimplemented from Q3PtrCollection::clear().
Removes all items from the queue, and deletes them if autoDelete() is TRUE.
uint Q3PtrQueue::count () const [virtual]
Reimplemented from Q3PtrCollection::count().
Returns the number of items in the queue.
type * Q3PtrQueue::current () const
Returns a pointer to the head item in the queue. The queue is not changed. Returns 0 if the queue is empty.
See also dequeue() and isEmpty().
type * Q3PtrQueue::dequeue ()
Takes the head item from the queue and returns a pointer to it. Returns 0 if the queue is empty.
See also enqueue() and count().
void Q3PtrQueue::enqueue ( const type * d )
Adds item d to the tail of the queue.
See also count() and dequeue().
type * Q3PtrQueue::head () const
Returns a pointer to the head item in the queue. The queue is not changed. Returns 0 if the queue is empty.
See also dequeue() and isEmpty().
bool Q3PtrQueue::isEmpty () const
Returns TRUE if the queue is empty; otherwise returns FALSE.
See also count(), dequeue(), and head().
QDataStream & Q3PtrQueue::read ( QDataStream & s, Q3PtrCollection::Item & item ) [virtual protected]
Reads a queue item, item, from the stream s and returns a reference to the stream.
The default implementation sets item to 0.
bool Q3PtrQueue::remove ()
Removes the head item from the queue, and returns TRUE if there was an item, i.e. the queue wasn't empty; otherwise returns FALSE.
The item is deleted if autoDelete() is TRUE.
See also head(), isEmpty(), and dequeue().
void Q3PtrQueue::setAutoDelete ( bool enable )
Sets the queue to auto-delete its contents if enable is TRUE, and not to delete them if enable is FALSE.
See also autoDelete().
QDataStream & Q3PtrQueue::write ( QDataStream & s, Q3PtrCollection::Item item ) const [virtual protected]
Writes a queue item, item, to the stream s and returns a reference to the stream.
The default implementation does nothing.
Q3PtrQueue::operator type * () const
Returns a pointer to the head item in the queue. The queue is not changed. Returns 0 if the queue is empty.
See also dequeue() and isEmpty().
Q3PtrQueue<type> & Q3PtrQueue::operator= ( const Q3PtrQueue<type> & queue )
Assigns queue to this queue and returns a reference to this queue.
This queue is first cleared and then each item in queue is enqueued to this queue. Only the pointers are copied.
Warning: The autoDelete() flag is not modified. If it is TRUE for both queue and this queue, deleting the two lists will cause double-deletion of the items.
Chapter 1. The First Job In The Cloud
Intro
On a cloud I saw a child,
And he laughing said to me: “...
— William Blake
Nowadays one would have to live in a cave on an uninhabited island lost in the Arctic Ocean to have never heard of "Artificial Intelligence," "Machine Learning," "NLP" and the rest of the buzzword family. Having a master's in Data Science, I feel a bit less excited about tomorrow's AI revolution. That does not mean DS is boring or overhyped; rather, it requires a lot of effort to be put in, and I really like that feeling of doing stuff on the bleeding edge.
As a relatively new industry, ML has not settled into established processes yet. I have heard the opposite about Google and Facebook, but in small businesses we are still considered nerds — the role developers used to play twenty years ago. It's great to see more and more people getting into ML, whether excited by Google slides at a recent conference, or just curious whether neural nets can indeed distinguish between cats and dogs in a photo.
Big corps prepare and share (thank God it's the XXI century) huge datasets, trained models and everything a junior data scientist might use to play in the sandbox. After we have made sure that models trained on Google or Facebook data somehow work and might even predict things (in some cases under some very eccentric circumstances, but it's still so thrilling,) we usually want to try to train our own model ourselves. It takes hours on our own laptop, even though the dataset is limited to tweets from our forty-two friends over the last year. Results usually look promising, but unsatisfactory. There is no way the laptop could process the whole tweet feed for the last decade without exhausting the SSD and blowing up.
That is when we get to the magical words: cloud computing. Or whatever you call it. Let Google's servers explode instead of our lovely laptops, right? Right. Our next job will be in the cloud. Pun intended.
There are not that many resources explaining how one might stop procrastinating, staring at the laptop monitor waiting for the model to be built, and start getting the benefits of living in 2018 AD. There are Google ML, Amazon SageMaker, and Azure Machine Learning Studio, but the documentation everywhere was written by developers for gray-bearded geeks. There is an enormous threshold to executing the very first job in the cloud. This writing is supposed to bridge that gap.
It is not rocket science and there is nothing really complex. Just a few steps to make and several things to take into consideration. It's a breathtaking journey, and once done, the subsequent trips will seem a cakewalk. Let's go.
All the below is written for Google ML Engine, but it might be applied to any cloud computing system almost as is. I will try not to go deeply into details, concentrating more on the whats rather than the hows.
Before We Start
First of all, I want to reference the guide that helped me a lot to move my job into the cloud. The Tensorflow beginner guide on Fuyang Liu's blog is almost perfect, save that it does not cover pitfalls and does not suggest shortcuts where they would have made sense.
Google also has documentation on ML Engine; I wish I were smart enough to use it as a guide. We still need it, though, to quickly look up this and that.
First we need to set up our cloud environment. I refer to the Google guide here because things tend to change over time and I hope they will keep this info up to date.
After we have the account enabled for ML, we should set up our local environment. I strongly advise using Linux; macOS is more or less robust; Windows will make you cry. Since we are about to run jobs in the cloud, I assume you have Python installed and configured. What we need to install is the Google SDK. It's pretty straightforward: download it from the page linked and install it.
Now we need to set up our credentials.
gcloud init should do.
Let’s check it works as expected:
$ gcloud ml-engine models list
Listed 0 items.
Wow. We are all set.
Our First Job
That is the important part. Don't try to upload and run your fancy latest project. It'll fail and you'll get frustrated. Let's enter the cold water slowly. Let's make your first job complete successfully, showing a fascinating green light icon when you check your job's status.
The cloud expects a Python package to be uploaded and the main module to execute to be specified. So, let's go with a pretty simple Python package. Let's assume it's named test1.py and resides in the directory named test1.
# coding: utf-8
import logging
import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--job-dir',
        help='GCS job directory (required by GoogleML)',
        required=True
    )
    parser.add_argument(
        '--arg',
        help='Test argument',
        required=True
    )
    arguments = parser.parse_args().__dict__
    job_dir = arguments.pop('job_dir')
    arg = arguments.pop('arg')

    logging.info("Hey, ML Engine, you are not scary!")
    logging.warn("Argument received: {}.".format(arg))
We use logging because, unlike plain stdout, logs are available through the web interface.
Also you'll need a cloud configuration file on your local machine. It might be placed anywhere; I prefer to have a config file per project. Put test1.yaml in the same directory:
trainingInput:
  scaleTier: CUSTOM
  # 1 GPU
  masterType: standard_gpu
  # 4 GPUs
  # complex_model_m_gpu
  runtimeVersion: "1.9"
  pythonVersion: "3.5"
I am not sure who made that decision, but the default Python version for ML Engine is 2.7; that's why the last two lines are mandatory.
Also you will need to create a file setup.py containing the description of our project. It will be processed by the Google SDK.
from setuptools import find_packages
from setuptools import setup

setup(
    name='test1',
    version='0.1',
    description='My First Job'
)
Well, that is it. Let's try it (this file test1.sh should be at the same level as the package folder):
#!/bin/bash

export BUCKET_NAME=foo-bar-baz-your-bucket-name
export REGION=us-east1
export JOB_NAME="test1_$(date +%Y%m%d_%H%M%S)"
export JOB_DIR=gs://$BUCKET_NAME/$JOB_NAME

gcloud ml-engine jobs submit training $JOB_NAME \
    --staging-bucket gs://$BUCKET_NAME \
    --job-dir gs://$BUCKET_NAME/$JOB_NAME \
    --region $REGION \
    --runtime-version 1.9 \
    \
    --module-name test1.test1 \
    --package-path ./test1 \
    --config=test1/test1.yaml \
    -- \
    --arg=42
NB! You have to specify your own bucket name, and you might need to change the region as well.
I strongly advise creating a shell script to run (schedule/queue) a job from the very beginning. It's much easier to deal with when it comes to modifications.
There are three 'subsections' of arguments there: the first four (staging bucket, job dir, region, runtime version) remain unchanged from job to job. The second holds the job-specific settings (module name, package path, config). The third one (after --) contains the arguments that will be passed to the __main__ of your package.
Go try it:
./test1.sh
Job [test1_20180818_085812] submitted successfully.
Your job is still active. You may view the status of your job with the command

$ gcloud ml-engine jobs describe test1_20180818_085812

or continue streaming the logs with the command

$ gcloud ml-engine jobs stream-logs test1_20180818_085812

jobId: test1_20180818_085812
state: QUEUED
Now you might execute gcloud ml-engine jobs describe ... as suggested. It'll spit out another portion of text. Copy the last link and paste it into your browser's address bar. You should see...
What you should see there I will describe in the next chapter. Happy clouding!
Discussion (5)
If you think that there's no process to ML now, imagine what we had to work with in the late 90s & early 2000s! I spent more than a decade doing NLP/ML/AI, and I actually left that area because I was disillusioned by how little the technology matured in that decade. Now that I've left it behind, it's exploded in popularity. Ah, well.
And I just want to add how excited I was to see a William Blake quote lead off a DEV article. :)
Thank you, Jason.
I think I know exactly what you are talking about, since I started with NLP in the early 2000s. Well, it was computational linguistics back then. I am just very persistent (or stubborn :)).
I am not in the ML field.
Is it mandatory to rely on the cloud service of a GAFA to build a model in a reasonable amount of time? If yes, I find this very concerning.
Also, don't you think that your trained model could be saved by Google, which could reuse it as it pleases without you knowing about it?
Super great first post Yulia!
Re: [soaplite] Adding namespaces to SOAP-ENV::Envelope
Expand Messages
- On Wed, 26 Nov 2003, Byrne Reese wrote:
- On Wed, 26 Nov 2003, Byrne Reese wrote:
> This is an excellent question - something I will have to write about... at
> least more explicitly on majordojo.com. Future versions of SOAP::Lite will
> make this easier. Here is how *I* would address the problem:
> using call() gives you a lot more flexibility.
>
see also
which is very helpful. I have used this technique successfully to set the
namespace of the envelope, because ebXML-MS requires you to specify the
namespace of things like ebXML, xlink, etc. in the Envelope, Header and
Body.
Unfortunately there is still no way to specify a namespace in the
SOAP Body or SOAP Header.
Any help on munging these without writing my own serializer would be much
appreciated.
cheers,
A.
p.s. I have uploaded version 0.6 of SOAP::Data::Builder to CPAN, it is now
more OO and doesn't churn out pages of debug info anymore. Patches welcome
:)
--
Aaron J Trevena - Perl Hacker, Kung Fu Geek, Internet Consultant
AutoDia --- Automatic UML and HTML Specifications from Perl, C++
and Any Datasource with a Handler.
ntfsmount - Read/Write userspace NTFS driver.
ntfsmount device mount_point [-o options]
mount -t fuse.ntfs device mount_point [-o options]

/etc/fstab entry:
device mount_point fuse.ntfs options 0 0
ntfsmount is a read/write userspace NTFS filesystem driver. Technically it connects FUSE with libntfs. ntfsmount features:
· Create/Delete/Move files and directories.
· Hard link files.
· Read and write to normal and sparse files.
· Read compressed and encrypted files.
· Access to special Interix files (symlinks, devices, FIFOs).
· List/Read/Write/Add/Remove named data streams.
· Supports Linux, FreeBSD, NetBSD and Mac OS X.
ntfsmount supports most of the options that mount and FUSE accept (see "man 8 mount" and the FUSE documentation for them). Additionally ntfsmount has some options unique to it; below is a summary of them.

silent, nosilent
    The silent option makes ntfsmount not return an "Operation is not supported" error on chmod and chown operations (this option is on by default). nosilent cancels this.

locale=value
    You can set the locale with this option. It is useful if locale environment variables were not set before partitions from /etc/fstab were mounted. Try this option if you are experiencing problems with displaying national characters in filenames.

uid=value, gid=value
    Set the owner and the group of files and directories. The values are numerical. The defaults are the uid and gid of the current process.

umask=value, dmask=value, fmask=value
    Set the bitmask of the file and directory permissions that are not present. The value is given in octal. Instead of specifying umask, which applies both to files and directories, fmask applies only to files and dmask only to directories.

case_insensitive
    Make ntfsmount treat filenames in the POSIX namespace as case insensitive. See the FILENAME NAMESPACES section for details.

no_def_opts
    By default ntfsmount acts as if some useful options were passed to it (you can get the list of these options by running ntfsmount without any arguments). Passing this option will cancel that behaviour.

noblkdev
    By default ntfsmount tries to mount block devices with the blkdev FUSE option if it has enough privileges. Pass this option if blkdev mount does not work for you for some reason.

force
    Force mount even if errors occurred. Use this option only if you know what you are doing and won't cry about data loss.

relatime, norelatime
    Update file access times relative to modify/change time (relatime, the default), or disable this behaviour (norelatime).

streams_interface=value
    This option controls how the user can access named data streams. It can be set to one of none, windows or xattr. See the DATA STREAMS section for details.

debug
    Makes ntfsmount not detach from the terminal and print a lot of debug output from libntfs and FUSE.

no_detach
    Same as above but with less debug output.
There exist several namespaces for filenames in NTFS: DOS, Win32 and POSIX. Names in the DOS and Win32 namespaces are case insensitive, but names in the POSIX namespace are case sensitive. By default Windows creates filenames in the DOS and Win32 namespaces (with the exception of hard links), but ntfsmount always creates files in the POSIX namespace. Note: with ntfsmount you can create several files in one directory whose names differ only in case, but Windows applications may be confused by this.
All data on NTFS is stored in streams. Every file has exactly one unnamed data stream and can have many named data streams. The size of a file is the size of its unnamed data stream. Windows applications don't consistently allow you to read named data streams, so you are recommended to use tools like FAR, or utilities from Cygwin. If the streams_interface option is set to xattr, then the named data streams are mapped to xattrs and the user can manipulate them using the getfattr and setfattr utilities. E.g.:
setfattr -n user.artist -v "Some Artist" some.mp3
getfattr -d some.mp3
Win32 does not allow characters like '<', '>', '*', '?' and so on in filenames, but NTFS supports any characters except '\0' (NULL) and '/'. You can create filenames with any characters allowed by NTFS using ntfsmount, but be aware that you will not be able to access files whose names contain characters denied by Win32 from Windows.
By default, files and directories are owned by the user and group of the process that mounted the filesystem; this can be changed with the uid and gid options to ntfsmount.
Mount /dev/hda1 to /mnt/ntfs using ntfsmount, passing the locale option:
ntfsmount /dev/hda1 /mnt/ntfs -o locale=be_BY.UTF-8

/etc/fstab entry for the above:
/dev/hda1 /mnt/ntfs fuse.ntfs locale=be_BY.UTF-8 0 0

Unmount /mnt/ntfs:
fusermount -u /mnt/ntfs
If you find a bug please send an email describing the problem to the development team: linux-ntfs-dev@lists.sourceforge.net
ntfsmount was written by Yura Pakhuchiy, with contributions from Yuval Fledel and Szabolcs Szakacsits.
With love to Marina Sapego.
Many thanks to Miklos Szeredi for advice and answers about FUSE.
ntfsmount is part of the ntfsprogs package and is available from: The manual pages are available online at: Additional up-to-date information can be found furthermore at:
Read libntfs(8) for details how to access encrypted files. libntfs(8), ntfsprogs(8), attr(5), getfattr(1)
There is an increasing demand for running and managing cloud native applications on a platform that is able to handle the surge in traffic in a scalable and performant manner.
OpenShift is the go-to platform. There are a number of features being added in each release to meet the ever-changing needs in various domains, including the most recent edge computing. In the previous blog post, we looked at the highlights of the OpenShift 4.3 scale test run at 2000 node scale. In this blog post, we will look at how we, the Performance and Scalability team at Red Hat, pushed the limits of OpenShift 4.5 at scale.
What’s New in OpenShift 4.5?
OpenShift 4.5 is based on Kubernetes 1.18 and it supports installation on vSphere with full stack automation experience, in addition to other cloud providers. The most awaited API Priority and Fairness feature, which prioritizes system requests over application requests to improve the stability and scalability of OpenShift clusters, is enabled by default. This release includes many other features, bug fixes, and enhancements for various components, including Install, Upgrades, Machine API, Cluster Monitoring, Scale, Cluster Operators, and Disaster Recovery as discussed in the release notes. A single blog post will not be enough to walk through all the features in OpenShift 4.5. Let’s jump straight into the scale test run, where the goal was to see if there was a regression in terms of Performance and Scalability when compared to the previous releases, this time with OpenShift SDN and OVNKube as the network plugins. Based on these tests, we provided recommendations to help with running thousands of objects on large-scale clusters.
OpenShift 4.5 Scale Test Run
We started gearing up for the mission after the stable builds with the 4.5 bits were ready. This time, Cerberus acted as the guardian by monitoring the cluster health, alerting us on the Slack channel, and stopping the workload from generating invalid data, which could drive the cluster into an unrecoverable state. With this automation, we did not have to manually monitor the cluster's health status regularly. We added a bunch of features to Cerberus to improve the health checks coverage as well as code changes to reduce the number of API calls to the server and multithreading to speed up the process. The great blog post by Yashashree Suresh and Paige Rubendall covers all the enhancements in detail. The preparation also included making sure the dashboards are graphing all the metrics of interest similarly to what we did for the previous scale test runs. A couple of the metrics include critical alerts, ApiServer Queries/Requests per second, Inflight API requests, Etcd backend DB size, Etcd leader changes (they are quite disruptive when the count is high), resource usage, and fsync (critical for Etcd read and write operations to be fast enough). We also have a new dashboard that tracks the metrics related to API Priority and Fairness.
We started the runs with base clusters consisting of three Masters/Etcds, three Infrastructure nodes to host the Ingress, Registry and Monitoring stack, and three worker nodes and went on to scale up the cluster to 500 nodes. One cluster was using OpenShift SDN as the network plugin while the other was using OVNKube as one of the goals this time was to compare them and find out the bottlenecks. The Performance and Scale tests were run at various node scales (25/100/250/500) as shown below:
Faster Installs
The first stop is at the install. With the introduction of Cluster Etcd Operator, the installs are much faster due to improvements in the bootstrapping process when compared to the OpenShift 4.3 and prior releases. Let’s get into the details as to what changed:
OpenShift 4.3 and Prior Releases
Bootstrap node boots up master nodes on which it installs and waits for a production three node Etcd cluster to form a temporary control plane on bootstrap node before rolling out the production control plane including API server, cluster operators etc. on master nodes.
OpenShift 4.4 and Later Releases
Etcd is started at an early stage on bootstrap node and is used to create a temporary control plane which is then used to roll out the production control plane including API server, operators, and Etcd much earlier. Also, the Cluster Etcd Operator keeps scaling the production Etcd replicas in parallel during the process leading to faster installs.
Infrastructure Node Sizing
Infrastructure components include Monitoring (Prometheus), Router, Registry and Logging (Elasticsearch). It's very important to have enough resources for them to run without getting OOM killed, especially on large and dense clusters. Worker nodes in a typical cluster might not be big enough to host the infrastructure components, but we can overcome this by creating infrastructure nodes using custom MachineSets, and the components can be steered onto them as documented.
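As a sketch of that steering, the cluster monitoring stack can be pinned to infrastructure nodes through the cluster-monitoring-config ConfigMap. This assumes the infra nodes carry the usual node-role.kubernetes.io/infra label, and shows only the Prometheus component for brevity:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
```

Analogous nodeSelector (or toleration) stanzas exist for the router and registry operators, as covered in the infrastructure-node documentation.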
This all sounds good, but what instance types or how many resources are needed, one might ask? Prometheus is more memory-intensive than CPU-intensive, and it is the highest resource consumer on the infrastructure nodes. It is expected to use more than 130GB of memory on a large and dense cluster: 500 nodes, 10k namespaces, 60k pods, 190k secrets, etc.
Taking all the data from performance and scalability tests into consideration, here are the sizing recommendations at various node counts. It is recommended to use machines with 32 CPU cores and 192GB memory to host infrastructure components. Note that the cores and memory are mapped to closest instance types that are typically available to be used in public clouds.
Logging stack: Elasticsearch and Fluentd can be installed as part of day two operation, and it’s recommended to steer Elasticsearch onto the infrastructure nodes as well. Note that the recommendations above do not take the Logging stack into account for sizing.
It is recommended to use memory-optimized instances rather than general-purpose or CPU-optimized ones to save on costs in public clouds, as the infrastructure components are more memory-intensive than CPU-intensive.
Scale Up and Scale Down
Once we had the base cluster ready, we started scaling up the cluster and ran the Performance and Scalability tests at the appropriate node counts. We scaled down the cluster to lower node counts whenever possible to avoid burning cash, and observed that scaling down a cluster whose nodes host a large number of applications might take a long time. Nodes are drained to relocate the applications before terminating them; this process might generate a high number of requests since it's done in parallel, and the clients' default QPS/Burst rates (5/10) might cause throttling, leading to an increase in the time. We are working on making this configurable.
Cluster Maximums
Cluster maximums are an important measure of knowing when the cluster will start experiencing performance and scalability issues and in the worst case become unstable. It is the point where the user and customer should stop pushing the cluster. We have tested and validated the cluster maximums including number of nodes, pods per node, pods per namespace, services per namespace, number of namespaces etc. to help users and customers plan their environments accordingly. Note that these maximums can be achieved, provided the environment is similar in terms of cloud platform, node type, disk size, disk type, and IOPS, especially for Masters/Etcd nodes since Etcd is I/O intensive and latency sensitive. The tested cluster maximums as well as the environment details are documented as part of the Scalability and Performance Guide.
We observed that the 99th percentile of requests to the API took less than 1 second when the cluster was loaded with around 10k namespaces, 60k pods, 190k secrets, 10k deployments, and hundreds of ConfigMaps and Secrets to hit the maximums. The API server and Etcd were stable all the time due to appropriate defaults, including the QPS/Burst rates on the server side and the backend quota in the case of Etcd. Alerts are a good way to understand if there is something wrong with the cluster. There were no critical alerts observed with the cluster in this state.
Can we push beyond the documented services, one might ask? Users and customers might have a need to run more than 5000 services per namespace. It's not possible to do this when using environment variables for service discovery, the default in Kubernetes and OpenShift today. For each active service in the cluster, the kubelet injects environment variables into the pods, and after 5000 services the argument length gets too long, leading to pod/deployment failures. Is there any other way for the pods to discover the services? The answer is yes, and the solution is using a cluster-aware DNS. Service links can be disabled in the pod/deployment spec file when using DNS, avoiding environment variables for service discovery, which will allow us to go beyond 5000 services per namespace as documented.
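A sketch of disabling service links for a single pod: the enableServiceLinks field is part of the standard pod spec, while the pod name and image here are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: svc-discovery-via-dns
spec:
  enableServiceLinks: false   # don't inject per-service environment variables
  containers:
  - name: app
    image: registry.example.com/app:latest
```

With this set, the pod discovers services solely through cluster DNS.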
The documented cluster limits for the major releases are as documented below:
Upgrades
Over-the-air upgrades are one of the most anticipated features that come with the switch to OpenShift 4.x from 3.x. OpenShift 4.x uses an immutable OS, and ssh to the nodes is disabled to avoid altering any files or configuration at both the cluster and node level.
We were able to successfully upgrade a loaded cluster running OpenShift 4.4.13 bits to OpenShift 4.5.4 at 250 node scale using the stable channel. Note that upgrading to 4.4.13 first is a prerequisite/recommended step to get a stable upgrade to OpenShift 4.5.x.
The DaemonSets are rolled out serially with maximum unavailability set to one by default, causing the upgrades to take a long time on a large-scale cluster since a replica runs on each of the nodes. Work is in progress to upgrade 10% of the replicas in parallel for all the DaemonSets which do not impact the cluster/applications when multiple replicas, like node-exporter, node-tuning, and machine-config daemons, are being upgraded at once. This should significantly improve the timing of cluster operator upgrades on a large-scale cluster.
The builds that are part of the stable-4.5 channel can be found here. During upgrades, the Cluster Version Operator (CVO) in the cluster checks with the OpenShift Container Platform update service to see the valid updates and update paths based on current component versions. During the upgrade process, the Machine Config Operator (MCO) applies the new configuration to the cluster machines. It cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and drains them. Once the drain is finished, it applies the new configuration and reboots them. This operation is applied serially, and the number of nodes that can be unavailable at a given time is one by default, meaning it is going to take a long time when there are hundreds or thousands of nodes, because upgrading each node requires draining and rebooting it after applying the latest ostree layer. Tuning the maxUnavailable parameter of the workers' MachineConfigPool should speed up node upgrade time. We need to make sure to set it to a value that avoids causing disruption to any services.
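For illustration, the relevant knob lives on the worker MachineConfigPool. A hedged example that lets up to 10% of workers update in parallel (the value is chosen arbitrarily here; pick one with your disruption budgets in mind):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  maxUnavailable: "10%"   # default is 1; accepts an integer or a percentage
```

On a 500-node cluster this would allow up to 50 workers to be cordoned, drained, and rebooted at once instead of one at a time.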
Other Findings
We ran a number of tests to look at the Performance and Scalability of various components of OpenShift, including Networking, Kubelet, Router, Control Plane, Logging, Monitoring, Storage, concurrent builds, and Cluster Maximums. Here are some of the findings from the cluster with OpenShift SDN as the network plugin, which is the default unless compared with OVNKube:
- In OpenShift 4.5, the HAProxy used for Ingress has been upgraded to 2.0.14, and it provides a router reload performance improvement beneficial for clusters with thousands of routes. It provides connection pooling and threading improvements, thus allowing faster concurrent I/O operations. We observed that the throughput in terms of requests per second is better on clusters with OpenShift SDN than with OVNKube, which led us to file a bug. We are actively working on fixing it.
- No major regression in terms of networking with OpenShift SDN and OVNKube as the network plugin (default) when compared with OpenShift 4.4.
- OpenShift Container Storage v4.5 is stable and scales well. With OCS v4.5, we can achieve higher density of pods with persistent volume claims (PVCs) per node than for cases when cloud provider storage classes are used as storage providers for applications. Also, deleting PVC (and backend storage) is fast and reliable.
- Cluster logging in OpenShift Container Platform 4.5 now uses Elasticsearch 6.8.1 as the default log store.
- The control plane, or rather the API server and Etcd, is stable with thousands of objects and nodes running in the cluster. It is important to use disks with low fsync timings for Master/Etcd nodes, as this is critical for Etcd's performance. Etcd has been bumped to 3.4 and it comes with features including non-blocking concurrent read and lease lookup operations, an improved raft voting process, and a new client balancer. The upstream release announcement is a great source for the various enhancements committed in the Etcd 3.4 version.
- The pod creation rate is seen to be better in case of clusters with OpenShift SDN as the network plugin when compared with OVNKube.
- Build and push operations are stable, and there is not much regression when compared to the previous releases.
We are actively working on improving the performance and scalability of clusters with OVNKube as the networking solution in addition to OpenShift SDN.
Refer to the Scalability and Performance Guide for more information on how to plan the OpenShift environments to be more scalable and performant.
What’s Next?
Stay tuned for the upcoming blog posts on more tooling and automation enhancements, including introducing chaos experiments into the test suite to ensure the reliability of OpenShift during turbulent conditions, as well as highlights from the next large-scale test runs on clusters ranging from 250 to 2000 nodes. As always, feel free to reach out to us on GitHub or the sig-scalability channel on Kubernetes Slack.
Categories
Kubernetes, How-tos, cloud scale, massive scale, scaling, OpenShift 4, OpenShift 4.5
--- Nicodemus <nicodemus at globalite.com.br> wrote:
> >#ifdef _MSC_VER
> >#pragma hdrstop
> >#endif
>
> It is in CVS now (it only writes the pragma and the surrounding #ifdef
> on windows systems).

Not that I am using Pyste at the moment, but if I did I'd want to reuse
the Pyste output on platforms that don't have gcc-xml. I.e. under Windows
I am exclusively working with Visual C++ and I don't even have plain gcc
available. Therefore it would be nice if the Pyste output could be exactly
the same on all platforms. -- Too much wishful thinking?

Ralf
I'm working on the practice problems in the Jumping into C++ book and I'm running into a segmentation fault when running my program.
The problem is as followed: "Modify the program you wrote for exercise 1 so that instead of always prompting the user for a last name, it does so only if the caller passes in a NULL pointer for the last name."
The problem they are talking about is: be similar to the swap function from earlier!)"
Here is my solution to the new problem:
The program allows me to input the first name and then it seg faults. I'm a bit confused and would really appreciate it if someone can point me in the right direction. Thanks!

Code:
#include <iostream>
#include <string>
using namespace std;

string name(string *pFname, string *pLname);

int main(){
    string firstName;
    //string lastName = NULL;
    //cout << "Nice to meet you " << name(&firstName, &lastName);
    cout << "Nice to meet you " << name(&firstName, NULL);
    return 0;
}

string name(string *pFname, string *pLname){
    cout << "Please enter your first name: ";
    getline(cin, *pFname);
    if(pLname){
        cout << "Please enter your last name: ";
        getline(cin, *pLname);
    }
    string fullName = (*pFname) + " " + (*pLname);
    return fullName;
}
|
https://cboard.cprogramming.com/cplusplus-programming/176861-need-more-help-pointers-post1282277.html?s=7464a918d3b5d913dbc0a10c5dcbedc4
|
CC-MAIN-2021-10
|
refinedweb
| 224
| 61.7
|
Electronic Art
September 9, 2016 Leave a comment
I had been wanting to do this for such a long time…
Filed under Arduino, Electronics Tagged with arduino, atmega328, electronics, frame, i2c, LED, opencv, peggy 2.2, peggy board, Python, raspberry pi
March 26, 2016 Leave a comment
Same as the previous post, this is an old project that still doesn’t work properly, but will probably never have the time to fix, so I’ve decided to post the details here, in case I need to refer to them later.
Filed under 3Dprint, Arduino, Bluetooth, Electronics, Robots Tagged with 3d print, accelerometre, arduino, gyroscope, imu, MPU6050, robot, self balanced
March 26, 2016 Leave a comment
This is a quick post to dump some pictures and lessons learned the hard way, when I tried to build a nicer / better self balancing robot.
All this dates back from October 2014, so more than 1 year ago, but I finally gave up on trying to make this work and wanted to keep all this info/details for my own reference.
Filed under 3Dprint, Arduino, Electronics, Robots Tagged with 3D printing, arduino, robot, self-balancing
October 25, 2015 5 Comments
I went back to try and finish this ball balancing project, but after some improvements I started to get frustrated by the fact that the servos kept being “jittery”.
Time for my new and shiny servo tester I thought… ! It turned out I was seeing the same problem.
Filed under Arduino, Electronics, RC Tagged with arduino, electronics, MCU, microcontrollers, oscilloscope, pwm, RC Servo, servo, timer.
January 5, 2014 149 Comments
It’s really been a very long time since I added this on my todo list, and in the past couple of week-ends I finally found the time to do it.
April 17, 2013…
May 31, 2011 3 Comments
I’ve just spent a couple of hours preparing this ultra-sonic ranger and the necessary code to integrate it into my autonomous tank project.
Here’s a quick post briefly explaining the theory and providing some Arduino code for it.
Filed under Arduino, Electronics Tagged with ranger, srf05, ultra sonic
This post shows how to extract the IR Camera from the WiiMote and connect it to a .NET micro framework board (FEZ Domino in my case).
The present blog is simply a port of the C# code to Arduino code, in case anybody is interested…
It’s a very basic file, really showing just the minimum necessary to get it working.
#include <Wire.h>

const byte ADDR_SENSOR = 0xB0 >> 1;

byte buff[2];
byte recvBuff[13];
int x, y, s;

void setup(){
  Serial.begin(115200);
  Wire.begin();
  send(0x30, 0x01);
  send(0x30, 0x08);
  send(0x06, 0x90);
  send(0x08, 0xC0);
  send(0x1A, 0x40);
  send(0x33, 0x03);
  send(0x30, 0x08);
  delay(100);
}

void loop(){
  readData();
  Serial.print(x);
  Serial.print(" / ");
  Serial.print(y);
  Serial.print(" / ");
  Serial.println(s);
  delay(300);
}

void readData() {
  send(0x36);
  Wire.requestFrom(ADDR_SENSOR, (byte)13);
  for(byte i=0; i<13; i++)
    recvBuff[i] = Wire.receive();

  // have no idea why the 1st BLOB start at 1 not 0....
  byte offset = 1;
  x = recvBuff[offset];
  y = recvBuff[offset + 1];
  int extra = recvBuff[offset + 2];
  x += (extra & 0x30) << 4;
  y += (extra & 0xC0) << 2;
  s = (extra & 0x0F);
}

void send(byte val){
  Wire.beginTransmission(ADDR_SENSOR);
  Wire.send(val);
  Wire.endTransmission();
  delay(10);
}

void send(byte val1, byte val2){
  Wire.beginTransmission(ADDR_SENSOR);
  buff[0] = val1;
  buff[1] = val2;
  Wire.send(buff, 2);
  Wire.endTransmission();
  delay(10);
}
May 2, 2011 6 Comments
It’s all about the same project as here, BUT with a different technology: instead of using the FEZ Domino (.NET micro framework based) board, I’ll be using the Lego NXT controller and an Arduino board for low level interaction with the electronics.
Filed under Arduino, Electronics, Embedded MCU, Lego Mindstorms, RC, Robots Tagged with 1/16, airsoft, arduino, atmega, atmega168, bb, i2c, java, lego, lejos, motor controller, nxt, ppm, tank
|
https://trandi.wordpress.com/category/embedded-mcu/arduino-embedded-mcu/
|
CC-MAIN-2017-26
|
refinedweb
| 674
| 55.88
|
A simple model of a forest fire is defined as a two-dimensional cellular automaton on a grid of cells which take one of three states: empty, occupied by a tree, or burning. The automaton evolves according to the following rules, which are executed simultaneously for every cell at a given generation: a burning cell turns into an empty cell; a tree becomes burning if at least one of its neighbours is burning; a tree with no burning neighbour becomes burning with probability $f$ (a lightning strike); and an empty cell becomes occupied by a tree with probability $p$.
The model is interesting as a simple dynamical system displaying self-organized criticality. The following Python code simulates a forest fire using this model for probabilities $p = 0.05$, $f = 0.001$. A Matplotlib animation is used for visualization.
Update (January 2020): As noted by Casey (comment, below), the simulation isn't very realistic for small $f$: fires in a dense forest tend to expand out in a square pattern because diagonally-adjacent trees catch fire as readily as orthogonally-adjacent trees. This can be improved by assigning a probability of less than 1 for these trees to ignite. It isn't obvious what probability to assign, but I use 0.573 below. This is the ratio $B/A$, where $B$ is the area of overlap of a circle of radius $\frac{3}{2}$ at (0,0) (the tree on fire) with one of radius $\frac{1}{2}$ at $(0,\sqrt{2})$ (blue, the diagonally-adjacent tree), and $A$ is the overlap of this first circle with one of radius $\frac{1}{2}$ at $(0,1)$ (red, the orthogonally-adjacent tree). $A$ is equal to the "area" of this adjacent tree, since its circle is contained in the first.
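The 0.573 figure above can be reproduced numerically. The sketch below (not part of the original post) uses the standard circle–circle intersection ("lens") formula to compute both overlap areas and their ratio:

```python
import math

def lens_area(R, r, d):
    """Area of the intersection of circles of radii R and r whose centres are d apart."""
    if d >= R + r:
        return 0.0                       # disjoint circles
    if d <= abs(R - r):
        return math.pi * min(R, r)**2    # smaller circle fully contained
    a = r*r * math.acos((d*d + r*r - R*R) / (2*d*r))
    b = R*R * math.acos((d*d + R*R - r*r) / (2*d*R))
    c = 0.5 * math.sqrt((-d+r+R) * (d+r-R) * (d-r+R) * (d+r+R))
    return a + b - c

# Burning tree: circle of radius 3/2 at the origin.
A = lens_area(1.5, 0.5, 1.0)            # orthogonal tree at distance 1: fully contained
B = lens_area(1.5, 0.5, math.sqrt(2))   # diagonal tree at distance sqrt(2): partial overlap
print(round(B / A, 3))  # -> 0.573
```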
This code is also available on my github page.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib import colors

# Create a forest fire animation based on a simple cellular automaton model.
# The maths behind this code is described in the scipython blog article
# at
# Christian Hill, January 2016.
# Updated January 2020.

# Displacements from a cell to its eight nearest neighbours
neighbourhood = ((-1,-1), (-1,0), (-1,1), (0,-1), (0, 1), (1,-1), (1,0), (1,1))
EMPTY, TREE, FIRE = 0, 1, 2
# Colours for visualization: brown for EMPTY, dark green for TREE and orange
# for FIRE. Note that for the colormap to work, this list and the bounds list
# must be one larger than the number of different values in the array.
colors_list = [(0.2,0,0), (0,0.5,0), (1,0,0), 'orange']
cmap = colors.ListedColormap(colors_list)
bounds = [0,1,2,3]
norm = colors.BoundaryNorm(bounds, cmap.N)

def iterate(X):
    """Iterate the forest according to the forest-fire rules."""

    # The boundary of the forest is always empty, so only consider cells
    # indexed from 1 to nx-2, 1 to ny-2
    X1 = np.zeros((ny, nx))
    for ix in range(1,nx-1):
        for iy in range(1,ny-1):
            if X[iy,ix] == EMPTY and np.random.random() <= p:
                X1[iy,ix] = TREE
            if X[iy,ix] == TREE:
                X1[iy,ix] = TREE
                for dx,dy in neighbourhood:
                    # The diagonally-adjacent trees are further away, so
                    # only catch fire with a reduced probability:
                    if abs(dx) == abs(dy) and np.random.random() < 0.573:
                        continue
                    if X[iy+dy,ix+dx] == FIRE:
                        X1[iy,ix] = FIRE
                        break
                else:
                    if np.random.random() <= f:
                        X1[iy,ix] = FIRE
    return X1

# The initial fraction of the forest occupied by trees.
forest_fraction = 0.2
# Probability of new tree growth per empty cell, and of lightning strike.
p, f = 0.05, 0.0001
# Forest size (number of cells in x and y directions).
nx, ny = 100, 100
# Initialize the forest grid.
X = np.zeros((ny, nx))
X[1:ny-1, 1:nx-1] = np.random.randint(0, 2, size=(ny-2, nx-2))
X[1:ny-1, 1:nx-1] = np.random.random(size=(ny-2, nx-2)) < forest_fraction

fig = plt.figure(figsize=(25/3, 6.25))
ax = fig.add_subplot(111)
ax.set_axis_off()
im = ax.imshow(X, cmap=cmap, norm=norm)#, interpolation='nearest')

# The animation function: called to produce a frame for each generation.
def animate(i):
    im.set_data(animate.X)
    animate.X = iterate(animate.X)

# Bind our grid to the identifier X in the animate function's namespace.
animate.X = X

# Interval between frames (ms).
interval = 100
anim = animation.FuncAnimation(fig, animate, interval=interval, frames=200)
plt.show()
Here is a short animation of the forest fire created by this program.
Stafford Baines 5 years, 6 months ago
Looks interesting. I'll have to peruse it a bit more to see what makes it tick.
Coincidentally I'm working on a vaguely similar idea which I call 'deer park'. N deer are introduced onto a park and move around the park randomly. The brown deer eat the green grass which turns black and remains black for n steps, after which it 'regrows'. Each deer has a weight index W which can increase to a maximum M if grass is available and decreases otherwise. If W reaches zero the deer is no more. The idea is to see if, by varying the various parameters, a stable deer population can be found. Later additions may include predators and possibly the effect of weather - droughts slowing the rate of growth of the grass.
christian 5 years, 6 months ago
Sounds like an interesting project. Let me know what you find!
Adam Wolniakowski 4 years, 6 months ago
Reminds me of the old Wa-Tor simulator designed by A. K. Dewdney:
He used fish vs. sharks populations.
christian 4 years, 6 months ago
That's really interesting – I had not seen it before. I might see if I can adapt this code to model the Wa-Tor system.
christian 4 years, 6 months ago
Here is my implementation of the Wa-Tor population dynamics model: Thanks for the tip-off about it.
Adam Wolniakowski 4 years, 6 months ago
That's really cool!
If you're into cellular automata, you could also take a look into Wireworld:
I am currently trying to make an interactive compiler to work with the Wireworld computer ()
christian 4 years, 6 months ago
I remember Wireworld from some reading I did a long time ago (Martin Gardner?) – your project sounds interesting (and hard...)
I like your JavaScript programs on the Forest Fire model and Langton's Ant as well, by the way. When I get the time, I might try to implement something in Python for this site as well.
Pedro Henrique Duque 2 years, 2 months ago
Is the code complete? I am trying to run it, but I only get the initial image, without fire. Where do you use the probabilities of new tree growth and lightning strike?
christian 2 years, 2 months ago
The code runs OK under my environment (Python 3.5 / Anaconda). The probabilities p and f are used in the function iterate() which picks them up in global scope. If you let me know what Python version and libraries you have installed, I might be able to help.
Sara 3 months, 1 week ago
I have the same problem, I only get the initial image and I work in Python 3 / JupyterLab
christian 3 months, 1 week ago
I can reproduce this in Jupyter Notebook. I do see the animation if I use
%matplotlib notebook
after the imports, though. There's a SO question about this at
Casey 1 year, 3 months ago
Hi Christian. I think I found a bug in your code - the physics looks a little wrong to me. Your "neighborhood" considers any tree next to a tree currently on fire, in any orientation that is a multiple of 45-degrees. This allows your code to spread the fire in subsequent timesteps if fire exists etc. My issue with it is that you are allowing the fire to grow "faster" in the 45-deg, 135-deg, 225-deg, and 315-deg directions. This is because the neighborhood you specify is a square, not a circle. The "edges" of your neighborhood are 1.41ish times closer to the "fire" than your "corners" are if that makes sense. This allows your fire to spread faster in the corner directions (X/Y together) than it does in just X or Y exclusively. I tested this by running your code with f set to 0.00001. At the beginning of the sim, the forest grows to be dense, until one or two lightning strikes occur. Because there are very few sources of fire, it is clear that it grows like a square, when in reality (in a real forest) it would grow like a circle, progressing radially from the strike. So even though the simulation you have in the attached video looks great, the fire actually spreads faster in certain directions than in others.
I'm interested in this problem for a project I'm working on. I'm primarily interested in finding an extremely fast way to run this algorithm once to completion for a pre-defined "forest" with pre-defined fire locations and no random generation of trees or lightning strikes. I would be really interested to hear your thoughts on this. And cool code, thanks for sharing!
christian 1 year, 3 months ago
Hi Casey,
You're right, of course: the orthogonally- and diagonally-adjacent cells are not treated quite correctly. I guess a fix might be to assign a probability < 1 of the fire spreading to a diagonal cell when a given tree is on fire. For example, some factor related to the ratio A/B of A=the overlap of a circle of radius 2r at (0,0) with one of radius r at (0,r) and B=the overlap of this first circle with one of radius r at (r, 1.414r).
In any case, I'm not sure how quantitative this model can really be and how applicable it is to real life (there's no wind, terrain, all the trees are the same, etc.); but I'd be interested to hear how you get on with your project.
Cheers, Christian
|
https://scipython.com/blog/the-forest-fire-model/
|
CC-MAIN-2021-39
|
refinedweb
| 1,706
| 63.9
|
Test Driven Development in .NET Part 1: the absolute basics of Red, Green, Refactor
March 25, 2013 3 Comments
In this series of posts we’ll look at ways of introducing Test Driven Development in a .NET project. I’ll assume that you know the benefits of TDD in general and rather wish to proceed with possible implementations in .NET.
The test project
Open Visual Studio 2012 and create a Blank Solution. Right click the solution and select Add… New Project. Add a new C# class library called Application.Domain. You can safely remove the automatically inserted Class1.cs file. You should have a starting point similar to the following:
This Domain project represents the business logic we want to test.
Add another C# class library to the application and call it Application.Domain.Tests. Delete Class1.cs. As we want to test the domain logic we need to add a reference to the Application.Domain project to the Tests project.
Also, we’ll need to include a testing framework in our solution. Our framework of choice is the very popular NUnit. Right-click References in Application.Domain.Tests and select Manage NuGet Packages. Search for ‘nunit’ and then install the following two packages:
The NUnit.Runner will be our test runner, i.e. the programme that runs the tests in the Tests project.
You should end up with the below structure in Visual Studio:
We are now ready to add the first test to our project.
A test is nothing else but a normal C# class with some specific attributes. These attributes declare that a class is used for testing or that a method is a test method that needs to run when we test our logic. Every testing framework will have these attributes and they can be very different but serve the same purpose. In NUnit a test class is declared with the TestFixture attribute and a test method is decorated with the Test attribute. These attributes help the test runner identify where to look for tests. It won’t just run random methods, we need to tell it where to look.
This means that it is perfectly acceptable to have e.g. helper methods within the Tests project. The test runner will not run a method that is not decorated with the Test attribute. You can have as many test methods within a Test project as you wish.
The test framework will also have a special set of keywords dedicated to assertions. After all we want our test methods to tell us whether the test has passed or not. Example: we expect our calculator to return 2 when testing for ‘1+1’ and we can instruct the test method to assert that this is the case. This assertion will then pass or fail and we’ll see the result in the test runner window.
Add a new class to Tests called DomainTestFixture and decorate it with the TestFixture attribute:
[TestFixture]
public class DomainTestFixture
{
}
You will be asked to add a using statement to reference the NUnit.Framework namespace.
A test method is one which doesn’t take any parameters and doesn’t return any values. Add the first test to the test class:
[TestFixture]
public class DomainTestFixture
{
    [Test]
    public void FirstTest()
    {
    }
}
To introduce an assertion use the Assert object. Type ‘Assert’ within FirstTest followed by a period. IntelliSense will show a whole range of possible assertions: AreEqual, AreNotEqual, Greater, GreaterOrEqual etc. Inspect the available assertions using IntelliSense as you wish. Let’s test a simple math problem as follows:
[Test]
public void FirstTest()
{
    int result = 10 - 5;
    Assert.AreEqual(4, result);
}
…where ‘4’ is the expected value of the operation and ‘result’ is the result by some operation. Imagine that ‘result’ comes from a Calculator application and we want to test its subtraction function by passing in 10 and 5. Let’s say that we make a mistake and expect 10 – 5 to be 4. This test should obviously fail.
In order to run the test in the NUnit test runner go to Tools, Extensions and Updates. Click ‘Online’ and the search for NUnit. Install the following package:
You’ll need to restart Visual Studio for the changes to take effect. Then go to Test, Run, All Tests (Ctrl R, A) which will compile the project and run the NUnit tests. You will receive the outcome in the Test Explorer window:
As expected, our test failed miserably. You’ll see that the expected value was 4 but the actual outcome was 5. You’ll also receive some metadata: where the test failed (FirstTest), the source (DomainTestFixture.cs) and the stack trace.
Go back and fix the assertion:
Assert.AreEqual(5, result);
Select Run All in the Test Explorer and you’ll see that the red turned green and our test has passed. We can move on to a more realistic scenario and we will follow a test-first approach: we’ll write a test for a bit of code that does not even exist yet. The code to be tested will be generated while writing the test.
Let’s add a new class in Tests called FinanceTests.cs. We’ll pretend that we’re working on a financial application that administers shares. It happens often that you’re not sure what to call your test classes and test methods but don’t worry about them too much. Those are only names that can be changed very easily. Let’s add our first Test:
[TestFixture]
public class FinanceTests
{
    [Test]
    public void SharesTest()
    {
    }
}
You’ll see that SharesTest sounds extremely general but remember: in the beginning we may not even know exactly what our Domain looks like. We’ll now test the behaviour of a collection of shares. Add the following bit of code to SharesTest:
List<Share> sharesList = new List<Share>();
This won’t compile obviously at first but we can use Visual Studio to create the object for us. Place the cursor on ‘Share’ and press Ctrl +’.’. You’ll see that a small menu pops up underneath ‘Share’. You can select between Generate class and Generate new type. Select Generate new type. Inspect the possible values in each drop-down menu in the Generate New Type window, they should be self-explanatory. Select the following values and press OK:
You’ll see that a file called Share.cs was created in the Domain project. Next add the following to SharesList:
Share shareOne = new Share();
shareOne.Maximum = 100;
shareOne.Minimum = 13;
sharesList.Add(shareOne);
Again, the code won’t compile first. You can follow the same procedure as with the Share class: place the cursor on ‘Maximum’ and press Ctrl + ‘.’. Select ‘Generate property stub’. Go to Share.cs and you’ll see that an Integer property called Maximum has been added. Do the same with ‘Minimum’. At this point your Share class should look like this:
public class Share
{
    public int Maximum { get; set; }
    public int Minimum { get; set; }
}
You’ll notice that at this point we only added a single Share to our shares list. That’s OK, we’ll start with the simplest possible case. This is always a good idea in TDD: always start with the simplest case which is easy to test and easy to write an assertion for. Example: if you want to test a Calculator you probably won’t start with e + Pi as the first test case but something simpler such as 2 + 3. When your test is complete for the simple cases then you can move on to the more difficult ones.
Next we would like to do something with this Shares collection. Let’s imagine that we’re writing code to group the elements in the collection in some way. So we may write the following code in SharesTest():
Partitioner partitioner = new Partitioner(1);
var partition = partitioner.Partition(sharesList);
This is the time to reflect: what name should we give to the class that will group the list elements? What should the method be called? What type of value should it return? I’m not a great fan of the ‘var’ keyword but in this case it comes in handy as I’m not sure what type of object the Partition method should return. The integer we pass in the Partitioner constructor means that we want to group elements by one. Again, we should stop and reflect: does it make sense to allow users to group items by one? Can they pass in 0 or negative values? Or even int.Max? Should we throw an exception then? These are all rules that you will need to consider, possibly with the product owner or the domain expert.
If we allow users to group the items by one then we should probably test for it. Add the following assertion:
Assert.AreEqual(1, partition.Size);
…meaning that if we instruct the Partitioner to create groups of one then the size of the resulting partition should be 1. Now I have also decided that the Partition() method should return a… …Partition! Update the relevant line as follows:
Partition partition = partitioner.Partition(sharesList);
Using the technique we used before create the Partitioner and Partition classes, the Partition method stub and the Size property stub. Don’t worry about the implementations yet. Make sure that you select the Domain project when creating the classes. The Partitioner class should look as follows:
public class Partitioner
{
    private int p;

    public Partitioner(int p)
    {
        // TODO: Complete member initialization
        this.p = p;
    }

    public Partition Partition(List<Share> sharesList)
    {
        throw new NotImplementedException();
    }
}
Partition.cs:
public class Partition
{
    public object Size { get; set; }
}
Overwrite the type of the Size property to ‘int’.
At this point the projects should compile just fine. Run the test by pressing Ctrl R, A and see what happens. You will of course see that our SharesTest has failed:
We have not implemented the Partition method yet, so we obviously cannot have a passing test.
This is exactly the first thing that we wanted to happen: a failing test.
The Red – Green – Refactor cycle
The Red – Green – Refactor cycle is a fundamental one in TDD. We’re at the Red stage at present as we have a failing test, which corresponds to Step 1: create a failing test. You may wonder why this is necessary: a failing test makes sure that our method under test is testable. It is important to see that it can fail. If a method can never fail then it is not testable. Therefore make sure that you follow this first step in your test creation. The first step involves other important things we have to consider: names of classes, names of methods, parameter types, return types, business rules etc. These are all very important considerations that you need to take into account during this first step.
Step 2, i.e. Green involves a minimalistic implementation of our method stub(s): write just enough to make the test pass, i.e. replace the red light with green. Do not write the complete implementation of the method just yet, that will happen in different stages.
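To make the Green step concrete (this is an illustration, not code from the post), the smallest implementation that turns SharesTest green can simply echo the requested group size back, without any real grouping logic yet:

```csharp
public class Partitioner
{
    private int p;

    public Partitioner(int p)
    {
        this.p = p;
    }

    public Partition Partition(List<Share> sharesList)
    {
        // "Fake it till you make it": just enough to satisfy
        // Assert.AreEqual(1, partition.Size) for a Partitioner(1).
        return new Partition { Size = p };
    }
}
```

The point is not that this fake is useful; it is that the passing test now guards the behaviour while the Refactor step gradually replaces the fake with real grouping logic.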
Step 3 is Refactoring, which is the gradual implementation of the method under test. This is a gradual process where you extend the implementation of the method without changing its external behaviour, i.e. the signature, and run the test over and over again to make sure that the method still fulfills the assertion. Did the change break the tests? Or do the tests still pass? You can come back to your code a year later and still have the tests in place. They will tell you immediately if you’ve broken something.
You may think that all this is only some kind of funny game to produce extremely simple code. We all know that real life code is a lot more complicated: ask a database, run a file search, contact a web service etc. How can those be tested? Is TDD only meant for the easy stuff in memory? No, TDD can be used to test virtually anything – as long as the code is testable. If you follow test first development then testability is more or less guaranteed. There are ways to remove those external dependencies such as Services, Repositories, web service calls etc. and test the REAL purpose of the method. The real purpose of a method is rarely just to open a file – it probably needs to read some data and analyse it.
If, however, you write lengthy implementations at first and then write the tests then testability is at risk. It’s easy to fall into traps that make testability difficult: dependencies, multiple tasks within a method – violating the Single Responsibility Principle, side effects etc. can all inadvertently creep in.
We’ll stop here at the Red phase of the TDD cycle – the next post will look at the Green and Refactor phases.
Cool post. In my opinion the test-first TDD style is not as effective in compiled languages as in dynamic ones, due to the fact that the code needs to be compiled first – whereas in dynamic languages it is enough to save the file and run the tests instantly.
Thanks for your comments. I’m not familiar with dynamic languages, so I have to trust you on that one. Embracing TDD is beneficial in all programming scenarios – the price of having to compile the code is very low compared to angry customers finding your bugs before you.
If your build times are long that is usually a smell that you aren’t managing your dependencies properly, or that your local build script is doing too much at once. You don’t need a solution with 80 projects in it, you don’t need Code Analysis to run on every single local build, etc. A painful RRR cycle because of 20+ second build times is an opportunity to improve your dependency management and tune up your build.
|
https://dotnetcodr.com/2013/03/25/test-driven-development-in-net-part-1-the-absolute-basics-of-red-green-refactor/?replytocom=149
|
CC-MAIN-2022-33
|
refinedweb
| 2,321
| 73.47
|
I have some code that is working fine in my desktop application. I would like to do the same thing in a Windows Mobile application (Motorola MC9090), but I get the error:
"Type 'Microsoft.Win32.SafeHandles.SafeFileHandle' is not defined."
I tried to add Namespace: Microsoft.Win32.SafeHandles but then it says:
"Namespace or type specified in the Imports 'Microsoft.Win32.SafeHandles' doesn't contain any public member or cannot be found. Make sure the namespace or the type is defined and contains at least one public member. Make sure the imported element name doesn't use any aliases."
and the problem remains.
I tried to add every possible reference but there's still no SafeHandles...
Please help...
|
http://forums.codeguru.com/printthread.php?t=525545&pp=15&page=1
|
CC-MAIN-2014-52
|
refinedweb
| 116
| 60.72
|
US7657581B2 - Metadata management for fixed content distributed data storage - Google Patents
Info
- Publication number
- US7657581B2 US11190402 US19040205A
- Authority
- US
- Grant status
- Grant
- Patent type
-
- Prior art keywords
- region
- backup
- node
- copy
- authoritative
This application is based on and claims priority from application Ser. No. 60/592,075, filed Jul. 29, 2004.
This application is related to application Ser. No. 10/974,443, filed Oct. 27, 2004, titled “Policy-Based Management of a Redundant Array of Independent Nodes.”
Prior art archival storage systems typically store metadata for each file as well as its content. Metadata is a component of data that describes the data. Metadata typically describes the content, quality, condition, and other characteristics of the actual data being stored in the system. In the context of distributed storage, metadata about a file includes, for example, the name of the file, where pieces of the file are stored, the file's creation date, retention data, and the like. While reliable file storage is necessary to achieve storage system reliability and availability of files, the integrity of metadata also is an important part of the system. In the prior art, however, it has not been possible to distribute metadata across a distributed system of potentially unreliable nodes. The present invention addresses this need in the art.
An archival storage cluster of preferably symmetric nodes includes a metadata management system that organizes and provides access to given metadata, preferably in the form of metadata objects. Each metadata object may have a unique name, and metadata objects are organized into regions. Preferably, a region is selected by hashing one or more object attributes (e.g., the object's name) and extracting a given number of bits of the resulting hash value. The number of bits may be controlled by a configuration parameter. Each region is stored redundantly. A region comprises a set of region copies. In particular, there is one authoritative copy of the region, and zero or more backup copies. The number of backup copies may be controlled by a configuration parameter, which is sometimes referred to herein as a number of “tolerable points of failure” (TPOF). Thus, in a representative embodiment, a region comprises an authoritative region copy and its TPOF backup copies. Region copies are distributed across the nodes of the cluster so as to balance the number of authoritative region copies per node, as well as the number of total region copies per node.
According to a feature of the present invention, a region “map” identifies the node responsible for each copy of each region. The region map is accessible by the processes that comprise the metadata management system. A region in the region map represents a set of hash values, and the set of all regions covers all possible hash values. As noted above, the regions are identified by a number, which is derived by extracting a number of bits of a hash value. A namespace partitioning scheme is used to define the regions in the region map and to control ownership of a given region. This partitioning scheme preferably is implemented in a database.
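As a rough illustration of the hash-and-extract-bits scheme described above (the hash function, the bit count and all names below are assumptions for the sketch, not taken from the patent):

```python
import hashlib

# A region is selected by hashing an object attribute - here its name -
# and extracting a fixed number of bits of the resulting hash value.
REGION_BITS = 4  # configuration parameter; 2**4 = 16 regions in the map

def region_for(object_name: str) -> int:
    digest = hashlib.md5(object_name.encode()).digest()
    value = int.from_bytes(digest, "big")
    return value & ((1 << REGION_BITS) - 1)  # keep the low REGION_BITS bits

# Every possible hash value falls into exactly one region, so the set of
# all regions covers the whole namespace.
print(region_for("ExternalFile:/archive/report.pdf"))
```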
A region copy has one of three states: “authoritative,” “backup” and “incomplete.” If the region copy is authoritative, all requests to the region go to this copy, and there is one authoritative copy for each region. If the region copy is a backup (or an incomplete), the copy receives update requests (from an authoritative region manager process). A region copy is incomplete if metadata is being loaded but the copy is not yet synchronized (typically, with respect to the authoritative region copy). An incomplete region copy is not eligible for promotion to another state until synchronization is complete, at which point the copy becomes a backup copy.
According to the invention, a backup region copy is kept synchronized with the authoritative region copy. Synchronization is guaranteed by enforcing a protocol or “contract” between an authoritative region copy and its TPOF backup copies when an update request is being processed. For example, after committing an update locally, the authoritative region manager process issues an update request to each of its TPOF backup copies (which, typically, are located on other nodes). Upon receipt of the update request, in this usual course, a region manager process associated with a given backup copy issues, or attempts to issue, an acknowledgement. The acknowledgement does not depend on whether the process has written the update to its local database. The authoritative region manager process waits for acknowledgements from all of the TPOF backup copies before providing an indication that the update has been successful. There are several ways, however, in which this update process can fail. If the backup region manager cannot process the update, it removes itself from service. If either the backup region manager process or the authoritative region manager process dies, a new region map is issued. By ensuring synchronization in this manner, each backup copy is a “hot standby” for the authoritative copy. Such a backup copy is eligible for promotion to being the authoritative copy, which may be needed if the authoritative region copy is lost, or because load balancing requirements dictate that the current authoritative region copy should be demoted (and some backup region copy promoted).
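The update contract just described can be sketched as follows (class and method names are hypothetical; networking, failure handling and region-map reissue are deliberately left out):

```python
class RegionCopy:
    """A backup region copy: applies updates and acknowledges them."""
    def __init__(self):
        self.log = []

    def apply(self, update):
        self.log.append(update)
        # The acknowledgement does not depend on the local write having
        # completed; a copy that cannot process an update would instead
        # remove itself from service.
        return True

class AuthoritativeRegion(RegionCopy):
    """The authoritative copy: commits locally, then waits for all acks."""
    def __init__(self, backups):
        super().__init__()
        self.backups = backups  # the TPOF backup copies, on other nodes

    def update(self, update):
        self.apply(update)  # commit locally first
        # Only report success once every TPOF backup has acknowledged.
        return all(b.apply(update) for b in self.backups)

backups = [RegionCopy() for _ in range(2)]  # TPOF = 2
auth = AuthoritativeRegion(backups)
print(auth.update("put metadata-object"))  # -> True
```

Because every backup has acknowledged every update before success is reported, each backup is a "hot standby" that can be promoted if the authoritative copy is lost.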
This design ensures high availability of the metadata even upon a number of simultaneous node failures. The present invention preferably is implemented in a scalable disk-based archival storage management system, preferably a system architecture based on a redundant array of independent nodes. In one illustrative embodiment, the software stack (which may include the operating system) on each node is symmetric, whereas the hardware may be heterogeneous.
As described in co-pending application Ser. No. 10/974,443, a distributed software application executed on each node captures, preserves, manages, and retrieves digital assets.
In an illustrated embodiment, the ArC application instance executes on a base operating system 336, such as Red Hat Linux 9.0. The communications middleware is any convenient distributed communication mechanism.
As noted above, internal files preferably are the “chunks” of data representing a portion of the original “file” in the archive object, and preferably they are placed on different nodes.
Metadata Management
According to the present invention, metadata objects may include the following types:
- ExternalFile: a file as perceived by a user of the archive;
- InternalFile: a file stored by the Storage Manager; typically, there may be a one-to-many relationship between External Files and Internal Files.
- ConfigObject: a name/value pair used to configure the cluster;
- AdminLogEntry: a message to be displayed on the administrator UI;
- MetricsObject: a timestamped key/value pair, representing some measurement of the archive (e.g. number of files) at a point in time; and
- PolicyState: a violation of some policy.
Of course, the above objects are merely illustrative and should not be taken to limit the scope of the present invention.
Each metadata object may have a unique name that preferably never changes. According to the invention, metadata objects are organized into regions. A region comprises an authoritative region copy and a number of backup copies, as will be described in more detail below.
Each region may be stored redundantly. As noted above, there is one authoritative copy of the region, and zero or more backup copies. The number of backup copies is controlled by the metadata tolerable points of failure (or “TPOF”) configuration parameter, as has been described. Preferably, region copies are distributed across all the nodes of the cluster so as to balance the number of authoritative region copies per node, and to balance the number of total region copies per node.
A backup region copy is kept synchronized with the authoritative region copy by enforcing a given protocol (or “contract”) between an authoritative region copy and its TPOF backup copies. This protocol is now described. As noted above, if either the backup region manager process or the authoritative region manager process dies, a new region map is issued.
To prove that synchronization is maintained, several potential failure scenarios are now explained in more detail. In a first scenario, assume that each backup RGM, after acknowledging the update request, successfully carries out the request locally in its associated database. In this case, the authoritative and backup schemas are in sync. In a second scenario, assume that the authoritative RGM encounters an exception (e.g., a Java IOException) from a backup RGM. This means that the backup RGM may have failed. In such case, the authoritative RGM requests that the MM leader send out a new map, or the MM leader, noticing that the backup node has failed, initiates creation of a new map on its own. (A “new map” may also be simply an updated version of a current map, of course). As part of this process, the interrupted update, which is still available from the authoritative RGM, will be applied to remaining backup region copies, and to a new incomplete region copy. In a third scenario, assume that the backup RGM encounters an exception while acknowledging the backup request to the authoritative RGM. This means that the authoritative RGM may have failed. In this case, because it noticed the failure of the node containing the authoritative region copy, the MM leader sends out a new map. If the update was committed by any backup RGM, then that update will be made available to all region copies when the new map is distributed. This may result in a false negative, as the update is reported to the caller as a failure, yet the update actually succeeded (which is acceptable behavior, however). If the update has not been committed by any backup RGM, then the update is lost. The update is reported to the caller as a failure. In a fourth scenario, assume a backup RGM fails to process the backup request after acknowledging receipt. In this case, as noted above, the backup RGM shuts itself down when this occurs. 
To guarantee this, a shutdown is implemented upon the occurrence of an unexpected event (e.g., a Java SQLException, or the like). This ensures that a backup region goes out of service when it cannot guarantee synchronization. In such case, the normal map reorganization process creates a new, up-to-date backup copy of the region on another node. The update has been committed in at least the authoritative region copy, so that the new backup region copy will be synchronized with the authoritative region copy. In a fifth scenario, assume that the authoritative RGM crashes even before it performs the local commit. In such case, of course, there is no metadata update and the request fails.
The above scenarios, which are not exhaustive, illustrate how the present invention guarantees synchronization between the authoritative region copy and its TPOF backup copies.
As has been described, the region map describes the ownership of each copy of each region.
While this approach may be used, it requires every region to be split at the same time. A better technique is to split regions incrementally. To do this, the namespace partitioning scheme splits regions in order, starting at region 0 and ending at the last region of the current level. A region is split by using one more bit of the hash value.
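The incremental split described above behaves like linear hashing. The following sketch (Python; the function and parameter names are illustrative, not from the patent, and the choice of MD5 as the hash is an assumption) shows how one more bit of the hash value is consumed for regions that have already been split:

```python
import hashlib

def region_for(name: str, level: int, split_pointer: int) -> int:
    """Map a metadata object name to a region number.

    Linear-hashing style sketch of the incremental split: regions are
    split in order starting at region 0, so regions below
    `split_pointer` have already been split and use one more bit of
    the hash value. Names here are illustrative, not from the patent.
    """
    # Hash the object's unique name to a stable integer.
    h = int.from_bytes(hashlib.md5(name.encode()).digest(), "big")
    region = h & ((1 << level) - 1)            # low `level` bits
    if region < split_pointer:                 # this region was already split:
        region = h & ((1 << (level + 1)) - 1)  # use one more bit
    return region
```

Once every region of the current level has been split (`split_pointer` reaches the end), the level as a whole has effectively increased by one.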
There is no requirement that the number of regions correspond to the number of nodes. More generally, the number of regions is uncorrelated with the number of nodes in the array of independent nodes.
Thus, according to one embodiment, control over regions is accomplished by assigning metadata objects to regions and then splitting regions incrementally. The region copies (whether authoritative, backup or incomplete) are stored in the database on each node. As has been described, metadata operations are carried out by authoritative RGMs. When a node fails, however, some number of region copies will be lost. As has been described, availability is restored by promoting one of the backup copies of the region to be authoritative, which can usually be done in a few seconds. During the short interval in which the backup is promoted, requests submitted by an MMC to the region will fail. This failure shows up as an exception caught by the MMC, which, after a delay, causes a retry. By the time the request is retried, however, an updated map should be in place, resulting in uninterrupted service to MMC users. As has been described, this approach relies on copies (preferably all of them) of a region staying synchronized.
Thus, all updates to a given region are serialized through its authoritative copy. By contrast, in a general-purpose distributed database, different updates may occur at different sites, and it is possible for some update sites, but not others, to run into problems requiring rollback. In the present invention, within a copy of a region, requests preferably are executed in the same order as in all other copies, one at a time. It is not necessary to abort a transaction, e.g., due to deadlock or due to an optimistic locking failure. Typically, the only reason for request execution to fail is a failure of the node, e.g., a disk crash, the database running out of space, or the like. The metadata management system, however, ensures that any such failure (whether at the node level, the region manager level or the like) causes reassignment of region copies on the failed node; thus, the integrity of the remaining region copies is guaranteed, as will be described in more detail below.
The following section provides additional detail on maintaining backup regions according to the present invention.
As already noted, the backup scheme relies on one or more (and preferably all) of the backup copies of a region staying synchronized such that each backup copy is a “hot standby.” Backup regions are maintained as follows. A metadata object is created or modified by sending a request to an authoritative RGM. Request execution typically proceeds as follows:
- update local database
- commit database update
- for each backup region manager:
  - send backup request to backup region manager
  - wait for acknowledgement of backup request
- return control to caller
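The steps above can be sketched as follows (Python; the database and backup-channel objects are illustrative stand-ins, not the patent's actual interfaces):

```python
class AuthoritativeRGM:
    """Sketch of the update path of an authoritative region manager.

    Note the ordering enforced by the contract: the local commit
    happens first, then every backup is sent the request and must
    acknowledge receipt (not commit) before control returns.
    """

    def __init__(self, db, backup_channels):
        self.db = db                    # local metadata database (stand-in)
        self.backups = backup_channels  # one channel per TPOF backup RGM

    def execute_update(self, update):
        # 1-2. Apply and commit the update locally first.
        self.db.apply(update)
        self.db.commit()
        # 3. Send the update to every backup and wait for each
        #    acknowledgement of receipt.
        for channel in self.backups:
            channel.send(update)
            channel.wait_for_ack()
        # 4. Only now report success to the caller.
        return "ok"
```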
There is no timeout specified for the backup request. An exception from a backup RGM indicates that the remote node has failed. The administrative engine notices this exception and informs the MM leader of the failure. This causes a new incomplete region copy to be created elsewhere. A new region map describing that incomplete region copy is then distributed. The authoritative RGM, therefore, can ignore the exception.
The receiver of a backup request acknowledges the request and then applies the requested updates to its local database. A last received backup request is kept in memory for use in recovering a backup region. Only the last request is needed, so when a new backup request is received and committed, the previous one may be discarded.
For a backup region copy to be used as a hot standby, it must be kept synchronized with the authoritative region copy. As has been described, the scheme provides a way to synchronize with the most recent update before any promotion of a region copy (from backup to authoritative). Thus, after acknowledging receipt of a backup request (if it can), the backup RGM either commits the update to the local database or removes itself from service. In an illustrative embodiment, the backup RGM can remove itself from service by bringing down a given process, such as a JVM, or by bringing down just the region. Thus, according to the scheme, if a backup RGM exists, it is synchronized with the authoritative RGM.
The following provides additional implementation details of the metadata management system of the present invention.
Intra- and inter-node communications may be based on a one-way request pattern, an acknowledged request pattern, or a request/response pattern. In a one-way request pattern, a request is sent to one or multiple receivers. Each receiver executes the request. The sender does not expect an acknowledgement or response. In an acknowledged request pattern, a request is sent to one or more receivers. Each receiver acknowledges receipt and then executes the request. In a request/response pattern, a request is sent to one or more receivers. Each executes the request and sends a response to the sender. The responses are combined, yielding an object summarizing request execution. The acknowledged request pattern is used to guarantee that backup region copies are correct. These communication patterns are used for various component interactions between the MMC and RGM, between RGMs, between MMs, and between system components and an MMC.
As mentioned above, the MM leader creates a region map when a node leaves the cluster, when a node joins the cluster, or when an incomplete region copy completes loading. In the first case, when a node leaves a cluster, either temporarily or permanently, the regions managed by the MM on that node have to be reassigned. The second case involves the situation when a node returns to service, or when a node joins the cluster for the first time; in such case, regions are assigned to it to lighten the load for the other MMs in the cluster. All the regions created on the new node are incomplete. These regions are promoted to be backups once they have finished loading data. The third situation occurs when an incomplete region completes loading its data. At this time, the region becomes a backup. A map creation algorithm preferably ensures that a given node never contains more than one copy of any region, that authoritative regions are balanced across the cluster, and that all regions are balanced across the cluster. The latter two constraints are necessary, as all RGMs process every metadata update and thus should be spread across the cluster. Authoritative RGMs also process retrieval requests, so they should also be well-distributed.
The following provides additional details regarding a map creation algorithm.
When a MM leader needs to create a new map, the first thing it does is a region census. This is done using the request/response message pattern, sending the request to the MM on each node currently in the cluster. The request/response pattern preferably includes an aggregation step in which all responses are combined, forming a complete picture of what regions exist in the archive. The information provided by the region census preferably includes the following, for each region copy: the node owning the region copy, the last update processed by the region manager (if any), and the region timestamp stored in the region's database schema. The region timestamps are used to identify obsolete regions, which are deleted from the census. This guarantees that obsolete regions will be left out of the map being formed, and also that the obsolete region schemas will be deleted. In most cases, an obsolete region copy will have a lower map version number than the map number from a current region copy. This may not always be the case, however. Assume, for example, that a new map is being created due to a node crash. The region census discovers the remaining regions and forms a new map. If the failed node restarts in time to respond to the region census, the node will report its regions as if nothing had gone wrong. However, these regions may all be out of date due to updates missed while the node was down. The solution to this problem is to examine the region timestamps included with the region census. Each region copy reports its region timestamp, which represents the timestamp of the last update processed. Suppose the maximum timestamp for a region is (v, u). Because region copies are kept synchronized, valid timestamps are (v, u) and (v, u−1). This identifies obsolete regions, whether the failed region has a current or obsolete map version number. There is no danger that a node will fail, return to service quickly, and then start processing requests based on obsolete regions. 
The reason for this is that the node will not have a region map on reboot, and RGMs do not exist until the map is received. Requests from an MMC cannot be processed until RGMs are created. So a failed node, which restarts quickly, cannot process requests until it gets a new map, and the new map will cause the node to discard its old regions.
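The timestamp rule described above can be sketched as a small filter (Python; representing a timestamp as a (map version, update number) tuple follows the (v, u) notation above, but the function name and data layout are illustrative):

```python
def find_obsolete(copies):
    """Given (node, timestamp) pairs for the copies of one region,
    return the nodes whose copies are obsolete.

    Per the rule above: if the maximum timestamp is (v, u), then the
    only valid timestamps are (v, u) and (v, u - 1), because region
    copies are kept synchronized to within one in-flight update.
    A copy is obsolete regardless of whether its map version number
    is current or old.
    """
    v, u = max(ts for _, ts in copies)   # tuples compare lexicographically
    valid = {(v, u), (v, u - 1)}
    return [node for node, ts in copies if ts not in valid]
```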
After the region census, an initial region map is generated as follows. If the region census turns up no regions at all, then the cluster must be starting for the first time. In this case, authoritative region owners are assigned first. For each assignment, the algorithm selects a least busy node. The least busy node is the node with the fewest region copies. Ties are resolved based on the number of authoritative copies owned. After authoritative region owners are assigned, backup region owners are assigned, striving to balance authoritative and total region ownership. The new map is sent to all MMs, which then create the regions described by the map.
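A minimal sketch of the least-busy-node selection (Python; the data layout is an assumption for illustration):

```python
def least_busy_node(nodes):
    """Pick the node to receive the next region copy.

    `nodes` maps node id -> (total_copies, authoritative_copies).
    The least busy node has the fewest total region copies; ties are
    resolved by the fewest authoritative copies owned, as described
    above. A sketch only; names are illustrative.
    """
    return min(nodes, key=lambda n: (nodes[n][0], nodes[n][1]))
```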
Once the cluster has started, map changes preferably are implemented by doing the following map transformations, in order: (1) if a region does not have an authoritative copy (due to a node failure), promote a backup; (2) if a region has more than TPOF backups, delete excess backups; (3) if a region has fewer than TPOF backups, (due to a node failure, or due to a promotion to authoritative), create a new incomplete region copy; (4) rebalance ownership; and (5) rebalance authoritative ownership. Step (4) involves finding the busiest node and reassigning one of its regions to a node whose ownership count is at least two lower. (If the target node's ownership count is one lower, then the reassignment does not help balance the workload.) Preferably, this is done by creating a new incomplete region. This operation is continued as long as it keeps reducing the maximum number of regions owned by any node. Step (5) involves finding the node owning the largest number of authoritative regions, and finding a backup whose authoritative ownership count is at least two lower. This step swaps responsibilities, e.g., by promoting the backup and demoting the authoritative. This operation is continued as long as it keeps reducing the maximum number of authoritative regions owned by any node.
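Transformation (4), rebalancing total ownership, can be sketched as follows (Python; a simplification that tracks only per-node region counts, not which regions move or the incomplete-copy mechanics):

```python
def rebalance(ownership):
    """Sketch of map transformation (4): repeatedly reassign one region
    from the busiest node to the least-loaded node, but only while the
    gap is at least two -- a gap of one would not reduce the maximum
    number of regions owned by any node.

    `ownership` maps node id -> region count; returns the new counts.
    """
    counts = dict(ownership)
    while True:
        busiest = max(counts, key=counts.get)
        target = min(counts, key=counts.get)
        if counts[busiest] - counts[target] < 2:
            return counts            # no move would help; done
        counts[busiest] -= 1         # reassign one region
        counts[target] += 1
```

Transformation (5) follows the same shape, but compares authoritative-copy counts and swaps roles (promote/demote) rather than moving copies.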
When a node leaves the cluster, then steps (1) and (3) fill any gaps in the region map left by the node's departure. Steps (4) and (5) are then used to even out the workload, if necessary.
When a node joins the cluster, steps (1)-(3) do not change anything. Step (4), in contrast, results in a set of incomplete regions being assigned to the new node. When an incomplete region completes loading its data, it notifies the MM leader. The map promotes the incomplete region to a backup. Step (5) then has the effect of assigning authoritative regions to the new node.
When an incomplete region finishes its synchronization, it converts to a backup region and informs the MM leader. The MM leader then issues a new map, containing more than TPOF backups for at least one region. Step (2) deletes excess backup regions, opting to lighten the burden on the most heavily loaded MMs.
When a MM receives a new map, it needs to compare the new map to the current one, and for each region managed by the MM, apply any changes. The possible changes are as follows: delete a region, create a region, promote a backup region to authoritative, promote an incomplete region to backup, and demote an authoritative region to backup. Regarding the first type of change, load balancing can move control of a region copy from one node to another, resulting in deletion of a copy. In such case, the network and database resources are returned, including the deletion of the schema storing the region's data. The second type of change, creating a region, typically occurs in a new cluster as authoritative and backup regions are created. Thereafter, only incomplete regions are created. Region creation involves creating a database schema containing a table for each type of metadata object. Each region's schema contains information identifying the role of the region (authoritative, backup or incomplete). The third type of change, promotion from backup to authoritative, requires modification of the region's role. The other change types, as their names imply, involve changing the region's role from incomplete to backup, or from authoritative to backup.
An incomplete region starts out with no data. As noted above, it is promoted to a backup region when it is synchronized with the other copies of the region. This has to be done carefully because the region is being updated during this synchronization process. A fast way of loading large quantities of data into a Postgres database is to drop all indexes and triggers, and then load data using a COPY command. In a representative embodiment, one complete procedure is as follows: (1) create an empty schema; (2) for each table, use two COPY commands, connected by a pipe; the first COPY extracts data from a remote authoritative region, the second one loads the data into the local incomplete region; (3) add triggers (to maintain external file metrics); and (4) add indexes. Like a backup region, an incomplete region is responsible for processing backup requests. A backup region implements these requests by updating the database. An incomplete region cannot do this due to the lack of triggers and indexes. Instead, the backup requests are recorded in the database. Once the data has been loaded and the triggers and indexes have been restored, the accumulated update requests are processed. More updates may arrive as the update requests are being processed; these requests are en-queued and are processed also. At a given point, incoming requests are blocked, the queue is emptied, and the region switches over to processing backup requests as they come in. Once this switch occurs, the region announces to the MM leader that it can be promoted to a backup region.
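The queue-then-switch behaviour of an incomplete region can be sketched as follows (Python; a simplification in which the bulk load and the database are abstracted away, and all names are illustrative):

```python
import threading
from collections import deque

class IncompleteRegion:
    """Sketch of an incomplete region copy's handling of backup requests.

    While data is being bulk-loaded, backup requests are recorded in a
    queue rather than applied. When loading finishes, the queue is
    drained and the region switches to applying requests directly; the
    lock makes the switch atomic, so no update is lost.
    """

    def __init__(self):
        self.lock = threading.Lock()
        self.loading = True
        self.queue = deque()
        self.applied = []   # stand-in for the local database

    def on_backup_request(self, update):
        with self.lock:
            if self.loading:
                self.queue.append(update)    # record for later
            else:
                self.applied.append(update)  # apply directly

    def finish_loading(self):
        # Drain the queue and flip the switch under the lock: incoming
        # requests block, so none can slip between drain and switch.
        with self.lock:
            while self.queue:
                self.applied.append(self.queue.popleft())
            self.loading = False  # now eligible for promotion to backup
```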
Some interactions between MM components have to be carefully synchronized as will now be described.
A map update must not run concurrently with request execution as it can lead to a temporarily incorrect view of the metadata. For example, suppose an update request arrives at an RGM just as the RGM is being demoted from authoritative to backup. The request could begin executing when the demotion occurs. There will be a local update, and then backup requests will be issued. The RGM, however, will receive its own backup request (which is incorrect behavior), and the new authoritative region will receive the backup request. Meanwhile, a request for the object could go to the new authoritative region before the backup request had been processed, resulting in an incorrect search result. As another example, when an incomplete region is loading its data, backup requests are saved in a queue in the database. When the load is complete, the en-queued requests are processed. Once they have all been processed, update requests are processed as they are received. The switch from executing accumulated requests to executing requests as they arrive must be done atomically. Otherwise, updates could be lost. These problems are avoided by creating a lock for each RGM, and preferably the execution of each request by an RGM is protected by obtaining the RGM's lock.
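The per-RGM lock described above can be sketched as follows (Python; illustrative names, with a role change standing in for a map update):

```python
import threading

class RGM:
    """Sketch: each RGM has a lock that serializes request execution
    against map updates, so a request never runs while the RGM's role
    (authoritative, backup, incomplete) is changing underneath it."""

    def __init__(self, role):
        self.lock = threading.Lock()
        self.role = role
        self.handled = []

    def execute_request(self, request):
        with self.lock:
            # The role is stable for the duration of the request.
            self.handled.append((self.role, request))

    def apply_map_change(self, new_role):
        with self.lock:
            # No request can be mid-flight while the role flips.
            self.role = new_role
```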
The present invention provides numerous advantages. Each metadata manager of a node controls a given portion of the metadata for the overall cluster. Thus, the metadata stored in a given node comprises a part of a distributed database (of metadata), with the database being theoretically distributed evenly among all (or a given subset of) nodes in the cluster. The metadata managers cooperate to achieve this function, as has been described. When new nodes are added to the cluster, individual node responsibilities are adjusted to the new capacity; this includes redistributing metadata across all nodes so that new members assume an equal share. Conversely, when a node fails or is removed from the cluster, other node metadata managers compensate for the reduced capacity by assuming a greater share. To prevent data loss, metadata information preferably is replicated across multiple nodes, where each node is directly responsible for managing some percentage of all cluster metadata, and copies this data to a set number of other nodes.
When a new map is generated, the MM leader initiates a distribution of that map to the other nodes and requests suspension of processing until all nodes have it. Ordinary processing is resumed once the system confirms that all of the nodes have the new map.
The present invention facilitates the provision of an archive management solution that is designed to capture, preserve, manage, and retrieve digital assets. The design addresses numerous requirements: unlimited storage, high reliability, self-management, regulatory compliance, hardware independence, and ease of integration with existing applications. Each of these requirements is elaborated below. Having described our invention, what we now claim is as follows.
https://patents.google.com/patent/US7657581B2/en
PlaySoundAtInterval
From Unify Community Wiki
Latest revision as of 20:45, 10 January 2012
Author: Will Preston (Sigma)
Description
This script allows a sound to be played at intervals specified by the user. NOTE: The script does not take into account the length of the sound itself; for example, to play an 11-second sound with 60 seconds between plays, you would set the interval to 71 (60 sec + 11 sec).
Usage
Place this script in a C# file named PlaySoundAtInterval.cs and drag it onto the object with the AudioSource you wish it to affect. The AudioSource does not need to have any sound loaded into it; this script handles that as well. You may encounter problems if the AudioSource has looped playback and/or Play On Awake enabled, so it is recommended that you disable those options for the AudioSource in question.
C# - PlaySoundAtInterval.cs
using UnityEngine;
using System.Collections;

// PlaySoundAtInterval.cs
// Copyright (c) 2010-2011 Sigma-Tau Productions ().
// This script is free to be used in both free and commercial projects as long as this
// notice is retained.

[RequireComponent (typeof (AudioSource))]
public class PlaySoundAtInterval : MonoBehaviour
{
    // Public variables

    // Will the sound play on startup?
    public bool playAtStartup = false;

    // The interval of time (in seconds) that the sound will be played.
    public float interval = 3.0f;

    // The sound itself.
    public AudioClip clipToPlay;

    // Private variables

    // A modifier that will prevent the script from running in the event of an error.
    private bool disableScript = false;

    // The amount of time that has passed since the last initial playback of the sound.
    private float trackedTime = 0.0f;

    // Tracks to see if we've played this at startup.
    private bool playedAtStartup = false;

    // Use this for initialization.
    void Start ()
    {
        if (interval < 1.0f)
        {
            // Make sure the interval isn't 0, or we'll be constantly playing the sound!
            Debug.LogError("Interval base must be at least 1.0!");
            disableScript = true;
        }
    }

    // Update is called once per frame.
    void Update ()
    {
        if (!disableScript)
        {
            // Play the sound when the scene starts.
            if (playAtStartup && !playedAtStartup)
            {
                audio.PlayOneShot(clipToPlay);
                playedAtStartup = true;
            }

            // Increment the timer.
            trackedTime += Time.deltaTime;

            // Check to see that the proper amount of time has passed.
            if (trackedTime >= interval)
            {
                // Play the sound, reset the timer.
                audio.PlayOneShot(clipToPlay);
                trackedTime = 0.0f;
            }
        }
    }
}
http://wiki.unity3d.com/index.php?title=PlaySoundAtInterval&diff=prev&oldid=13050
Talk:Main Page/Archive23
Recaptcha
I like this.. who's idea was it? CPWebmaster 01:07, 1 February 2009 (EST)
- Subject, you sentence lacks one. - User
01:09, 1 February 2009 (EST)
- I is subject. He lacks a direct object in a form other than pronoun. DickTurpis 01:14, 1 February 2009 (EST)
- What is Recaptcha? - User
01:17, 1 February 2009 (EST)
- The intended subject was recaptcha. Recaptcha is that thing that makes you type words to make an account. As a matter of fact, I liked it so much I just added it to CP. CPWebmaster 01:19, 1 February 2009 (EST)
- Okay. How does that affect the Main page? - User
01:25, 1 February 2009 (EST)
- It affects EVERYTHING! 67.242.66.251 02:01, 1 February 2009 (EST)
- And: adding Recaptcha totally BORKED the site, did it? Har de har!!!!!
Sockpuppet
I notice a number of people putting "sockpuppet" or "vandal" notices on otherwise virgin user pages. As this has the effect of de-redding the special pages entries and rendering them less noticeable, I believe that it is counter-productive and would suggest that the practice be discontinued.
(Toast) and marmalade 14:47, 2 February 2009 (EST)
I suggest to vaporize any examples of such.--Ipatrol 10:34, 17 February 2009 (EST)
- I agree. It's usually the {{vcat}} template, which is redundant since the vandal category was deleted. Vandals with redlinked names are much easier to spot in RC. Wèàšèìòìď
Methinks it is a Weasel 22:39, 18 February 2009 (EST)
RW should have the best creationism bullshit on the web2.0
RW should have a thorough and detailed account of all that ar cretard (with thorough and detailed explanations wikilinked throughout). Evolution would make Ken nut, but Evolution (no jokes) would set homskollars straight (e.g. "There are no transitional fossils").
Places like TalkOrigins are on to something, but the BS needs to be displayed in an uninterrupted form that cretards are used to. It would illustrate to people of irrational persuasions that not only do we completely understand their arguments, we can even portray them clearer and more aesthetically. They need to think "this is better than CP/ICR/DrDino and it makes sense as long as I don't read anything that is readily available". With hour powers of rationality, we could easily created the best creationist argument on the google.
I don't think it would be very hard to steal CP's correctness, especially in light of their recent bout with technology. Any interest in such a project? Neveruse513 00:55, 3 February 2009 (EST)
- No, not really, but thanks for trying.... ħuman
01:47, 3 February 2009 (EST)
- Neveruse513 if you want to set homeschoolers straight about evolution, there is still the Conservapedia:Evolution side-by-side that has been left hanging for about 2 months. - User
20:09, 3 February 2009 (EST)
HELP!!!
Would people please stop crashing the wiki? --"ConservapediaUndergroundDiodecan't spell criticism (yes I can!) 19:04, 3 February 2009 (EST)
- What does this have to do with the main page? - User
19:48, 3 February 2009 (EST)
- I would like to point out that the main page is the first thing you see once the wiki finally comes back up after an hour of being in limbo. . . --"ConservapediaUndergroundDiodecan't spell criticism (yes I can!) 19:49, 3 February 2009 (EST)
- So you just post on the first thing you see do you? That explains a lot. - User
20:06, 3 February 2009 (EST)
- Oh, and the crashing affects the entire wiki. So it does belong on the main page. Where- oh wait! Sorry, I didn't see the tech support. Moving there. --"ConservapediaUndergroundDiodecan't spell criticism (yes I can!) 20:07, 3 February 2009 (EST)
What the hell crashing are you prattling on about--it's been working fine for me all day. TheoryOfPractice 20:11, 3 February 2009 (EST)
- It was offline for a few minutes or so. My fault, TMT sorted it out. Wèàšèìòìď
Methinks it is a Weasel 20:13, 3 February 2009 (EST)
(EC) Is the Main Page talk now literally just for talk about the Main Page itself? Wèàšèìòìď
Methinks it is a Weasel 20:13, 3 February 2009 (EST)
- Well tech problems go to tech support, what is happening in the world can go to WIGO world and just random prattling can be taken to the saloon. I suppose mainpage and major announcements go here. - User
20:15, 3 February 2009 (EST)
- The main page talk should be about talk about the wiki- which tech support can fall under. It can go either place. --"ConservapediaUndergroundDiodecan't spell criticism (yes I can!) 20:16, 3 February 2009 (EST)
- No it is for discussing the main page and other major things not what ever random idea pops into your head today. - User
20:18, 3 February 2009 (EST)
- And the wiki crashing is random how? And weasel, it was gone for at least 45 minutes. And this is the second time its happened. --"ConservapediaUndergroundDiodecan't spell criticism (yes I can!) 20:21, 3 February 2009 (EST)
- Trent is aware of the problem and is going to fix it. Now run along and find something else to do. - User
20:25, 3 February 2009 (EST)
- From a practical standpoint, CUR, posts on tech support will probably be seen by Trent before they would on other pages. This is more to make Trent's life easier than anything. CorryI'll be in the hospital bar. 20:34, 3 February 2009 (EST)
(unident) As said above: I didn't see the tech support desk. Leave me alone. Rusty-spotted catspeed. --"ConservapediaUndergroundDiodecan't spell criticism (yes I can!) 20:35, 3 February 2009 (EST)
- No need to take offense, I'm just describing the rationale. CorryI'll be in the hospital bar. 20:41, 3 February 2009 (EST)
[edit] Anti-Intellectualism
Does anyone else think that an article about anti-intellectualism would be welcome, rather than the redirect to George W. Bush that's in place now? Given that this is the sort of thing RationalWiki was made to oppose, I think an article about it makes sense. --Anonymous
- Most definitely. We should also have an extensive see also section: George W. Bush, Dick Cheney, ASchlafly, Karajou, TK, etc. etc. --"ConservapediaUndergroundDiodecan't spell criticism (yes I can!) 21:13, 3 February 2009 (EST)
[edit] Login Warning
Perhaps some sort of warning on the main page telling users to log in before they post would be wise? Remember - Big Brother Teacake is watching you! EddyP 14:20, 4 February 2009 (EST)
- Beware of TK. Wahahahahah! -User:TK/sig
- Edit this page. That's the warning above the edit box when you're not logged in. -- Nx talk 14:43, 4 February 2009 (EST)
[edit] New Meta-category(gories?)
- This discussion was moved to Saloon bar.
[edit] Antivax bombshell
Moved to Saloon Bar. Come have a drink.
[edit] Thread that's actually about the main page shock
There's something about the random featured articles that's been bugging me for a while. I don't like the way it cuts off in mid-article at a random point (with one it cuts off during a section title and looks really scrappy). What I propose is that for every cover story, a template featuring a precis of the article appears here, with a link to the main article. I'm quite happy to do the grunt work of all this. What say you? Totnesmartin 15:37, 13 February 2009 (EST)
- Sure, sounds good. Right now it just grabs the first "x" (I think it's 800) characters, and we noinclude templates and images - although those characters still get counted. One way around this that would keep things "dynamic" (ie, if article improves, so does "snippet") would be to drop the character thing and transclude the entire article - but to put "includeonly" tags around the appropriate chunk of text. What do you think of that? The tags could go in first, so it wouldn't break the main page while we were doing it. ħuman
17:04, 13 February 2009 (EST)
- Sounds good to me. Wèàšèìòìď
Methinks it is a Weasel 17:13, 13 February 2009 (EST)
- that would be easier. Each article has a general intro, and we just transclude that to the main page. Let's do it that way then. I might have to use my bot account to fiddle with some things if that's ok with everyone. Totnesmartin 17:24, 13 February 2009 (EST)
I've increased the grab from 750 to 1000 characters - it looks OK on my steam 800x600 monitor, but I won't be doing any more tonight, my connection is being a twat again. I'll pick this up in the morning. Totnesmartin 17:47, 13 February 2009 (EST)
- Ok, what are these "includeonly" tags? <includeonly> shows up in the text, so it isn't that. I'll try noincludes around the non-intro text. Totnesmartin 06:50, 15 February 2009 (EST)
- And using noinclude tags doesn't work either: the <noinclude> tag shows on the main page, as does the text within it. I'm tempted to go back to the template idea. Totnesmartin 07:04, 15 February 2009 (EST)
- It's the opposite of noinclude, everything enclosed in includeonly will only appear when the template is included, not on the template page. For example: Template:User VoteHope uses includeonly to cat the user page, but not the template page. -- Nx talk 07:10, 15 February 2009 (EST)
- And what, precisely, do I have to type? Totnesmartin 07:46, 15 February 2009 (EST)
- It doesn't seem to work, I've tried noinclude. Includeonly would hide the text from the article itself and it wouldn't prevent the rest from showing on the main page, so it's useless. I'm looking for a different solution. It's possible to only include the lead section, but some articles, for example Behe:The Edge of Evolution, Interview, don't have one. -- Nx talk 07:54, 15 February 2009 (EST)
- I'll go with the templates idea then. Totnesmartin 08:00, 15 February 2009 (EST)
- I've got an idea. Give me a few minutes to test it. -- Nx talk 08:03, 15 February 2009 (EST)
- Fire away - I've got distracted by my Smash Hits book again... Totnesmartin 08:11, 15 February 2009 (EST)
Ok, here's how it goes: you create a new section at the end of the cover story article called cover. Write the stuff that should appear on the main page under that, and only that section will be included. Then includeonly the cover section so it doesn't appear in the article itself. See User:Nx/sandbox and User:Nx/sandbox2. Although if you're going to do this, it might be a better idea to simply write a lead section for those articles that don't have one. -- Nx talk 08:15, 15 February 2009 (EST)
- Even better: For articles that have a lead section,
simply add <includeonly>==cover==</includeonly> directly above the lead; the lead section will be included. For articles that don't, add <includeonly>==cover== stuff that should appear on main page </includeonly> directly above the first section. See User:Nx/sandbox, User:Nx/sandbox2 and User:Nx/sandbox3 -- Nx talk 08:38, 15 February 2009 (EST)
- Argh, you guys are right, I forgot how includeonly works again. Surely there is a tag that will do what we want??? It would be nice to keep this simple... ħuman
17:28, 15 February 2009 (EST)
- yes, I'm thoroughly confused by the whole thing now. Totnesmartin 17:42, 15 February 2009 (EST)
As I thought, it looks like onlyinclude is what we want to use. It's the "missing" version of inclusion control I imagined surely existed. ħuman
17:55, 15 February 2009 (EST)
- Onlyinclude does transclude the intro - but it makes the intro invisible in the actual article. Sod it, I'm going with the templates. Aylesburymartin (making a rare appearance) 08:36, 16 March 2009 (EDT)
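For reference, here is how the three inclusion-control tags argued over above behave according to the MediaWiki documentation (the example text is made up; behaviour on any given install may vary, as the thread shows):

```wikitext
Ordinary text: appears both on the page and wherever the page is transcluded.

<noinclude>Appears only on the page itself, never in transclusions.</noinclude>

<includeonly>Appears only in transclusions, never on the page itself.</includeonly>

<onlyinclude>If any onlyinclude block is present, transclusion picks up
only this text; unlike includeonly, it is still displayed on the page itself.</onlyinclude>
```

This is also why the ==cover== trick above wraps only the heading, not the text, in includeonly: the heading then exists only for transclusion, while the lead stays visible in the article.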
[edit] Evolution headline
Quote: "The percentage of people in the country who accept the idea of evolution has declined from 45 in 1985 to 40 in 2005. Meanwhile the fraction of Americans unsure about evolution has soared from 7 per cent in 1985 to 21 per cent last year." To me, ignoring the fact that the time periods are slightly different, this means that most of the increase in undecideds has come from those who were previously rejecting evolution. In other words the percent rejecting evolution has declined from 48 in 1985 to 39 in 2005. (sorry I meant to put this on the WIGO page)
- I'm sure we've discussed this survey before, perhaps in a Ken context. Totnesmartin 14:18, 16 February 2009 (EST)
[edit] Cascading protection?
May I add cascading protection to the main page?--Ipatrol 10:56, 17 February 2009 (EST)
- Yes, but only during nighttime hours. Neveruse513 13:56, 18 February 2009 (EST)
- I don't think we really use cascade protection at all. We very rarely protect anything. This page has basic protection because vandals were habitually blanking it, but vandalising the individual templates happens much more rarely. Wèàšèìòìď
Methinks it is a Weasel 11:26, 17 February 2009 (EST)
When ex-users from Conservapedia come here, the main page is our welcome mat. If a template on it has been vandalised, it does not reflect well on us to the user. Besides, only sysops even know about those templates.--Ipatrol 22:31, 17 February 2009 (EST)
- Our most basic policy is we don't protect anything unless it is essential. Unless someone is attacking the templates then why protect them. - User
22:33, 17 February 2009 (EST)
- Yeah, Ipatrol, sure. Except that those templates have never been vandalized. Also, we enjoy reverting vandalism, protection takes away that fun. ħuman
22:59, 17 February 2009 (EST)
- Who's "we"? I think it's a pain in the arse. Totnesmartin 13:46, 18 February 2009 (EST)
- I guess since I don't get to do it often I consider it fun. I don't get to because all you people east of my alleged time zone get to it first, I guess. ħuman
22:02, 18 February 2009 (EST)
- From our new(ish) RationalWiki: Community Standards:
- RationalWiki generally does not protect pages. The community feels that, given the ease with which vandalism can be reverted, protection is unnecessary. (An exception is that some sysops do protect their own userpage and signatures for peace of mind.)--Bobbing up 03:03, 19 February 2009 (EST)
[edit] This is ridiculous.
After reading only a few pages of this wiki, I am appalled at the stances you take, even daring to suggest that homosexuality is even SLIGHTLY natural. I am disgusted on that entire article you have created that encourages teenagers to have sex, rather than teaching them the true path of waiting until the time is right. It is for that reason that I call upon God to bless you, and help you see the errors of your ways, and understand that it is only through accepting the narrow path of Him that you can be saved. I beg you, although I cannot force you, to turn around before it is too late and you meet His judgement, and face your eternity in Hell. — Unsigned, by: Shatoyaah C / talk / contribs
- Yes. . . are you a Poe? If not, are you Andrew Schlafly? And see if you learn a bit of tolerance. --"CURtalk 19:30, 18 February 2009 (EST)
- Why don't you write an essay explaining your position so we can <s>ridicule</s> discuss it? - User
19:39, 18 February 2009 (EST)
- If homosexuality isn't even "slightly natural", then where did the lesbian whiptail lizards come from? Bio-engineered by the Evil Liberal Science Conspiracy? Still, even if you are a troll, I hope you continue to post here. Here I can reply. --Gulik 19:46, 18 February 2009 (EST)
- If you look for anything too much, you can find an example of almost anything. Why God willed those animals to do what they do is beyond me, but I have no doubt that we will receive that answer in due time. — Unsigned, by: Shatoyaah C / talk / contribs
- Shatoyaah, I appreciate your concern for my fictional soul and its alleged immortality, and I hope that your prayers for us make at least you feel better. As for myself, I am appalled at some of your claims and will work with whatever real means are at my disposal to enlighten you. ħuman
21:59, 18 February 2009 (EST)
- If there's an answer to find, we'll get it by SCIENCE!, not by praying for God to download The Truth into our brains. And please sign your posts with --~~~~. --Gulik 23:40, 18 February 2009 (EST)
If we could ban for ideology, this user would be next in the firing line. Go to cp: Shatoyaah C, this is not the website for you, further posts like this may be trolling--Ipatrol 23:06, 18 February 2009 (EST)
- And yet, we do not ban for ideology. I find Shatoyaah's beliefs to be woefully incorrect and pointless, but Shatoyaah is entitled to those beliefs. We can disagree, we can argue, but we do not ban just because someone has a differing mindset from us. If Shatoyaah wanted to edit an article, so long as it is not outright wandalism, I would not mind. If Shatoyaah came back just to argue and was willing to just debate, I would not mind. Freedom of belief, or lack of, even if I completely disagree with the belief. Javasca₧ In Soviet Russia, the demoralizing tomfoolery stinks YOU!
- Ipatrol, please refrain from telling people to leave the site, you do not speak on our behalf - in fact, you are in direct opposition to the main page's own invitation: "We welcome contributors, and encourage those who disagree with us to register and engage in constructive dialogue." ħuman
23:25, 18 February 2009 (EST)
- Unlike CP, when we say 'we don't ban for ideology', we MEAN it. We let Heart of Gold post.... --Gulik 23:40, 18 February 2009 (EST)
- The day we ban for ideology there'll be some sparks flying... ArmondikoVnarchist 16:09, 19 February 2009 (EST)
What I ask is for this user to agree to disagree, nothing more. Filling pages with pointless discussion with more heat than light helps no one. However, meaningful discussion and opposition is helpful in allowing us to evaluate and strengthen our arguments, like a devil's advocate. So I do welcome this user if he/she can provide us with rational arguments for his/her positions.--Ipatrol 18:48, 20 February 2009 (EST)
- And if they can't? Personally I welcome anyone wishing to debate us in a polite manner, even if I dislike their views. Perhaps through debate they may reconsider. Perhaps you might. Totnesmartin 18:53, 20 February 2009 (EST)
- I'm keen for people to debate us in a funny manner. 21:45, 20 February 2009 (EST)
- After reading the first section of this thread, I am appalled at the stance of the complainant, That someone who (apparently) believes in the supernatural boogies of God & Jesus should have the temerity to criticise the lifestyle of anyone else is beyond all poeishness (new word?). This is why godbotherers really get up my parts and make me want to smite them with plagues of frogs, toads & locusts. You cannot have a debate with anyone whose main argument is "godsedit" (that's a variant of goddidit). This is the sort of person who behaves as they do for fear of punishment, rather than through humanity and they make me sick.
and marmite 21:59, 20 February 2009 (EST)
[edit] Ta very much.
Dear RationalWiki,
Thank you. Thank you ever so much.
There's a nice big 'StumbleUpon' button I have in Firefox. And I click it every day (perhaps more than I should). And it just so happened that today the magic button took me to this place. Specifically, your article on 'Russell's Teapot'. 'What's this?' I thought, as a stranger in this strange wikiland. I read the article, and I soon smiled.
I have not smiled quite as widely as I did today for quite some time.
Thank you all. Thank you for hosting, writing and supporting a place where rational thought is promoted. Thank you, for making a wiki out of it. Thank you for having a (rather excellent) sense of humour on top, too.
But most of all, thank you for making me smile. =D
A very fine day to you all. --213.106.47.175 10:10, 19 February 2009 (EST)
- You're most welcome. Why not sign up and join in?
We don't (usually) bite.--Bobbing up 10:20, 19 February 2009 (EST)
- Yup, and signing up comes with a free group orgy on December 21st, 2012!
Javasca₧ In Soviet Russia, the demoralizing tomfoolery stinks YOU!
- Is that new accounts only? Totnesmartin 15:50, 19 February 2009 (EST)
- And for a limited period? Damn. I'll have to make do with my normal real-life orgies now... ArmondikoVnarchist 16:10, 19 February 2009 (EST)
[edit] Appearance
I've started using the Firefox speeddial extension which shows thumbnails of commonly accessed pages.
It strikes me that we have Main Page in big letters and it looks rather clunky compared with the other two. Although CP has a blank area which wastes space. I suggest that we remove the redundant Main Page text and move things north like WP.
ГенгисOur ignorance is God; what we know is science. 06:36, 21 February 2009 (EST)
- IIRC WP uses some kind of javascript hack or it's a new feature in MediaWiki which we don't have. I'll see what can be done. -- Nx talk 06:39, 21 February 2009 (EST)
- I was wrong, it's a css hack, I've added it to our Common.css. Clear your cache and refresh -- Nx talk 06:42, 21 February 2009 (EST)
- Great work. There's still a bit too much white space at the top for my taste but definitely an improvement. Thanks.
ГенгисOur ignorance is God; what we know is science. 06:58, 21 February 2009 (EST)
- Well spotted, I don't think I ever noticed it before. But does anyone know why the corners are rounded in FF but not IE, Opera or Safari? ArmondikoVnarchist 07:06, 21 February 2009 (EST)
- Because it's advanced CSS stuff that they don't support. FF supports it only because of XUL, but in a nonstandard way. -- Nx talk 07:09, 21 February 2009 (EST)
- I've added border-radius in addition to -moz-border-radius for the more css3 compliant browsers, it should now work on Safari I think. Could someone test it please? -- Nx talk 07:17, 21 February 2009 (EST)
- Excellent, I think that's just about right.
ГенгисOur ignorance is God; what we know is science. 07:28, 21 February 2009 (EST)
- And I can finally get the rounded box thingie on Safari. Word time Phantom! 07:41, 21 February 2009 (EST)
- This is the internet equivalent of a weekend at home rearranging the furniture. (I hate it when my wife says "I've been thinking".)
ГенгисOur ignorance is God; what we know is science. 08:56, 21 February 2009 (EST)
- Yeah, I had always noticed that Firefox rendered the rounded corners and Safari didn't. Being a diehard Mac user, I naturally assumed this was due to the superiority of the Mac platform. But now I too see the rounded corners. We are indeed Thinking Different! DogP 12:53, 22 February 2009 (EST)
- Next thing you'll be telling me Mac is superior because there's only one mouse button. Oh wait... :P -- Nx talk 13:02, 22 February 2009 (EST)
- What do you mean "only"? The rounded corners were probably not showing because they don't look very good, so OSX was making web design decisions for me, which I welcome. DogP 13:10, 22 February 2009 (EST)
- A joke, Apple claims one mouse button is better than two, because it's less confusing (BS), fortunately they'll soon replace the mouse with touch control. -- Nx talk 13:13, 22 February 2009 (EST)
- Ahem; Apple has been selling 3-button mouses since 2005, and the laptops can be configured to do a secondary click whenever you put both fingers on the trackpad and click. Word time Phantom! 13:19, 22 February 2009 (EST)
- I conveniently forgot those little details -- Nx talk 13:21, 22 February 2009 (EST)
- Come on, Nx, can't you spot a little tweaking from me?! Nah, I've been rocking only Macs or unix boxes proudly since the early Eighties. And using a three-button mouse on my Macs exclusively since the early Nineties. The old one-button mouse argument has always had about as much credibility as the "you can sharpen a razor blade under a model of the Pyramid of Cheops" story. DogP 13:32, 22 February 2009 (EST)
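The two CSS changes discussed in this thread amount to roughly the following sketch (the selectors are assumptions; the actual rules live in MediaWiki:Common.css and may differ):

```css
/* Hide the redundant "Main Page" title, as Wikipedia does.
   The body class MediaWiki emits depends on the page name;
   "page-Main_Page" is an assumption here. */
body.page-Main_Page h1.firstHeading {
  display: none;
}

/* Rounded corners: the Gecko-prefixed property for Firefox, plus the
   standard border-radius for more CSS3-compliant browsers (which is why
   Safari now shows them too). IE and older Opera ignore both and fall
   back to square corners. ".mainpage-box" is a hypothetical class. */
.mainpage-box {
  -moz-border-radius: 10px;
  border-radius: 10px;
}
```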
[edit] Maybe I missed it
Maybe I missed it or only mods could do it. But I would love to redirect Macbeth and Iago to the page of TK.--Tripcode 03:34, 25 February 2009 (EST)
- Anyone can redirect, it's just "#REDIRECT [[page_name]]", but I think we're trying to cut down on the amount of snarky and irrelevant redirects. ArmondikoVnarchist 08:33, 25 February 2009 (EST)
[edit] Entire site design beginning to look like we're run by nutjobs
What's that new box at the top of this page, the beige thing with all the shite in it about use four tildes, etc.? Between that, the fucking stupid chalkboard, the two other boxes that tell you something or other but I can't be arsed to read them, and the vast amount of legalese and mind-numbing copy at the top of the CP:WIGO page, our design is starting to look like that nutjob website we were checking out yesterday. I'm talking about
Can we PLEASE not start adding all these stupid boxes and panels, etc? Keep it simple and clean. Now, what are we going to do about it? I want to yank all those boxes, for a start. DogP 17:31, 25 February 2009 (EST)
- You're right, we're starting to look like a badly maintained wikiproject. We don't need all that wikipedia-style formalism, people know about wikis by now, and I'll bet that RW is nobody's first ever wiki experience. Clean away! Totnesmartin 17:38, 25 February 2009 (EST)
- True, very true. The talkpage box is def superfluous & the chalkboard's overkill here.
and marmite 17:41, 25 February 2009 (EST)
[edit] Example: All the boilerplate on top of WIGO@CP
This has been bugging me for ages. There is like a half screen wall of boring, in undifferentiated dull new roman font, before you get to the WIGOs. It is clear as the nose on my face that 99.7685476874687465243% of the people who visit WIGO@CP are there to lazily read them (and probably just the latest few) rather than acquaint themselves with the coma inducing minutiae of wikiality we all apparently find so fascinating. Open your mind to my superior aesthetic and you will see the truth.
- put all the how-to crapola at the bottom of the page, maybe include a targeted link to it at the top to keep my right-makes-might ADD from twitching.
- Move the way cool CP info box so that it is right justified and in the same block as the latest Wigos
- Replace said boilerplate with some terse, pithy and amusing expository verbiage.
Do it for the goat recipesMe!Sheesh!Mine! 18:22, 4 March 2009 (EST)
- Why don't we discuss this at talk:WIGO CP? ħuman
19:22, 4 March 2009 (EST)
- We could probably do it for all 4 WIGOs actually. World and Blog definitely have some stuff there and Clog is minimal. Though I'm not sure about standardising them all because they have widely different rules and types of people who view/contribute. ArmondikoVnarchist 09:13, 5 March 2009 (EST)
[edit] Featured Picture
One thing that always gets me is their "Featured Masterpiece" on CP. I thought it would be cool if we could put "Le Pape Formose et Etienne VII", 1870, by Jean Paul Laurens as ours on the front page. It features Pope Stephen the VII putting his dead predecessor, Formose, on trial. I thought that it would be slightly on-mission because of the stupidity of the case.--Nate River 20:25, 7 March 2009 (EST)
- Yeah, maybe, but then what? And why do we have to imitate/try to outdo CP on our main page? Perhaps there is an article it could grace? ħuman
22:56, 7 March 2009 (EST)
- The featured artwork is one of the few things I like at CP. If we had something like that, we could also include stuff like political cartoons and keep things somewhat relevant to current events. As for the article, I have been planning on writing about the trial, but am doing serious, non-wikipedia research on the trial--Nate River 23:12, 7 March 2009 (EST)
- Yes, it is actually a genuine "content" - JM takes his art seriously. Sadly, he fails at page layout. Anyway, the only trouble with adding moar "featured" stuff to the main page is we don't have the peoplepower to keep it up to date - ie, refresh it once in a while. However, some of our "best of amusements" (as featured randomly on the main page) are images - any can be. How much work are you willing to endure to make this a worthwhile feature? ħuman
00:20, 8 March 2009 (EST)
- We should actually put more stuff we've made on the main page - randomising featured pictures a la cover story would be good. I sometimes wonder if we should have a "new content" slot too, which might bring people back regularly. Totnesmartin 15:56, 8 March 2009 (EDT)
You mean something like this? -- Nx/talk 16:23, 8 March 2009 (EDT)
- it'd be better to pick and choose new articles manually rather than just transclude "special:newpages" - My last effort was crap, and deffo not main space material of any sort. Perhaps something like Wikipedia's Did You Know section, which I'm completely biased about because I've got one coming up there... Totnesmartin 17:09, 8 March 2009 (EDT)
- I was just showing CUR that "ZOMG This is so complicated only Nx can do it" is not true. -- Nx/talk 17:14, 8 March 2009 (EDT)
- The random featured pictures idea sounds great. I can look after it if someone sends me a link to a help file.--Nate River 19:16, 8 March 2009 (EDT)
I think it would be good to have something visual on the Main Page. It looks quite dry & texty, which may be a turn off for visitors to the site. Featured images might be problematic, as we're not really an image-driven site. But it would be good to have some sort of picture, or at least a bit more colour on the Main Page. Wèàšèìòìď
Methinks it is a Weasel 20:11, 9 March 2009 (EDT)
[edit] Random featured article redux
How do you like this? (ctrl-r required because I had to add something to Common.css) -- Nx/talk 18:50, 8 March 2009 (EDT)
- I love it. Btw, ctrl-r was not required. --CoyoteOver 450 pages watched NOT including talk pages 18:52, 8 March 2009 (EDT)
- That's great. Totnesmartin 10:19, 9 March 2009 (EDT)
- That is pretty cool, the fade doesn't cover the whole article on the version of Mozilla I'm stuck using at the moment, I'll check it again on another machine later. ArmondikoVnarchist 12:55, 9 March 2009 (EDT)
[edit] Straw poll
Here's a lickle ickle straw poll on the new main page proposals above. Stick yer name where yer opinion lies, ladies! Totnesmartin 14:37, 9 March 2009 (EDT)
[edit] Featured picture
Should we have a Featured picture on the main page?
Yes please
and marmite (hoo's pickin' 'em?)
- Nate River says he'll do it, but there should be a debate process a la cover story nominations. Totnesmartin
--CoyoteOver 450 pages watched NOT including talk pages 14:49, 9 March 2009 (EDT) (Go business cat!)
No thanks
- I'm not too sure about this, it would break the page layout, as one half would become significantly longer, and the entire page would also become much longer. Then there's the question of which pictures we want to show and why. -- Nx/talk 14:58, 9 March 2009 (EDT)
- LOLcats, CP screenshots, and fossils perhaps? --CoyoteOver 450 pages watched NOT including talk pages 14:59, 9 March 2009 (EDT)
- LOLcats: we are trying to be half-serious, no? CP screenshots: pointless mostly, since you wouldn't be able to read the text and RW is trying to move away from CP; fossils: ?? -- Nx/talk 15:06, 9 March 2009 (EDT)
- Evolution. Maybe good wildlife pics (maybe)? Like the anole? Or the crab? Or that other anole pic on my user page? --CoyoteOver 450 pages watched NOT including talk pages 15:09, 9 March 2009 (EDT)
- Against — "Featured picture" sections only work on websites with substantial collections of pictures to work with, which hardly describes us. Conservapedia's featured images are all old paintings, and therefore in the public domain; Uncyclopedia's featured images depend on a small but dedicated community of photoshoppers (meaning all of their images are original, or at least altered enough to be fair use); and Wikipedia uses images taken, released into the public domain, and then uploaded by their massive editor base. Please note that any featured image system we could come up with would be nothing like these, thus raising serious fair use questions. (That is to say, while most of our images fall under fair use, within the context of the articles they are shown in, stripped of that context and flaunted on the front page just because we want to put an image up would make us little better than Conservapedia when it comes to copyright law.) Furthermore, is there an actual reason to have a featured image? I can maybe see such a feature added, if (1) the pictures were all immediately relevant to our mission goals, and (2) we had enough of them. I don't see either coming about easily for the time being, so I remain opposed to implementing such a feature.
Radioactive Misanthrope 17:21, 9 March 2009 (EDT)
- Against - RA pretty much covered my reasons and then some. ħuman
18:59, 9 March 2009 (EDT)
- Against. Unless we make an effort to put together pictures that make sense, no... Sterilewalkie-talkie 20:01, 9 March 2009 (EDT)
- Against. Despite all the pictures we have, none would be mainpage material. Also the mainpage nearly fits on one screen, which is nice, unlike WP and CP where you have to scroll down to see 60%+ of it. - User
20:37, 9 March 2009 (EDT)
- Yes, I always did like that about our main page. The Main Page is what we present to outsiders—no need to load it down with unnecessary material; keep it short and sweet.
Radioactive Misanthrope 20:48, 9 March 2009 (EDT)
- Against. DogP 17:36, 28 March 2009 (EDT)
[edit] New articles
Should we trumpet selected new articles on the main page?
Yes
No
and marmite (Cover story covers it)
- ħuman
(as at Toast, also, "new" articles, no matter how good they are, usually need work to be cover stories)
- not neededMei 19:18, 9 March 2009 (EDT)
- Against —per Huw above.
Radioactive Misanthrope 20:22, 9 March 2009 (EDT)
- Against DogP 17:38, 28 March 2009 (EDT)
Indifferent
- How long do you keep a new article thingy up there? Nah... Sterilewalkie-talkie 20:01, 9 March 2009 (EDT)
[edit] Nx's fade-effect on cover stories
I Like it
- --CoyoteOver 450 pages watched NOT including talk pages 14:49, 9 March 2009 (EDT)
- Totnesmartin 15:00, 9 March 2009 (EDT) iz teh <s>sufficiently advanced technology</s> magic.
- Mei 19:19, 9 March 2009 (EDT)
I Don't like it
and marmite (Sorry, Nx)
- Against - it really "adds" nothing. Let's try getting that "onlyinclude" thing working that I mentioned way up above first, to actively select exactly what gets quoted from the coverstories. ħuman
19:00, 9 March 2009 (EDT)
- How about my suggestion up there, with the lead section or if there's no lead section, the section named "cover" being included? -- Nx/talk 19:08, 9 March 2009 (EDT)
- I see no reason why we can't just use onlyinclude and pick exactly how much text we want from each article... ħuman
19:53, 9 March 2009 (EDT)
It doesn't work on my computer
- not installed yet — Unsigned, by: ConservapediaUndergroundResistor / talk / contribs
- This point is kinda moot, since it's done in a stupid way which won't work with all browsers. I'll try to do it in a more elegant way, however dpl is borking for some of the articles and leaving off closing tags, messing up the whole page. If you have Firefox 3 or Opera 9.5, it should work. If it doesn't, try ctrl-r. -- Nx/talk 15:02, 9 March 2009 (EDT)
- Works for me. --CoyoteOver 450 pages watched NOT including talk pages 15:03, 9 March 2009 (EDT)
Hey! I ordered a cheeseburger!
- Kan haz? ħuman
- Right on. As long as they are not cheese goatburgers. Sterilewalkie-talkie 20:05, 9 March 2009 (EDT)
- Goat-cheeseburgers would be nice, so long as it wasn't feta. - User
20:34, 9 March 2009 (EDT)
- wantz--
En attendant Godot"«I think like a genius, I write like a distinguished author, and I speak like a child. --V.Nabokov» 17:38, 25 March 2009 (EDT)
[edit] WP Plagiarism!
I've just found a half-assed website that calls itself an encyclopedia - all of its articles are plagiarised![1] -- 忍者 N I N J A A A H ! ! ! ! ! 17:35, 26 March 2009 (EDT) [{{subst:Unsigned|Ninja}} (this makes no sense - ħuman
)]
- It links to and acknowledges Wikipedia and the GFDL at the bottom of each article. AFAIK that is compliant. Taytopacket 14:24, 26 March 2009 (EDT)
- meh. There's a ton of WP mirror sites on the innerwebzez. Totnesmartin 15:47, 26 March 2009 (EDT)
- There's a lot of "fan-wikis" that basically say that the easiest way to expand is to copy and paste from WP because they're under the same licence. It makes sense, but only if they get rid of all the "in universe" style stuff from WP in the process. ArmondikoVnarchist 14:34, 28 March 2009 (EDT)
[edit] WIGO WND?
Should we make a special WIGO category for WND since we made one for aSK? Also, should we add the 4th Reich WIGO to the template, or are we not even bothering with neo-nazis anymore? ENorman 17:33, 28 March 2009 (EDT)
- RE: 4r, I think we decided to nuke it for two reasons - one, nothing is GO there, and two, having anything high profile linking to anything like MP was turning a lot of stomachs. As far as WIGO WND, is it turning up a lot on WIGO clogs or world? It would be easy to build, all we need to do is edit DeanS' CP borken news down to just WND items... they'll do all the work for us! I guess really we could have as many WIGOs as we want, but let's be careful of dilution (I could see ASK ending up in clogs if it gets boring), and not do high profile links if they aren't very active. ħuman
18:50, 28 March 2009 (EDT)
[edit] A little help.....
I was wanting to add WIGO aSK here but its a little outta my league. Someone wanna help? Something like "catch up with all the scientific ineptitude at_____". Ace McWickedRevolt 02:18, 31 March 2009 (EDT)
- Okay, I clearly dropped off the radar for a week too long, WTF is aSK? ...okay nevermind. I did the obvious thing and put it into search. ArmondikoVnarchist 14:31, 31 March 2009 (EDT)
- We're waiting a little bit (a couple more weeks?) to see if it has "legs". It will eventually go in the AotW main template, that's where that section of the mainpage is stored. ħuman
19:01, 31 March 2009 (EDT)
- Looking at something other than CP could be interesting. ArmondikoVnarchist 08:17, 1 April 2009 (EDT)
[edit] red floaty text thing
Do not want. Totnesmartin 06:12, 1 April 2009 (EDT)
[edit] Ed Rogers Neighborhood
[edit] Awful Popes?
A friend sent me this article by Zack Parsons, detailing the six worst popes to ever ascend to lead the Church.
Now that I think about it, the condom flap isn't that bad. [/sarcasm] -- CodyH 18:54, 23 April 2009 (EDT)
[edit] Fundies say the darndest things
Could someone add this to rationalwiki, the top 100 fundamentalist quotes: one denying the sun's existence and so on... great stuff
"If you mean that men have ever been animals you are 100 percent wrong. No evidence under the sun can prove that I was ever my pet cat. ET can happen within a species but not between species."
- I'm reasonably sure a lot of those are parody. Just something about the way they're said looks fake. Totnesmartin 17:28, 28 April 2009 (UTC)
That's the saddest thing, they really said that; I've seen those on a few pages and there are citations. RIP rationality
[edit] Disclosure of Personal Information
Moved to RationalWiki talk:Community Standards#Disclosure_of_Personal_Information
[edit] Buddhism comment moved to Talk:Buddhism#Buddhism
[edit] It's no more than they deserve - Caution! Contains goats!
It's one of those rare moments when I think that there just might be a god![2] — Unsigned, by: Mick McT / talk / contribs
[edit] Two years on...
This amazing discussion has been moved to the bar down the road
[edit] Fun and Recipes
Should the link "Fun" and the link "Recipes" on the main page goes to namespaces instead of "Page starting with"? (it was fixed in the Category page) Thieh 14:08, 4 May 2009 (UTC)
[edit] Quote generator as one of the cover templates?
Should we have a place on the front page to put perhaps a combined quote generator, a grand combination of Kenquote, Assquote and Swabquote? Thieh 00:01, 5 May 2009 (UTC)
- I don't think so. What purpose would it serve other than to make a mess? ħuman
01:36, 5 May 2009 (UTC)
- Way too in joke for the main page. - π 01:39, 5 May 2009 (UTC)
- I agree with the irrational number. Sterilewalkie-talkie 01:49, 5 May 2009 (UTC)
[edit]
Hey, paste these into your this and you'll get more buttons. I just had to write in a ref by hand )= Tarantallegra 05:24, 18 June 2009 (UTC)
- 1) If you type fast enough, it is quicker to type it by hand than stopping to click a button. 2) Look below the edit screen, it is down in the markup box. - π 05:34, 18 June 2009 (UTC)
- Yeah... I'll do that. But it would be nicer just to have it. When are you going to get a WYSIWYG editor? Tarantallegra 05:43, 18 June 2009 (UTC)
- WYSIWYG is for wimps. Now excuse me, I have some TeXing to do. - π 05:45, 18 June 2009 (UTC)
- We have WikEd which is almost WYSIWYG. Look in the gadgets tab of preferences -- Nx / talk 05:47, 18 June 2009 (UTC)
- Okay, well you're going to get that a lot since wikia went to it (; I help out on several wikis, and I happen to know it would take you about 10 min to annoy the hell out of all the non-wimps. But I understand if you don't want to do it, 'cause it's sooo horrid to see that little editor link looking at you round that bathroom door when you're just trying to edit some source in peace. Nice template sig.... cool. Tarantallegra 05:55, 18 June 2009 (UTC)
- WikEd is nice and I keep it on (albeit disabled by default) for search and replace mainly, but the loading time can be horrible, and it can screw up the code, especially when you copypaste. All the other WYSIWYG editors I know of are mediawiki extensions and experimental/beta, so I don't want to mess with that. -- Nx / talk 06:14, 18 June 2009 (UTC)
- Yeah, FCK is not really completely worked out, but I guess if wikia is using it then it can't be too terrible. It's getting pretty good except it ruins complex code and you can't easily edit inside of templates on the page. I can see where you'd not want to bother. Tarantallegra 08:01, 18 June 2009 (UTC)
[edit] What is going on?
Right now all I am seeing is "there is no text on this page." MIP has actually signed in - 00:57, 23 June 2009 (UTC)
- Restoring lost edit. All better now. ħuman
04:15, 23 June 2009 (UTC)
[edit] GFDL to CC-BY-SA
GFDL and CC-by-sa 3.0
Hi! The community here should be aware that Wikipedia and other Wikimedia projects have voted to switch licenses to Creative Commons-Attribution-Sharealike 3.0 from the GFDL. This is possible because contributions are licensed under "GFDL 1.2 or later versions", and the GFDL 1.3 has a provision to allow wiki projects to switch to Creative Commons. However, according to the terms of the GFDL 1.3, switching licenses is only possible if done before August 1, 2009.
Because Wikipedia is changing licenses, it will no longer be possible to move content to and from Wikipedia (without violating the licenses) unless this project also changes over to CC-by-sa. I don't know whether that is a possibility, since it appears that your content is GFDL 1.2 only, not "GFDL 1.2 or later versions" like on Wikipedia. However, you might have some wiggle room since all your links to the license just point to the current GFDL at gnu.org, which is now version 1.3. In any case, your community should be aware that if it doesn't switch to CC-by-sa, importing Wikipedia content will not be legal.
See for more information.
- Thank you for the information, we are currently discussing the switch on site at Debate:Should RationalWiki switch to CC-BY-SA 3.0?. I think the language that it would be illegal to use Wikipedia content is a bit strong. CC-BY-SA provides for licensing under "similar licenses." tmtoulouse 23:25, 29 June 2009 (UTC)
- True. I guess what I meant was that Wikipedia would not be able to use your content if it was only GFDL — Unsigned, by: 161.184.204.26 / talk / contribs
[edit] Jeeves Mark 2 AKA coward
Jeeves has basically been waiting on the sideline for months to have a legitimate reason to throw me in the vandal bin. Jeeves - Go fuck yourself you PRICK. MarcusCicero (talk) 20:00, 11 July 2009 (UTC)
- I think even you would agree you're usually just an asshole, not a vandal, so I'm not sure what you think he was waiting for because your recent behavior seems new-ish to me. So did Jeeves get a legit collar or not? You know you're likely to get out of the vandal bin as soon as whatever has you acting out the last few days passes.
User:Nutty Roux/sigtalk 20:05, 11 July 2009 (UTC)
- If he has a "legitimate reason" to throw you in the bin, then what's the problem? --Kels (talk) 20:06, 11 July 2009 (UTC)
- If you don't want to be in the vandal bin, then quit trolling and do something useful. Do this at least once in a while, and then maybe you'll have a reason to complain. Otherwise, shut up. The Goonie 1 (talk) 20:10, 11 July 2009 (UTC)
- Yes, well. I'm sure your buddies at your secret laughing at RW forum, and your girlfriend who goes to another school will comfort you in your hour of need. --JeevesMkII The gentleman's gentleman at the other site 20:30, 11 July 2009 (UTC)
- Don't rise to it dude, this is what they all want. The bad thing with the vandal bin reducing the rate that people can edit at is that it means they have to think very carefully about how much annoying, offensive and provoking shit they can cram into just one edit.
narchist 20:36, 11 July 2009 (UTC)
(new section left by MC's brother (?) moved to user talk:MarcusCicero)
[edit] Grow up and get jobs
thanks--yes they do get under my skin :) RJJensen 21:30, 19 April 2009 (EDT)
Just an excerpt from Richard Jensen's talk page. You people are fools and little kiddies with nothing better to do with your time and your lives. Grow the fuck up and stop harassing another website for no fucking reason (other than your perceived intellectual superiority). Or else spend your time on 4Chan. Because you people are completely pathetic. You are vandals and fucking trolls who need to get girlfriends and who need to get a fucking life. MarcusCicero (talk) 15:17, 11 July 2009 (UTC)
- Normally I wouldn't feed the troll, but what the hell? It's the weekend and I'm up early... So MC, tell us, do you take an aggressive posture online to cover your own real life insecurities? OR are you just a prick? SirChuckBOne of those deceitful Liberals Schlafly warned you about 15:25, 11 July 2009 (UTC)
- No, I have a job and a life. I'm a prick when it comes to dealing with people who in my opinion barely deserve to be called human beings MarcusCicero (talk) 15:28, 11 July 2009 (UTC)
- Beat me to it, SirChuck. I have to laugh at how he chugs down TK's Kool-Aid and assumes that everything bad that happens on CP (that isn't caused by Andy or TK, of course) comes from us. Hopefully, MC also vents his spleen on EBaumsworld, 4chan (although I doubt he gets past the /y/ channel), ED, UC, etc., etc. and every other site out there that points at his beloved "encyclopaedia" and laughs. --PsyGremlinWhut? 15:34, 11 July 2009 (UTC)
- I dunno, even though it's the weekend, I still wouldn't feed the troll. Trolls are obnoxious.The Goonie 1 (talk) 15:37, 11 July 2009 (UTC)
Just to sum up, MC feels that we complain too much on the internets, and his solution is to complain on the internets. Z3rotalk 15:39, 11 July 2009 (UTC)
- Go figure. Gotta love hypocrisyThe Goonie 1 (talk) 15:42, 11 July 2009 (UTC)
- So MC, you have a job, and a life. Yet every three weeks or so you surface around here to bitch and moan..... Maybe your boss should give you more to do..... By the way, you're so brave that when someone responded to your posting, you tried to burn it... Way to take a stand on your convictions... You really are a sad little man MC, a sad little man. SirChuckBOne of those deceitful Liberals Schlafly warned you about 15:51, 11 July 2009 (UTC)
I'd love to have a job right now, but the way the economy here is right at the moment, hardly anyone's hiring temps over the summer so I've had no luck. Seriously, the market for temporary clerical work is total crap right now. As far as growing up, I'm going to school to learn how to make cartoons and draw comic books for a living. Growing up is not really a required job skill. --Kels (talk) 16:01, 11 July 2009 (UTC)
- As far as getting jobs goes (for me), I probably have a cooler job than Marcus Cicero does. In fact, I guarantee that he wishes he got to do/see some of the cool things I get to with my (low level) security clearance. As far as growing up, I don't wanna grow up, I'm a Toys-R-Us kid!!!!The Goonie 1 (talk) 16:06, 11 July 2009 (UTC)
- Hmmm, interestingly, I have a job (a pretty damn good one) and a girlfriend/fiancee. I've never vandalised Conservapedia (I will admit to one or two accounts that added some weird stuff, but it was stuff that they by and large agreed with). I assume it wasn't addressed to me, then. Never mind. But still, seriously, why do these people blame Rational Wiki when the actual majority of real "trolls and vandals" (as opposed to those who were just out to challenge Andy and Co.'s more bizarre beliefs and therefore labelled as such) operate out of 4Chan, Encyclopaedia Dramatica and Uncyclopedia and more, where they genuinely are little kids with nothing better to do.
narchist 16:39, 11 July 2009 (UTC)
...
FlareTalk 16:47, 11 July 2009 (UTC)
- Flare, would you piss off with posting these everywhere? It's getting annoying. --Kels (talk) 17:18, 11 July 2009 (UTC)
- Yup, knock it off NF, please. This message brought to you by:
respondand honey 17:27, 11 July 2009 (UTC)
- Especially since this argument was over and done with and MC was gone when that was put up.The Goonie 1 (talk) 17:29, 11 July 2009 (UTC)
HAHAHA!!! That was pretty funny Z3ro!The Goonie 1 (talk) 17:32, 11 July 2009 (UTC)
Well, alright, no more template, but stop giving me reasons to use it. FlareTalk 17:36, 11 July 2009 (UTC)
- Who needs stopsigns when humor will suffice?!The Goonie 1 (talk) 17:38, 11 July 2009 (UTC) — Unsigned, by: MarcusCicero / talk / contribs
- Can we please end this whole stopsign argument? Otherwise, it might create the need for one, and that would be self-defeating, wouldn't it?The Goonie 1 (talk) 17:46, 11 July 2009 (UTC)
- This really doesn't belong on the main page... Perhaps we should move this to the bar where we can continue over drinks. SirChuckBOne of those deceitful Liberals Schlafly warned you about 20:56, 11 July 2009 (UTC)
- You know, Hitler thought the same exact way. Web (talk) 21:12, 11 July 2009 (UTC)
[edit] Convienient edit for the nonsense section
- OH OH OH, Now that we've made MC look stupid are we just dropping random out of context quotations? I'll go next, I'll go next:
My turn! Here's a favourite:--Kels (talk) 20:27, 11 July 2009 (UTC)
I think this is about as irrelevant as you can get.... perhaps. Penguin.
narchist 20:30, 11 July 2009 (UTC)
I think I can go one better.The electrocutioner (talk) 22:17, 21 July 2009 (UTC)
[edit] Protecting my brother from ridicule
I just want to protect my brother from ridicule. Please help me remove these troublesome edits. After I remove these main page talk edits I will disappear. MarcusCicero (talk) 21:52, 12 July 2009 (UTC)
- You're free to leave, but you won't be removing any talk page edits. Just wait a few days and they'll be automagically archived anyway... ħuman
23:29, 12 July 2009 (UTC)
- Another last post?
ГенгисRationalWiki GOLD member 23:37, 12 July 2009 (UTC)
[edit] Sorry :(
The events of the last couple of days were bizarre, even by my standards. What do I have to do to lose the troublesome classification of 'troll'? — Unsigned, by: MarcusCicero / talk / contribs
- I'm not sure that's possible--I'm tempted to do it myself (as you haven't threatened to ass-rape anyone like Fall Down nor are you CUR-like in your behaviour), but it would seem as though consensus is against that course of action. Most of what you did/said was on talk pages, and was obnoxious, sure, but so what? The mob seems to have decided to keep you binned for now, and while there may be arguments that that is unfair, it seems to be the way things will stay for the next little while--even though it means we're treating you worse than we treat TK. TheoryOfPractice (talk) 14:14, 13 July 2009 (UTC)
- Except we are not, TK isn't binned because he is not actively editing the site. TK was blocked permanently for the kind of trolling MC did. Hell, I 404 blocked TK from even accessing the site. Take a break from the wiki MC, that is your best bet, at least for a few days. tmtoulouse 14:17, 13 July 2009 (UTC)
- And, when you return, try not to add posts which say that you're masturbating over other editors - really. Silver Sloth (talk) 14:22, 13 July 2009 (UTC)
I have no problem in taking a long break - if you look at my editing pattern I'm only ever active for a few days at a time before disappearing - but I don't want the vandal bin hanging over me whatever time I decide to wander back. And I also dislike being labelled a troll. I'm somewhat troublesome, but surely you need someone to point out the holes in your ego?
And you have got to admit - you people calling anyone a troll considering your WIGO entries is priceless irony and hypocrisy. MarcusCicero (talk) 15:15, 13 July 2009 (UTC)
- Has it occurred to you that we are actually not all the same person? Wėąṣėḷőįď
Methinks it is a Weasel 18:19, 13 July 2009 (UTC)
- Totally not a troll. For real. Not a single post in the past few days one could consider trolling in any way. --Kels (talk) 19:06, 13 July 2009 (UTC)
- totally troll. How do you expect to lose the troll label if you sign on, get abusive, then pretend to be someone else who also gets abusive while making up some story about MC having a mental illness? Definite trolling but also totally childish, stupid, boring and idiotic. Ace McWickedi9 20:26, 13 July 2009 (UTC)
- I've really no idea what you all are talking about and have only decided to post because I want to call someone a "wanker" and this seems like as good a place as any to find that person. So which one of you is it? Me!Sheesh!Mine! 18:23, 16 July 2009 (UTC)
- Me! I have never had the pleasure of having that expletive tossed at me. tmtoulouse 18:24, 16 July 2009 (UTC)
New to Wiki: Like the title says I am new to wiki, well writing anyways... Could somebody help. I sound like idiot I know, but the only way to learn... — Unsigned, by: Hanson135 / talk / contribs
[edit] 10 top pages of wikipedia and conservapedia
Watch it, you won't be surprised: homosexuality is first, homosexuality and parasites is second, homosexuality and violence third, and so on and so on. What's wrong with fundies? They must be quite kinky ppl 91.153.59.106 07:41, 8 August 2009 (UTC)
- This is for discussing the mainpage; Conservapedia talk belongs over at Conservapedia Talk:What is going on at CP?. If you posted it there you would have been told two things: that list is more than 18 months old, and someone may have pointed out it was our page bump operation that created it. Please don't be put off by our apathy though. Why not get an account after you have lurked for a bit? - π 01:48, 9 August 2009 (UTC)
[edit] Protection???
I notice Emperor protected Template:RationalWiki MainPage/About and Template:AOTW Main and I don't see why. Checking their histories shows roughly one act of heinous wanadlism per month or so (if even that). ħuman
01:01, 12 August 2009 (UTC)
- I just released them. Not really necessary. It is not like the mainpage which was getting blanked every day. Do we want to try unprotecting it again? - π 01:43, 12 August 2009 (UTC)
- I just figured that the templates posted on the main page ought to at least be uneditable by IP's. After all, the Main Page itself is protected, so why shouldn't its components be protected? It just seemed to make more sense that way. --The Emperor Kneel before Zod! 02:44, 12 August 2009 (UTC)
[edit] Pictures
Someone added a picture to the top of herbal supplement without noincluding it, so it appeared on the front page. It actually looked kind of nice, except that it was a generic field of flowers so it looked a little bit on the cheap side. If there is an appropriate picture at the top of a cover story article should we allow it to be included on the mainpage if it looks nice? - π 02:38, 12 August 2009 (UTC)
- I noticed the picture and I actually liked it enough to think that we should try and do it for most of our cover stories. tmtoulouse 02:44, 12 August 2009 (UTC)
- I'll put it back then. Do you really want a picture of Schlafly when you come to the main page? - π 02:46, 12 August 2009 (UTC)
- Better undo my edit, I noincluded it again, Ken. Oh, but see my edit comment - we should make a "coverstorytest" template or some such that can be forced to transclude whatever we are testing, then edit the main page, change cs to cstest and use preview to make sure things play nicely. And, yeah, it were sum purdy pikcher. It luked nais. ħuman
03:22, 12 August 2009 (UTC)
- Wait, you guys actually look at the main page? ħuman
03:23, 12 August 2009 (UTC)
- I used to have one of those in my sandbox until I deleted it. - π 03:25, 12 August 2009 (UTC)
- If we cut down on the "more featured content" box, we'd have more than enough room to expand the featured articles. WP's featured stuff often has pictures.
narchist 10:16, 12 August 2009 (UTC)
- Okay, I managed to refresh it enough times to get herbal supplement up. I like it. Although I reckon it would be best off left-aligned rather than right, I assume just sticking "|left" in the includeonly tags would do that fine. It's also a little large on 1024px resolution.
narchist 10:23, 12 August 2009 (UTC)
- See this is what I am talking about. How horrible is that? - π 11:28, 12 August 2009 (UTC)
- You do make a point, I like the illustrations there but that one would give kids the fright of their lives.
narchist 11:32, 12 August 2009 (UTC)
- Do you reckon we should drop the thumb when it is included? - π 11:33, 12 August 2009 (UTC)
- On the bright side it might piss TK off. - π 11:34, 12 August 2009 (UTC)
- The picture is too big, too Kennish. A small image more in line with the size WP uses would be better.
ГенгисRationalWiki GOLD member 11:41, 12 August 2009 (UTC)
- I tried dropping the thumb like with WP's featured article (that's the test 2 edit) and it didn't look anywhere near as good as I thought it would be. I didn't screengrab it though.
narchist 11:43, 12 August 2009 (UTC)
- Probably no bigger than 200px on the mainpage, I think. We need to test these somewhere. - π 11:45, 12 August 2009 (UTC)
- This looks better. - π 11:48, 12 August 2009 (UTC)
- Better to have straight images without the thumb box.
ГенгисRationalWiki GOLD member 12:21, 12 August 2009 (UTC)
[edit] WIGO aSK
I'm not sure this really should still be on the main page, there's really nothing going on there these days. ħuman
21:26, 2 August 2009 (UTC)
- Perhaps it could be rotated with something like WikiSynergy - do we need to start another WIGO for that? Lily Inspirate me. 21:28, 2 August 2009 (UTC)
- It has been pretty interesting lately and I am about to get blocked for not retracting my statement after calling Jonathan Sarfati a "moron". Ace McWickedModel 500 21:38, 2 August 2009 (UTC)
- It is hardly front page material though, a wiki-based blogopedia more obscure than CP. - π 22:00, 2 August 2009 (UTC)
- Yes at Pi, my thoughts exactly. Maybe it should even be folded into wigo clogs... And as far as WS (wikisynergy), um, right now "recent changes" is three editors - billy shears, me, and wikademia - discussing meta-meta-policy and stuff mostly. Although it could explode any day. ħuman
22:43, 2 August 2009 (UTC)
- Seriously Ace, you could at least have given Philip a choice. After all, Sarfati might not be a moron, he might be a liar! --Kels (talk) 23:40, 2 August 2009 (UTC)
- How about "uneducated twit"? Ace McWickedModel 500 23:53, 2 August 2009 (UTC)
- Well, either he totally misunderstands evolution even though the information out there is easily found (moron) or he deliberately misrepresents it even though he knows it's not true (liar). I think your suggestion comes under the former. --Kels (talk) 23:55, 2 August 2009 (UTC)
- I hate to point it out, but WIGO ask is more popular than WIGO blog. On the other hand, with Bradley not present, it does seem to be a WIGOing of people here, and, well, Philip, which does seem gratuitous. Sterile okra 00:11, 3 August 2009 (UTC)
I am enjoying it at the moment. The only problem is I am rubbish at online debating. In real life I am a good debater but I take my cues etc. from gestures and expressions, so it is difficult for me via the web. Ace McWickedModel 500 00:28, 3 August 2009 (UTC)
- Sticking your fingers up and waving your genitals around, that kind of thing? Jaxe (talk) 04:26, 7 September 2009 (UTC)
[edit] srry, wasn't able to access for a week or so, but...
About a week and a half ago, I lost access to this wiki, getting the "time out" error until today. I thought it was my computer's problem, so I used the other computers at home and got the same error. I tried the campus's computers and my college apartment's computers and got the same result. When I checked the WIGOs, all of them are messed up (with the exception of the Blogosphere area). What on earth happened?--Dark Paladin X (talk) 23:48, 6 September 2009 (UTC)
[edit] favicon
She is small, but she is borken, captain! ħuman
06:54, 8 September 2009 (UTC)
- ". . . originally wicked . . . They will not reflect that circumstances changed them . . . At home these men had no cause to show their natural savagery . . . they were suddenly transplanted to Africa and its miseries. They were deprived of butcher’s meat & bread & wine, books, newspapers, the society & influence of their friends. Fever seized them, wrecked minds and bodies . . . until they became but shadows, morally & physically of what they had been in English society . . . Home people if they desire to judge fairly must think of all this."
- --Henry Morgan Stanley
State of North Dakota v Mimsy Butterpie the Unicorn, December 30 1939, The Right Honorable Judge Daniel Mooncake Presiding. 07:30, 8 September 2009 (UTC)
[edit] It's back!
Yay Wehpudicabok 07:54, 8 September 2009 (UTC)
- Wrong, wrong, wrong. The reason that foreign aid isn’t working isn’t just because Third World leaders are often corrupt (though, obviously, that doesn’t help matters). The reason foreign aid isn’t working is because Western leaders use foreign aid as a foreign policy “carrot” to get what they want from other leaders instead of just sending it to where it’s needed most. So, even though America spends a lot of money on foreign aid, a good deal of that money is spent on countries like Egypt or Israel or Peru that don’t really need aid as much as some ultra-poor countries do. It’s not just that the money is being used inefficiently; it’s that its suppliers are distributing it inefficiently. Oh, and incidentally, they don’t make too much in the way of “sacrifices” to send that aid, either. Don’t believe me? Well, let’s take a look at the stats. The U.S. spends around $40 billion on foreign aid per year. Let’s see how much money the U.S. spends annually on defense—OH SNAP $515 BILLION!!! Still think that the first world couldn’t do more if it tried a bit harder? State of North Dakota v Mimsy Butterpie the Unicorn, December 30 1939, The Right Honorable Judge Daniel Mooncake Presiding. 08:07, 8 September 2009 (UTC)
[edit] I'm a creationist and what is this?
This discussion has been moved to Conservapedia Talk:What is going on at CP?#I'm a creationist and what is this? 12:00, 15 September 2009 (UTC)
[edit] Theemperor the Whinge bag
It seems Theemperor has taken my digs a little too much to heart. Why the hell is he even kept here? He reverts edits, protects pages, generally acts the right little internet fascist. He reminds me of RA, but much easier to wind up. Get rid of him if you know what's good for you. MarcusCicero 17:35, 18 September 2009 (UTC)
- Because he is a regular contributor and we are a broad church. - π 03:49, 19 September 2009 (UTC)
- But if we get rid of Theemperor, there won't be anyone here left for me to have a power struggle with.Lord of the Goons The official spikey-haired skeptical punk 04:28, 19 September 2009 (UTC)
- "Kept here"? You are "kept here", in spite of the fact that you never say anything useful, you just parade random accusations and when asked, have no "diffs" to provide. We are all "kept here". What procedure do you suggest? That we somehow kick the Emperor Penguin off our ice flow? And then what? ħuman
07:00, 19 September 2009 (UTC)
[edit] Samhain
Nice pumpkin pics every time I check recent changes, but... isn't Hallowe'en/Samhain tomorrow night? Fox 13:32, 30 October 2009 (UTC)
- Depends where you are. - π 13:35, 30 October 2009 (UTC)
- Plus, it already is tomorrow in Aus and NZ, see?
- I believe it switches date at GMT (UTC) + and - 12 hours to allow for the Antipodeans and Hawaiians among us? I am eating
& honeychat 13:43, 30 October 2009 (UTC)
- OK. (But Samhain is April 30 for Aussies and New Zealanders.) Fox 13:49, 30 October 2009 (UTC)
- Isn't that Walpurgisnacht? (memories from 50+ years ago - probably wrong
) I am eating
& honeychat 13:58, 30 October 2009 (UTC)
- That's in the spring. --Kels 14:00, 30 October 2009 (UTC)
- wp:Walpurgis Night April 30th. I am eating
& honeychat 14:00, 30 October 2009 (UTC)
- Is April considered spring in the southern hemisphere? Seems odd that they would have seasonal festivals exactly the same as those in the northern hemisphere, rather than reversing them. Fox 14:07, 30 October 2009 (UTC)
- It's (Samhain) Celtic so (presumably) relies more on its cultural heritage than fact. Really, I dunno - an Antipodean'll have to tell us. I am eating
& honeychat 14:13, 30 October 2009 (UTC)
- Wikipedia says that Samhain is 30 Oct-1 Nov in the Northern hemisphere and April 30-1 May in the Southern. Since it's a Celtic holiday, I'd be inclined to go with the former for a worldwide wiki. –SuspectedReplicantretire me 14:18, 30 October 2009 (UTC)
[edit] Cover Stories
I think we need to find more cover story candidates. I am sure I only ever see one of maybe 4 or 5 when ever I visit. Aceof Spades 20:48, 6 December 2009 (UTC)
- I only look at the main page when someone says something is borken on it. ħuman
01:57, 7 December 2009 (UTC)
[edit] New color scheme?
Not sure this was that great an idea. Did we discuss doing this somewhere that I missed? ħuman
22:09, 10 December 2009 (UTC)
- Weaseloid suggested it on the logo brainstorm page. I like the idea, but the colors are too ... colorful. Perhaps it should be a bit more subdued, and only vary slightly from the current fixed colors -- Nx / talk 22:11, 10 December 2009 (UTC)
- I thought we were talking about background colors there, maybe I missed that part. I don't think we want the main page color scheme slowly changing over time for no explicable reason. Also, if we do decide to do it, the supporting template should be in the RW space, not userspace, I think. ħuman
22:13, 10 December 2009 (UTC)
- I like it. It's nice to have these unexpected minor tweaks once in a while. Totnesmartin (talk) 22:15, 10 December 2009 (UTC)
I did discuss it first, including the suggestion that it be used on the main page. Since there was very little feedback & the discussion petered out, I went ahead with it, in the hope of getting some less apathetic response here. If it isn't popular on the main page, maybe it could be used on some of the box templates (welcome, delete, mission, etc.) instead (or as well?). Re the namespace, I'm happy to move it out of my userspace & will do so this weekend. Wėąṣėḷőįď
Methinks it is a Weasel 18:44, 11 December 2009 (UTC)
[edit] Protection
It has been protected again, can someone undo it? - π 22:10, 10 December 2009 (UTC)
[edit] Stupid use of site messages is stupid.
I get that the RationalWiki social scene has always been rough and tumble, but why are users now being abused over the intercom? Don't spread the bickering as far and as wide as possible. IMHO.
Anyway, why is it visible to unregistered users? I was not logged in and saw it. There are at least a few articles around that try to be serious, and such a message is shooting RW in the foot. Icewedge (talk) 07:01, 11 December 2009 (UTC)
- Yeah, one more place for this shit to take place! - π 07:08, 11 December 2009 (UTC)
- It's visible to anons because the site-wide urgent group is designed that way. A bureaucrat can hide it. -- Nx / talk 09:26, 11 December 2009 (UTC)
- I just did, incidentally.
narchist 09:30, 11 December 2009 (UTC)
- Wow, cool, how did you do that? ħuman
21:38, 11 December 2009 (UTC)
- It's all on the Intercom page.
narchist 14:20, 12 December 2009 (UTC)
- It is? All I see are the last 50 messages... maybe I'm not looking carefully enough. Or maybe I'm not a 'crat? S'possible, I haven't tried to do any 'cratting chores lately... ħuman
22:56, 12 December 2009 (UTC)
- There's a "mark as read for anonymous users" link on Special:Intercom for messages sent to the urgent group. I think the link disappears once you hide them. -- Nx / talk 07:56, 13 December 2009 (UTC)
[edit] Conservapedia has a page on Vicipaedia
[3] In case anyone here is interested. --70.29.37.90 (talk) 22:51, 11 January 2010 (UTC)
- First TVTropes, now this... Totnesmartin (talk) 22:56, 11 January 2010 (UTC)
[edit] Just noticed ...
...('cause I don't look @ the Main Page often) ... whoever did the layout did a good job - I love how the two columns are the same height at whatever width you move the page. CP could learn from that
. I have just eaten
& stiltontalk 16:15, 12 January 2010 (UTC)
[edit] Haiti Earthquake
Please, please feel free to improve the rather rushed addition. And feel free to remove it in due course. Bob Soles (talk) 17:06, 13 January 2010 (UTC)
[edit] my sysopship
where's my sysopship, dudes? YourEnemy? (talk) 02:51, 15 February 2010 (UTC)
- How about you do something and we might care enough to give you one. - π 02:54, 15 February 2010 (UTC)
[edit] Invisible wiki
Rational wiki pages #never# appear in the first batch of a webspider search. — Unsigned, by: 212.85.6.26 / talk / contribs
- And? Totnesmartin (talk) 18:03, 2 February 2010 (UTC)
- Oh, no, now no one will ever find this place! Thank you, unsigned numbers, we'll close RationalWiki immediately. (Thankfully, Ken's idea of legitimacy only works for nutters, not for rational people) --Irrational Atheist (talk) 18:06, 2 February 2010 (UTC)
- Spam. — Sincerely, Neveruse / Talk / Block 18:06, 2 February 2010 (UTC)
- Spam backwards is mapS. --Irrational Atheist (talk) 18:08, 2 February 2010 (UTC)
Rationally RW is perceived as irrelevant - however much you ham it up. And spam backwards is mɒqs. — Unsigned, by: 212.85.6.26 / talk / contribs
- Who cares? And please sign your posts, like this: ~~~~ Wėąṣėḷőįď
Methinks it is a Weasel 18:19, 2 February 2010 (UTC)
- "perceived as irrelevant" is an anagram of "real deviancies pervert" --Irrational Atheist (talk) 18:22, 2 February 2010 (UTC)
Where is the redirect to Anagramwiki? — Unsigned, by: 212.85.6.26 / talk / contribs
Beware the Earwigs that eat Irrelevant Articles. — Unsigned, by: 212.85.6.26 / talk / contribs
The point is not to describe the situation but to change it.
There is a spectre haunting Wikiland - and it is not Rationalwiki.
What is BON? — Unsigned, by: 82.198.250.7 / talk / contribs
- Bunchanumbers - somebody who hasn't signed in, like you. Or me for short. Bondurant (talk) 17:04, 10 February 2010 (UTC)
- Or this. Totnesmartin (talk) 18:48, 10 February 2010 (UTC)
Why not allow continued reworkings of Marx's aphorisms? — Unsigned, by: 212.85.6.26 / talk / contribs
This wiki should be made more useful than a sonic screwdriver in an anechoic chamber. — Unsigned, by: 82.198.250.3 / talk / contribs
Still almost invisible (one entry, a number of pages down): and many articles seem to have had the sentences linking them to RW's premises to debunk 'theories mad, bad, weird, incorrigibly stupid or just plain sky-clad quirky' removed/allowed to go down the plughole. — Unsigned, by: 82.198.250.70 / talk / contribs 17:26, 10 March 2010 (UTC)
- Sorry, but why do you think that this should be of interest to us? 17:45, 10 March 2010 (UTC)
ContribsTalk
The definition of what RW #should# be? Or that RW articles appear to have no connection to the mission statement?. Or is it the intent that nobody else look in the RW fishbowl? 212.85.6.26 (talk) 16:24, 11 March 2010 (UTC)
[edit] Article of the weak
The Earwigs of Indifference want a new one. — Unsigned, by: 82.44.143.26 / talk / contribs
They still want a new one.
As do the Ants of the Apocalypse.
[edit] Sophia
likes you, unless you have potato blight. — Unsigned, by: [[User:|User:]] / [[User talk:|talk]] / contribs
- Cool. Is she pretty? Wėąṣėḷőįď
Methinks it is a Weasel 19:22, 22 February 2010 (UTC)
Sophia is the subject of many worshipful haiku. Sophia is also the name of the tuber mascot of the Uncyclopedia.— Unsigned, by: 82.198.250.5 / talk / contribs
Rationalwiki:
Difficult haiku
Somewhat irrelevant— Unsigned, by: 82.44.143.26 / talk / contribs
What about Mary Sue? (look her up on the web). — Unsigned, by: 212.85.17.10 / talk / contribs 13:38, 17 March 2010 (UTC)
[edit] Funny interaction at Conservapedia
This discussion has been moved to Conservapedia_Talk:What_is_going_on_at_CP?#Some_everyday_shit_at_Conservapedia 23:17, 7 March 2010 (UTC)
[edit] What Rational Wiki needs
Is more surrealism and irony.
Beware brainwormholes. — Unsigned, by: 82.44.143.26 / talk / contribs 15:09, 22 March 2010 (UTC)
Unblock TehKlown , the ironic surrealist! Alain (talk) 20:49, 5 April 2010 (UTC)
[edit] Homeopathy and Conservapedia
How could the principles of homeopathy ('like cures like') be used on Conservapedia? (This is semi-humorous) 82.44.143.26 (talk) 14:19, 29 March 2010 (UTC)
[edit] What happened to Trent's ability to restore view counts?
He's done it once or twice before.
Radioactive Misanthrope 21:15, 7 March 2010 (UTC)
- Dunno. He's not been onsite for the last couple of weeks or so. Wėąṣėḷőįď
Methinks it is a Weasel 21:36, 7 March 2010 (UTC)
Source: http://rationalwiki.org/w/index.php?title=Talk:Main_Page/Archive23&oldid=579245
Using Promises In Javascript
A promise is a holder for a result (or an error) that will become available in the future (when the async call returns). Promises have been available in JavaScript through third-party libraries (for example, jQuery and q). ECMAScript 6 adds built-in support for promises to JavaScript.
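Before looking at fetch(), here is a minimal sketch of what consuming a promise looks like. The delay() helper is hypothetical and not part of the ratefinder code; it simply wraps setTimeout() in a promise:

```javascript
// Hypothetical helper: resolves with `value` after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

delay(10, "rates loaded")
  .then((msg) => console.log(msg))     // runs when the promise resolves
  .catch((err) => console.error(err)); // runs if the promise rejects
```

The then() and catch() calls register callbacks for the future result; this is the same pattern the fetch() code below uses.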
Today, we will create a simple application called ratefinder that returns a list of available mortgage rates.
Part 1: Use a Promise
To illustrate the use of promises in this example, you use the new
fetch() function. At the time of this writing,
fetch() is available in the latest version of Chrome, Firefox, and Opera, but not in IE and Safari. You can check the current availability of
fetch() here. You can read more about
fetch() here.
1. Create a file named ratefinder.html in the es6-tutorial directory. Implement the file as follows:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
</head>
<body>
<table id="rates"></table>
<script src="build/ratefinder.bundle.js"></script>
</body>
</html>
2. Create a file named ratefinder.js in the es6-tutorial/js directory. Implement the file as follows:
let url = "rates.json";
fetch(url)
.then(response => response.json())
.then(rates => {
let html = '';
rates.forEach(rate => html += `<tr><td>${rate.name}</td><td>${rate.years}</td><td>${rate.rate}%</td></tr>`);
document.getElementById("rates").innerHTML = html;
})
.catch(e => console.log(e));
3. Open
webpack.config.js in your code editor. In
module.exports, modify the entry and output items as follows:
entry: {
app: './js/main.js',
ratefinder: './js/ratefinder.js'
},
output: {
path: path.resolve(__dirname, 'build'),
filename: '[name].bundle.js'
},
4. On the command line, type the following command to rebuild the application:
npm run webpack
5. Open a browser and access ratefinder.html to see the list of rates.
Part 2: Create a Promise
Most of the time, all you’ll have to do is use promises returned by built-in or third-party APIs. Sometimes, you may have to create your own promises as well. In this section, you create a mock data service to familiarize yourself with the process of creating ECMAScript 6 promises. The mock data service uses an asynchronous API so that it can stand in for an actual asynchronous data service for testing or other purposes.
1. Create a new file named rate-service-mock.js in the js directory.
2. In rate-service-mock.js, define a rates variable with some sample data:
let rates = [
{
"name": "30 years fixed",
"rate": "13",
"years": "30"
},
{
"name": "20 years fixed",
"rate": "2.8",
"years": "20"
}
];
3. Define a
findAll() function implemented as follows:
export let findAll = () => new Promise((resolve, reject) => {
if (rates) {
resolve(rates);
} else {
reject("No rates");
}
});
4. Open
ratefinder.js. Change the implementation as follows:
import * as service from './rate-service-mock';
service.findAll()
.then(rates => {
let html = '';
rates.forEach(rate => html += `<tr><td>${rate.name}</td><td>${rate.years}</td><td>${rate.rate}%</td></tr>`);
document.getElementById("rates").innerHTML = html;
})
.catch(e => console.log(e));
5. On the command line, type the following command to rebuild the application:
npm run webpack
6. Open a browser and access ratefinder.html to see the list of rates.
Source: https://medium.com/@ganihujude_2662/using-promises-in-javascript-2f6c2b1e4e0e
Java this Keyword
Example
Using
this with a class attribute (x):
public class Main {
  int x;

  // Constructor with a parameter
  public Main(int x) {
    this.x = x;
  }

  // Call the constructor
  public static void main(String[] args) {
    Main myObj = new Main(5);
    System.out.println("Value of x = " + myObj.x);
  }
}
Definition and Usage
The
this keyword refers to the current object in a method or constructor.
The most common use of the
this keyword is to eliminate
the confusion between class attributes and parameters with the same name (because a class attribute is shadowed by a method or constructor parameter). If you omit the keyword in the example above, the output would be "0" instead of "5".
this can also be used to:
- Invoke current class constructor
- Invoke current class method
- Return the current class object
- Pass an argument in the method call
- Pass an argument in the constructor call
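The uses listed above can be sketched in one small class. Rectangle is a hypothetical example, not from this page:

```java
public class Rectangle {
    int width, height;

    // Invoke another constructor of the current class with this(...)
    Rectangle(int side) { this(side, side); }

    Rectangle(int width, int height) {
        this.width = width;   // disambiguate field from parameter
        this.height = height;
    }

    int area() { return width * height; }

    // Returning `this` (the current object) enables method chaining
    Rectangle widen(int extra) { this.width += extra; return this; }

    public static void main(String[] args) {
        Rectangle r = new Rectangle(3).widen(2);
        System.out.println(r.area()); // prints 15
    }
}
```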
Related Pages
Read more about objects in our Java Classes/Objects Tutorial.
Read more about constructors in our Java Constructors Tutorial.
Read more about methods in our Java Methods Tutorial.
Source: https://www.w3schools.com/java/ref_keyword_this.asp
Warning: this page refers to an old version of SFML. Click here to switch to the latest version.
Threads
What is a thread?
Most of you should already know what a thread is, however here is a little explanation for those who are really new to this concept.
A thread is basically a sequence of instructions that run in parallel to other threads. Every program is made of at least one thread: the main one, which
runs your
main() function. Programs that only use the main thread are single-threaded, if you add one or more threads they become
multi-threaded.
So, in short, threads are a way to do multiple things at the same time. This can be useful, for example, to display an animation and reacting to user input while loading images or sounds. Threads are also widely used in network programming, to wait for data to be received while continuing to update and draw the application.
SFML threads or std::thread?
In its newest version (2011), the C++ standard library provides a set of classes for threading. At the time SFML was written, the C++11 standard was not written and there was no standard way of creating threads. When SFML 2.0 was released, there were still a lot of compilers that didn't support this new standard.
If you work with compilers that support the new standard and its
<thread> header, forget about the SFML thread classes and use it instead -- it will
be much better. But if you work with a pre-2011 compiler, or plan to distribute your code and want it to be fully portable, the SFML threading
classes are a good solution.
Creating a thread with SFML
Enough talk, let's see some code. The class that makes it possible to create threads in SFML is
sf::Thread, and here is what it
looks like in action:
#include <SFML/System.hpp> #include <iostream> void func() { // this function is started when thread.launch() is called for (int i = 0; i < 10; ++i) std::cout << "I'm thread number one" << std::endl; } int main() { // create a thread with func() as entry point sf::Thread thread(&func); // run it thread.launch(); // the main thread continues to run... for (int i = 0; i < 10; ++i) std::cout << "I'm the main thread" << std::endl; return 0; }
In this code, both
main and
func run in parallel after
thread.launch() has been called. The result is that
text from both functions should be mixed in the console.
The entry point of the thread, ie. the function that will be run when the thread is started, must be passed to the constructor of
sf::Thread.
sf::Thread tries to be flexible and accept a wide variety of entry points: non-member or member functions, with or without
arguments, functors, etc. The example above shows how to use a non-member function, here are a few other examples.
- Non-member function with one argument:
void func(int x) { } sf::Thread thread(&func, 5);
- Member function:
class MyClass { public: void func() { } }; MyClass object; sf::Thread thread(&MyClass::func, &object);
- Functor (function-object):
struct MyFunctor { void operator()() { } }; sf::Thread thread(MyFunctor());
The last example, which uses functors, is the most powerful one since it can accept any type of functor and therefore makes
sf::Thread compatible with many types of functions that are not directly supported. This feature is especially interesting with C++11 lambdas or
std::bind.
// with lambdas sf::Thread thread([](){ std::cout << "I am in thread!" << std::endl; });
// with std::bind void func(std::string, int, double) { } sf::Thread thread(std::bind(&func, "hello", 24, 0.5));
If you want to use a
sf::Thread inside a class, don't forget that it doesn't have a default constructor. Therefore, you have to
initialize it directly in the constructor's initialization list:
class ClassWithThread { public: ClassWithThread() : m_thread(&ClassWithThread::f, this) { } private: void f() { ... } sf::Thread m_thread; };
If you really need to construct your
sf::Thread instance after the construction of the owner object, you can also
delay its construction by dynamically allocating it on the heap.
Starting threads
Once you've created a
sf::Thread instance, you must start it with the
launch function.
sf::Thread thread(&func); thread.launch();
launch calls the function that you passed to the constructor in a new thread, and returns immediately so that the calling thread can
continue to run.
Stopping threads
A thread automatically stops when its entry point function returns. If you want to wait for a thread to finish from another thread, you can call its
wait function.
sf::Thread thread(&func); // start the thread thread.launch(); ... // block execution until the thread is finished thread.wait();
The
wait function is also implicitly called by the destructor of
sf::Thread, so that a thread cannot remain alive
(and out of control) after its owner
sf::Thread instance is destroyed. Keep this in mind when you manage your threads (see the last
section of this tutorial).
Pausing threads
There's no function in
sf::Thread that allows another thread to pause it, the only way to pause a thread is to do it from the
code that it runs. In other words, you can only pause the current thread. To do so, you can call the
sf::sleep function:
void func() { ... sf::sleep(sf::milliseconds(10)); ... }
sf::sleep has one argument, which is the time to sleep. This duration can be given with any unit/precision, as seen in the
time tutorial.
Note that you can make any thread sleep with this function, even the main one.
sf::sleep is the most efficient way to pause a thread: as long as the thread sleeps, it requires zero CPU. Pauses based on active waiting,
like empty
while loops, would consume 100% CPU just to do... nothing. However, keep in mind that the sleep duration is just a hint,
depending on the OS it will be more or less accurate. So don't rely on it for very precise timing.
Protecting shared data
All the threads in a program share the same memory, they have access to all variables in the scope they are in. It is very convenient but also dangerous: since threads run in parallel, it means that a variable or function might be used concurrently from several threads at the same time. If the operation is not thread-safe, it can lead to undefined behavior (ie. it might crash or corrupt data).
Several programming tools exist to help you protect shared data and make your code thread-safe, these are called synchronization primitives. Common ones are mutexes, semaphores, condition variables and spin locks. They are all variants of the same concept: they protect a piece of code by allowing only certain threads to access it while blocking the others.
The most basic (and used) primitive is the mutex. Mutex stands for "MUTual EXclusion": it ensures that only a single thread is able to run the code that it guards. Let's see how they can bring some order to the example above:
#include <SFML/System.hpp> #include <iostream> sf::Mutex mutex; void func() { mutex.lock(); for (int i = 0; i < 10; ++i) std::cout << "I'm thread number one" << std::endl; mutex.unlock(); } int main() { sf::Thread thread(&func); thread.launch(); mutex.lock(); for (int i = 0; i < 10; ++i) std::cout << "I'm the main thread" << std::endl; mutex.unlock(); return 0; }
This code uses a shared resource (
std::cout), and as we've seen it produces unwanted results -- everything is mixed in the console.
To make sure that complete lines are properly printed instead of being randomly mixed, we protect the corresponding region of the code
with a mutex.
The first thread that reaches its
mutex.lock() line succeeds in locking the mutex, directly gains access to the code that follows and prints
its text. When the other thread reaches its
mutex.lock() line, the mutex is already locked and thus the thread is put to sleep
(like
sf::sleep, no CPU time is consumed by the sleeping thread). When the first thread finally unlocks the mutex, the second thread
is awoken and is allowed to lock the mutex and print its text block as well. This leads to the lines of text appearing sequentially in the console instead of being
mixed.
Mutexes are not the only primitive that you can use to protect your shared variables, but it should be enough for most cases. However, if your application does complicated things with threads, and you feel like it is not enough, don't hesitate to look for a true threading library, with more features.
Protecting mutexes
Don't worry: mutexes are already thread-safe, there's no need to protect them. But they are not exception-safe! What happens if an exception is thrown while a mutex is locked? It never gets a chance to be unlocked and remains locked forever. All threads that try to lock it in the future will block forever, and in some cases, your whole application could freeze. Pretty bad result.
To make sure that mutexes are always unlocked in an environment where exceptions can be thrown, SFML provides an RAII class to wrap
them:
sf::Lock. It locks a mutex in its constructor, and unlocks it in its destructor.
Simple and efficient.
sf::Mutex mutex; void func() { sf::Lock lock(mutex); // mutex.lock() functionThatMightThrowAnException(); // mutex.unlock() if this function throws } // mutex.unlock()
Note that
sf::Lock can also be useful in a function that has multiple
return statements.
sf::Mutex mutex; bool func() { sf::Lock lock(mutex); // mutex.lock() if (!image1.loadFromFile("...")) return false; // mutex.unlock() if (!image2.loadFromFile("...")) return false; // mutex.unlock() if (!image3.loadFromFile("...")) return false; // mutex.unlock() return true; } // mutex.unlock()
Common mistakes
One thing that is often overlooked by programmers is that a thread cannot live without its corresponding
sf::Thread instance.
The following code is often seen on the forums:
void startThread() { sf::Thread thread(&funcToRunInThread); thread.launch(); } int main() { startThread(); // ... return 0; }
Programmers who write this kind of code expect the
startThread() function to start a thread that will live on its own and be destroyed
when the threaded function ends. This is not what happens. The threaded function appears to block the main thread, as if the thread wasn't working.
What is the cause of this? The
sf::Thread instance is local to the
startThread() function and is therefore immediately destroyed
when the function returns. The destructor of
sf::Thread is invoked, which calls
wait() as we've learned above, and
the result is that the main thread blocks and waits for the threaded function to be finished instead of continuing to run in parallel.
So don't forget: You must manage your
sf::Thread instance so that it lives as long as the threaded function is supposed to run.
Source: https://en.sfml-dev.org/tutorials/2.0/system-thread.php
On Jul 7, 2012, at 5:46 PM, Nikolay Ivchenkov wrote:
> Are you suggesting to cancel existing guarantees?
I'm suggesting that the LWG had a direction on this and punted simply because it couldn't come up with the right words.
I've requested the LWG re-open LWG 760 and address it. I've also suggested that your question be added to the issue.
> BTW, creating an object at the point of insertion other than end would be tricky: we have to destroy an existing object at that point in the vector and ensure that either the hole will be filled somehow (note that an object construction may fail) or all subsequent items will be removed (I would find such behavior silly).
This is how libc++ implements it:
Anyone have the behavior of VC++ for Nikolay's example?
#include <iostream>
#include <vector>
int main()
{
std::vector<int> v;
v.reserve(4);
v = { 1, 2, 3 };
v.emplace(v.begin(), v.back());
for (int x : v)
std::cout << x << std::endl;
}
Howard
MS VC++ 2010 gives 3 1 2 3
Does there exist an implementation which does something different than libc++ or gcc?
If not, then what user code will be broken if we standardize existing behavior?
What does VC++XX do?
I've convinced myself that I was incorrect about the penalty and that Nikolay's implementation:
On Jul 8, 2012, at 5:23 AM, Nikolay Ivchenkov wrote:
> If the intermediate object would be created at the right moment
>
> _Tp __tmp(std::forward<_Args>(__args)...);
> _Alloc_traits::construct(this->_M_impl, this->_M_impl._M_finish,
> _GLIBCXX_MOVE(*(this->_M_impl._M_finish
> - 1)));
> ++this->_M_impl._M_finish;
> _GLIBCXX_MOVE_BACKWARD3(__position.base(),
> this->_M_impl._M_finish - 2,
> this->_M_impl._M_finish - 1);
> *__position = std::move(__tmp);
imposes no penalty at all (except in an exceptional case which nobody cares about).
On Jul 8, 2012, at 5:10 PM, Nikolay Ivchenkov wrote:
> How soon will the nearest TC come?
I don't think anyone knows at this point. The committee is talking about a 2017 standard (not just a TC).
> std::vector<X> v2;
> v2.emplace(v2.end(), X()); // doesn't compile
> }
This doesn't:
error: call to 'swap' is ambiguous. That error message looks correct to me.
Source: https://groups.google.com/a/isocpp.org/g/std-discussion/c/dhy23mDFXj4
First, before our plasmoid is created, we set our basic counter to -1. In Ruby this automatically means it is created as an integer variable, which can have negative values. The value -1.0 would have created a floating-point variable instead. The @ sign in front makes it available outside this init definition.
To include the connectToEngine() definition in our plasmoid initialization, we call it here with nothing between the brackets, as we have no values to pass along. It's a nice way to keep our init definition clean looking. Since it does not call a Qt API, we could rename it to anything; connect_to_engine() would be more readable.
With the @y variable we will set the paint height of the digits. By painting them visibly at zero and in the next round outside the visible part of the plasmoid at 200 they will appear to blink as we move them around. Plasma however does not cache the rendered SVGs yet so we are still rendering and repainting each round. This is why many plasmoids get slow when you make them really large, this situation can of course improve in the future.
def paintInterface(painter, option, contentsRect)
  puts "ENTER paintInterface, paint height is " + @y.to_s
  @svg.resize(size())
  @svg.paint(painter, 0, 0, "lcd_background")
The paintInterface is another API call.
To see if and when our refresh is actually working we put a string on the terminal output, we also want to know if our counter logic actually works so we tell that to ourselves as well. The puts function is a useful tool to see how well your code is running whenever looking at the plasma repainting is unhelpful. This puts is so fast paced that you may want to comment out the code with a #. Other puts may be kept alive in your own code as it shows you after what point the plasmoid stalls when it unexpectedly does.
The resize function is the signal part of a slot belonging to paintInterface which gets signaled when the plasmoid is being resized. It states "@svg you should resize yourself to the current values of size". No values of size are given as plasma itself updates that variable while the plasmoid is being resized. As we used proper plasma dataEngines our plasmoid gets repainted live while being resized, on decently fast computers that is.
The elements are rendered on top of each other in the listed order and when all are done they are painted as one to the screenbuffer. Try out moving the line of lcd_background further down the list.
....
Source: https://techbase.kde.org/index.php?title=Development/Tutorials/Plasma4/Ruby/Blinker&direction=next&oldid=49697
Turtle is a render engine mostly used for preparing game assets and baking the textures/lights. Although it may be useful for some people, a large part of the industry does not need it.
The problem is, Turtle nodes are very persistent. Once the plugin is activated, it creates a couple of locked nodes which you cannot delete easily. Moreover, if that scene is opened on any other workstation, these nodes force Maya to load the Turtle plugin.
Once the plugin is loaded, it stays loaded until you exit Maya. So even if you close the scene and open a new one, since you have already activated the plugin, it will create those persistent nodes in any other scene opened in that Maya session.
To put it simply, if it is activated once, Turtle nodes spread like a virus in a studio or work group 🙂
There are various ways to get rid of the Turtle. Some of them are permanent.
The code below is a very simple solution to delete the locked Turtle nodes. After deleting the nodes, it unloads the plugin too. Otherwise the plugin will continue to create these nodes on each save/open.
import pymel.core as pm

def killTurtle():
    # Unlock and delete each persistent Turtle node, ignoring missing ones.
    for node in ('TurtleDefaultBakeLayer', 'TurtleBakeLayerManager',
                 'TurtleRenderOptions', 'TurtleUIOptions'):
        try:
            pm.lockNode(node, lock=False)
            pm.delete(node)
        except:
            pass
    pm.unloadPlugin("Turtle.mll")

killTurtle()
Source: https://www.ardakutlu.com/maya-tips-tricks-kill-the-turtle/
I have a project in NetBeans 7.2/Win7 that builds fine without compile errors.
I just built the project in Eclipse Juno 4.2 and got the following error, which went past NetBeans without complaint:
//...
import com.hedgehog.geo.threed.curves.Ray3D;;
//...
It's simply an extra ";" at the end of an import statement.
But this may not be a bug; rather, it is something that slips through NetBeans and is picked up by Eclipse.
Just thought I'd let you know.
True - reading the spec, it really seems semicolons are not allowed between imports. We get the current behaviour from javac - not sure if there will be time to fix that for NetBeans 7.3.
Moving to compiler
Report from old NetBeans version. Due to code changes since it was reported likely not reproducible now. Feel free to reopen if happens in 8.0.2 or 8.1.
Source: https://netbeans.org/bugzilla/show_bug.cgi?id=220952
Non-Primitive Data types (Referenced Data types) in Java
Non-primitive data types are created by programmers. They are not predefined in java like primitive data types. These data types are used to store a group of values or several values.
For example, we take an array. It can store a group of values. Similarly, another example is a class that can store different values. Therefore, these data types are also known as advanced data types in Java.
When we define a variable of a non-primitive data type, it references a memory location where data is stored in heap memory. That is, it refers to the memory where an object is actually placed.
Therefore, a variable of a non-primitive data type is also called a reference data type. An object reference variable (or simply a reference variable) is declared just like we declare a primitive variable:
School sc;
Here, School is the name of a class, and “sc” is the name of a reference variable. No object has yet been created.
We create an object of a class using new keyword. For example, the following statement creates an object of a class School and assigns it to the reference variable “sc”.
sc = new School();
where,
School ➞ name of the class.
sc ➞ Object reference. An object reference is a variable that stores the address of an object in the computer’s memory. An object represents an instance through which we can access members.
School() ➞ Constructor of the class. Consider the following example program. In it, we will get the address of the object as output, which is stored in the object reference variable on the stack memory.
Program source code 1:
package scientecheasy;

public class School {
    // Instance variable.
    String name = "RSVM";

    public static void main(String[] args) {
        // Creating an object of the class.
        School sc = new School(); // sc is a non-primitive type, i.e. an object reference.
        // Print the address of the memory location of the object.
        System.out.println(sc);
        // We cannot access the instance variable directly; we access it
        // through the reference variable sc created above.
        System.out.println(sc.name);
    }
}
Output: scientecheasy.School@1db9742 RSVM
As shown in the above figure, Object reference variable ‘sc’ contains address ‘1db9742’ which is the address of memory location of an object on the heap. On this address, data is stored inside the heap memory.
Creating an object means storing data in memory. So, we can say that “sc” variable does not contain the object. It refers to an object.
Types of Non-primitive Data types in Java
There are five types of non-primitive data types in Java. They are as follows:
1. Class
2. Object
3. String
4. Array
5. Interface
1. Class and objects: Every class is a data type and is also considered a user-defined data type, because a user creates it. For more details: Class and objects in Java.
2. String: A string represents a sequence of characters like India, ABC123, etc. The simplest way to create a string object is by storing a sequence of characters in a string type variable like this:
String str = “Universe”;
Here, string type variable str contains “Universe”. A string is also a class. For more details: String in Java.
3. Array: An array in java is an object which is used to store multiple variables of the same type. These variables can be primitive or non-primitive data type.
The example of declaring an array variable of primitive data type int is as follows:
int [ ] scores;
The example of declaring an array variable of non-primitive data type is
Student [ ] students; // Student is a name of class.
You will learn more details in further tutorials.
4. Interface: An interface is declared like a class but the only difference is that it contains only final variables and method declarations. It is fully abstract class.
Here, we have given just basic knowledge of non-primitive data types in java. You will get more knowledge in further tutorials.
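As a recap, the following sketch (a hypothetical NonPrimitiveDemo class, not from this tutorial) touches each of the non-primitive types above in one program:

```java
public class NonPrimitiveDemo {
    interface Shape { int area(); }            // interface: only method declarations

    static class Square implements Shape {     // class: a user-defined data type
        int side;
        Square(int side) { this.side = side; }
        public int area() { return side * side; }
    }

    public static void main(String[] args) {
        String str = "Universe";               // String object
        int[] scores = {10, 20, 30};           // array of a primitive type
        Shape s = new Square(4);               // reference variable to an object
        System.out.println(str.length());      // prints 8
        System.out.println(scores.length);     // prints 3
        System.out.println(s.area());          // prints 16
    }
}
```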
Difference between Primitive and Non-primitive Data types
Primitive type variables are stored on the stack, whereas for reference types, the stack holds a pointer to the object on the heap.
Final words
Hope that this tutorial has covered almost all the important points related to non-primitive data type in Java with example program. I hope that you will have understood this tutorial and enjoyed it.
Thanks for reading!!!
Source: https://www.scientecheasy.com/2018/06/non-primitive-data-types-in-java.html/
28 January 2011 17:47 [Source: ICIS news]
LONDON (ICIS)--Naphtha oversupply in Europe has pushed down crack spreads and allowed the arbitrage to Asia to open.
With the current low level of buying interest from European petrochemical buyers, naphtha supplies have risen and crack spreads dropped to allow traders to move excess product to Asia.
One trader said that the arbitrage to
It was understood that a number of February loading cargoes destined to
The crack spread on Monday 24 January was around minus $2/bbl (minus €1.46/tonne), dropping during the week to around minus $3.70/bbl on Friday 28 January.
The trader said that at one point the crack spread reached minus $4.10/bbl before recovering.
Naphtha flat prices have dropped from a range of $844-852/tonne CIF (cost, insurance, freight) NWE (northwest Europe).
The reason why the petrochemical industry was not showing interest was because most companies were using their high inventories of naphtha, the trader said.
Moreover, as the alternative feedstock, butane, dropped in value, petrochemical companies were substituting it for naphtha where possible, said another source.
Naphtha sellers, however, found extra demand not only from Asia but also from gasoline blenders, which also allowed the arbitrage to open.
http://www.icis.com/Articles/2011/01/28/9430521/europe-naphtha-oversupply-opens-arb-to-asia.html
|
Dmitry Shachnev (2014-01-18):
Also: from __future__ import print_function
Also: replace except A, b with except A as b.
Dmitry Shachnev Exactly!
python-future does not support Python 3.2 because py32 does not support the u'' unicode literal. They said:
"Adding support for Python 3.2 to future would likely impose a penalty with performance and/or maintainability, ..." (see also the python-future FAQ).
I think we need to decide whether drop Python-3.2 or not. If we decided to drop py32, we may not need to be dependent on six/future.
For that it would be nice to know what python-future offers on top of six, and if it's worth using it. If yes, I'm not opposed to requiring Python 3.3.
Independent changes like those Dmitry suggested can of course be made straight away.
Looking at future, these can be made automatically using "futurize --stage1".
I've just finished porting Pygments to single-source, and I have two observations:
python-future's conversion tool is nice, but otherwise I prefer six: less magic. future even reimplements the "int" type which is definitely too much compatibility.
The u"" literal from Python 3.3+ is not strictly needed; modules with many literals can use
from __future__ import unicode_literals, while others can use a u() wrapper. For Pygments, I did not use
unicode_literals since it pretty much changes all lexers. Therefore I require 3.3+ there.
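As an illustration, a minimal u() wrapper of the kind mentioned above could look like this (a sketch, not the exact helper used by Sphinx or Pygments):

```python
import sys

PY2 = sys.version_info[0] == 2

if PY2:
    def u(s):
        # On Python 2, turn the native byte string into a unicode string,
        # interpreting backslash escapes like \xe9.
        return s.decode('unicode_escape')
else:
    def u(s):
        # On Python 3, str is already unicode, so this is a no-op.
        return s

greeting = u("hello")
```

Each call site then writes u("...") instead of the u"..." literal, keeping one source tree that runs on both major versions.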
I have just created pull request #208 for the first part of refactoring. Please review it carefully, I may have missed some bugs.
I have a question about tests/etree13. Is it still needed? Can we use xml.etree.ElementTree instead (which is available in Python 2.5+)?
If the tests work with it, yes. Probably they will.
Dropping that will be my next pull request, then.
Re etree13: the tests work with Python 2 but not Python 3, I get lots of failures like this one:
https://bitbucket.org/birkenfeld/sphinx/issue/1350/drop-2to3-mechanism
|
Version: 1.9.1 (using KDE 3.5.1, Gentoo)
Compiler: gcc version 3.4.4 (Gentoo 3.4.4-r1, ssp-3.4.4-1.0, pie-8.7.8)
OS: Linux (i686) release 2.6.14-gentoo-r2
I've upgraded KDE, and now I can't select my addressbook from the list when I try to send an email.
When I select a recipient (using the Select button on the right), I only get "All", "Distribution Lists", "Recent Addresses" and "Selected Recipients", and I can't find my addressbook.
I have all the addresses when I open the kaddressbook, and kopete is working nice with it, so it's a kmail bug.
Thanks for attention!
I have exactly the same bug with FC4
is there any way to get a higher priority on this bug???
this makes kmail unusable for me :S
and a patch addressing the problem should come out soon for distro integration, since this version isn't in the stable branch yet.
But i would like to listen the devs opinion.
Thanks for attention!
Bug #121391 is probably a duplicate of this bug. Ok, I can reproduce it, but
there is something really strange with this. Can please somebody try if the
same things happen on his computer?
1st. Make kmail start up to a folder without any messages. (Settings ->
Configure kmail -> Misc -> assign an empty folder)
2nd. Close kmail and restart it again. Press Ctrl-N and select, you should see
all your contacts. Press cancel in the contact windows, and click select again.
All your contacts are present. Cancel again and close the new message.
3rd. Press again Ctrl-N and then select, no contacts anymore :(
4th. Try to change the startup folder to a non-empty folder, and repeat the
same steps, you should not see some contacts anymore if you click select.
Very strange if you ask me. Contrary to what I said in the other bug,
cleaning your profile doesn't work.
I tried closing/reopening kmail and then I had a window of opportunity to access
my address book. Unfortunately, I can't access it anymore after I
opened a "new mail" window.
I didn't reset any of my configuration to reproduce this bug, I only close my
kmail client and save my kaddressbook.
Regards
well I've tried the #3 step, but I don't see the addresses, only ALL, not my
addressbook (personal).
Jan's reproducing scheme tried and working... This is really weird...
The 4th step is not necessary, since the empty folder stays empty, so I just have to quit and reopen kmail for it to work again.
*** This bug has been confirmed by popular vote. ***
I also faced this bug and entered a duplicate, which might contain some more
information (). To me it seems like
this appears any time you open the composer for the second time. This
includes a running systray applet.
I wrote a e-mail to someone and when I began to write his name the
auto-completion show me the complete mail that is actually in my address book.
I still don't see my contact when I push the "Select" button but the
auto-completion seems to it.
Well, I've tried this version with a new user, and after creating some entries
in the addressbook, I have it in kmail, even after renaming the addressbook.
I will now try some things to solve the problem.
It works for a new user only.
I've deleted all my .kde dir and when I configure kmail with my email settings I cannot "Select" the addressbook.
that's really strange, because it should work like it does with the new user.
Another weird thing is the behaviour described by GorDy in #9, which I confirm.
I'm using KMail as the mail component within Kontact and, no matter which of the
above 'workarounds' I try, I cannot get the address book from the Address Book
to show in KMail.
I removed KDE 3.5.1 because of this and went back to 3.5.0. Now after coming
back I see it is 3.5.1 with Kmail 1.9.1 that is the problem.
My contacts are there. Can see them in Kontact and works with 'dir address book' and 'file address book'. Can also see my contacts when pick the Address Book
from within Kmail. Both show that I have created two types (dir and file) and
that 'dir address book' is active and the Proper Contacts I put there are
shown.
What doesn't work is when Compose email and try to select a contact email
addressbook with 'Select' button in the "To:" line with Composer, the ones
listed in original posters list are the only ones shown. (ie there is no
choice for 'dir address book' or 'file address book')
If I pick one of the others I can get my email addresses to show, but they're mixed
in with a bunch of others like: most recent, all, ...
Going to see if can get my old notes out on how to compile debug into KDE
with Gentoo. Doesn't always work and KDE isn't really helpful when it comes
to debugging for the average person. They are trying to integrate all this into
Kontact, so I don't know who the problem is with: Kontact, Kmail or just the
Composer???
I'm on debian testing, using Kontact, Kmail 1.9.1. Composer select button show
all the address and filter selection only the first time you open it. You have
to close and reopen kontact to see it work again. Address Book module of
kontact works normally.
I've tested the procedure in #14 and still no luck.
well some other info that I would like to append to my #11:
It only works for the first time I open the composer, because if I close the composer and try again, the addressbook is missing again.
I have this message on konsole when I click select:
kio_file: WARNING: KLocale: trying to look up "" in catalog. Fix the program
but I don't have kmail compiled with debug option.
Using Kontact with debug enabled and gdb, I got the following output (this is
all that is relevant):
kontact (core): Part activated: 0xaafb20 with stack id. 3
kmail: KMailPart::guiActivateEvent
kontact: KMComposeWin::slotUpdateFont
kmail: KMComposeWin::readConfig
Daniel Watkins
kontact: KMComposeWin::slotUpdateFont
kmail: KMComposeWin::rethinkFields
kmail: KMComposeWin::rethinkFields
kmail: [void KMComposeWin::initAutoSave()]
kmail: + Text/Plain
kmail: ObjectTreeParser::parseObjectTree( node OK, showOnlyOneMimePart: FALSE )
kmail: partNode::findType() is looking at Text/Plain
kontact: RecipientsPicker::insertCollection() All index: 0
kontact: RecipientsPicker::insertCollection() Distribution Lists index: 1
kontact: RecipientsPicker::insertCollection() Recent Addresses index: 2
kontact: RecipientsPicker::insertCollection() Selected Recipients index: 3
Don't know how much this helps (feel free to email me with a better way to
obtain output).
Ya, I recompiled kdepim-3.5.1 in gentoo with USE="debug" and FEATURES="nostrip"
and got pretty much the same thing. Which only really confirms that the app
is only inserting what the original poster said.
ie: I only get "All", "Distribution Lists", "Recent Addresses" and "Selected recipients".
It doesn't look like it is erroring out or can't find the 'dir address book'
or 'file address book', I also noticed that 'resource-name' isn't available
like it used to be. So far I haven't been able to find in the code how it
gets them (the missing ones) even in the versions that work. Mine are there
though and good. I am going back to 3.5.0 right now. Hopefully I can debug
enough to see how it retrieves the missing ones. Going to see if can
add more kdDebug()'s to get a better grip on this. I don't get the feeling
it is going to be fixed soon unless we figure out something.
I have been going through recipientspicker.cpp on 3.5.0 and 3.5.1. Pretty much
the same. They split part of the code in RecipientsPicker::initCollections into
another function, RecipientsPicker::insertAddressBook, and reorganized both a bit.
#include <kabc/stdaddressbook.h> is not included in 3.5.1
They added this to 3.5.1:
{
using namespace KABC;
mAddressBook = KABC::StdAddressBook::self( true );
connect( mAddressBook, SIGNAL( addressBookChanged( AddressBook * ) ),
this, SLOT( insertAddressBook( AddressBook * ) ) );
}
I'm not real good with C/C++ so I haven't been able to fix it yet; I tried a
couple of things, got nothing different once, and the next two times got compile errors.
I'm getting short on time, but found this. Looks interesting, explains how
it works:
Not sure how much help it will be though, doesn't look up to date.
Some other stuff to look at that might help:
Here is the most recent diffs, look like the ones I saw in my sdiff view. You
will see some stuff looks removed, but is just moved. Should be pretty much like
I described above:
Sorry, didn't notice this until posted. From what I see, in 3.5.0 everything
works, then in 3.5.1 some of our address books aren't working correctly.
Looks like the diffs I mention are what was done to fix BUG:
Guessing it caused another bug.
SVN commit 513401 by tilladam:
Fix regression apparently introduced by Bram's fix for #117118, if the
analysis in #121337 is not completely off. Based on a patch by Kanniball
<kanniball@zmail.pt>. Thanks.
BUG: 121337
CCMAIL: kanniball@zmail.pt
CCMAIL: bramschoenmakers@kde.nl
M +2 -0 recipientspicker.cpp
--- branches/KDE/3.5/kdepim/kmail/recipientspicker.cpp #513400:513401
@@ -365,6 +365,8 @@
mAllRecipients->setTitle( i18n("All") );
insertCollection( mAllRecipients );
+ insertAddressBook( mAddressBook );
+
insertDistributionLists();
insertRecentAddresses();
Thanks Till!!
I applied the patch and it works great. I thought it looked like mAddressBook
wasn't being called, but my C/C++ is wanting. So didn't know quite how to do it
or wasn't totally sure it wasn't being called and couldn't see it.
Tried 'file address book' and 'dir address book' and both work when select
a recipient. Also, didn't remember putting one of my Family members in Family
'Category' and it is showing up now as a recipient selection. Maybe it was
there before in kmail 1.9 and overlooked it?
Anyway, working great, Thanks a lot!!
Thanks for the heads up.
Thanks much, adding the patch to Kubuntu :)
[beginner]How do you apply the patch?
I think that's quite hard for a beginner, since you need the source code for
this patch.
If you don't mind waiting for a few weeks, you can also upgrade to KDE 3.5.2
which will be released at the end of March.
well, if you don't know how, you should open a bug report with your distro.
I've opened one in gentoo (which is still open, but I left an ebuild there); I have one reply from kubuntu saying that the patch has been applied.
The best way is to let the distro upgrade it and wait; otherwise you need to read more, because dealing with the source code in some distros is not an easy task.
*** Bug 121139 has been marked as a duplicate of this bug. ***
*** Bug 122167 has been marked as a duplicate of this bug. ***
*** Bug 123184 has been marked as a duplicate of this bug. ***
*** Bug 120532 has been marked as a duplicate of this bug. ***
Thanks for fixing this bug.
However, currently one minor problem seems to exist: the order of the address books is different the first time you open the address selection dialog than in subsequent uses.
http://bugs.kde.org/121337
Vasiliki G. Vrana
Technological Educational Institute of Serres
Serres, Greece
Kostas V. Zafiropoulos
University of Macedonia
Thessaloniki, Greece
and
Despoina N. Karystinaiou
University of Piraeus
Athens, Greece
ABSTRACT
Blogs provide an easy way for an average person to publish material online, sharing in this way a huge
amount of knowledge. Travel and tourism blogs provide a new way of sharing tour experiences with an
international audience and act as virtual forms of networking among travelers. TravelPod.com is the most
popular travel blog. The paper aims at exploring how travelers link to one another. Graph theory indexes are used to
investigate linkage patterns among travelers participating in Travelpod.com.
Keywords: travel blogs; hyperlinks; social networking; links distribution; connectivity
INTRODUCTION
Travel and tourism are two of the most popular subjects in www (Heung, 2003) and blogs have
important implications in this area (Schmallegger & Carson, 2008). Tourism products can hardly be evaluated
prior to their consumption (Rabanser & Ricci, 2005) and depend on accurate and reliable information (Kaldis et
al., 2003) thus elevating the importance of interpersonal influence (Lewis & Chambers 2000). E-word-of-mouth
becomes the most important information source for travel planning (Litvin et al., 2008) mainly because of the
perceived independence of the message source (Akehurst, 2009). Sigala (2007, p.5) mentioned “weblogs have
the power of the impartial information and the e-word-of-mouth that is diffusing online like a virus”.
A number of public travel blog sites have specialized in hosting individual travel blogs. Examples
include travelblog.org, travelpod.com, blog.realtravel.com, yourtraveljournal.com or travelpost.com. Travel
blogs include comments, suggestions, advice, directions, links to related websites hyperlinks, to external
information and links to other travelers. Tourism and travel blogs provide a new way of sharing tour experiences
with an international audience. As all tourism virtual communities, they offer information exchange,
collaboration, knowledge creation purposes and provide value for tourists’ trip planning (Chalkiti & Sigala,
2007). People are also using travel blogs to share experiences with family and friends. Moreover blogs can help
knowledge seeker tourists to get virtual experiences of trips, prepare their trips with confidence and use
subjective data unfiltered and free of marketing bias (Hepburn, 2007) and trend information for their
destinations (Schmallegger & Carson , 2008; Sigala, 2007). Travel blogs provide also geographic information,
as destination websites. However, bloggers provide more authentic information, gained through personal
experience than destination websites which tend to describe only the positive aspects (Sharda & Ponnada, 2007).
Moreover, bloggers trust one another. Kozinets (2002) wrote on this, that people, who interact in spaces like
blogs over a long period of time, trust the opinions of the other users and take them into consideration when
making a purchase decision.
The paper aims at investigating hyperlink connectivity in travel blogs. It uses TravelPod. TravelPod
has been identified as one of the most popular travel blog sites by many researchers (Carson, 2007; Pan et al., 2007;
Schmallegger & Carson, 2008; Wenger; 2007). On November 18th 2008, TravelPod was identified through
Technorati as the 11th among the 100 top blogs having an Authority:9299 and Rank:9, and is the 1st travel blog
in the list. “The discovery of information networks among websites or among site producers through the
analysis of link counts and patterns, and exploration into motivations or contexts for linking, has been a key
issue in this social science literature” (Park & Jankowski 2008, p. 62). The paper aims at exploring hyperlink
connectivity among travelers within TravelPod.
TravelPod was founded in 1997 as the world's original travel blog. It introduces itself as: “TravelPod's free
travel blog lets you chart your trips on a map, share unlimited photos and videos, and stay in touch while you
travel”. Through TravelPod, travelers can: 1. preserve travel memories by uploading photos and videos, charting
trips with travel maps and weaving photos directly into stories; 2. get inspired for the next trip by meeting other
travelers and participating in travel forums; 3. share experiences with family and friends by setting up email
import tools, sending email updates, RSS feeds and email notifications for new entries; 4. use advanced features
such as updating travel blogs from a mobile phone, tracking blog visitors, showing travel blogs on MySpace, Facebook and
other sites, and sending update notifications to Facebook friends and others. In TravelPod each traveler can maintain
many blogs presented at “travelers TravelPod page”. On this page, “Recent Entries”, “Recent Comments”,
“Recent Forum Posts”, “Favorite travelers” and “Others Similar Travelers” are also presented. “Favorite
travelers” is a special form of a blogroll and is a list of travelers that travelers frequently read or admire
particularly. These lists are taken into consideration in this paper in order to investigate connectivity and
communicational patterns between travelers.
BLOGS HYPERLINKING
Drezner & Farrell (2004, p. 5) defined weblogs as “A web page with minimal to no external editing,
providing on-line commentary, periodically updated and presented in reverse chronological order, with
hyperlinks to other online sources”. By definition, blogs link to other sources of information usually to other
blogs. Barger (1997), who first used the term weblog, defined a blog as ‘‘a web page where a blogger ‘logs’ all the
other web pages he finds interesting’’. The most important difference between blogs and more traditional media
is that blogs are networked phenomena that rely on hyperlinks (Drezner & Farrell, 2004). Park & Jankowski (2008, p.
60) claimed “The configuration of link networks themselves can be a source conveying useful overall
information about the (hidden) online relationship of communication networks in interpersonal, inter-
organisational, and international settings”.
Links between blogs take three forms. The first form is that of a “blogroll” that many bloggers
maintain. The blogroll occupies a permanent position on the blog’s home page (Drezner & Farrell, 2004) and is
the list of blogs that the blogger frequently reads or especially admires. “This form evolved early in the
development of the medium both as a type of social acknowledgement and as a navigational tool for readers to
find other authors with similar interests” wrote Marlow (2004, p.3). Blogrolls provide an excellent means of
situating a blogger’s interests and preferences within the blogosphere. “Bloggers are likely to use their blogrolls
to link other blogs that have shared interests” mentioned Drezner & Farrell (2004, p.7). Some bloggers have
exhaustive blogrolls, while others sort theirs by subject area (Waggener Edstrom Worldwide, 2006). Albrecht et
al. (2007, p. 506) referred to this form as “connectedness of weblogs”. The second form is comments.
Comments are “reader-contributed replies to a specific post within the blog” (Marlow, 2004, p.3). In simple
words, comment sections allow others to post their thoughts, comments and questions about a particular topic.
Comments’ system is implemented as a chronologically ordered set of response and is the key form of
information exchange in the blogosphere (Drezner & Farrell, 2004; Mishne & Glance 2006). “Posting volume
would be a key determinant of content value” claimed Lu and Hsiao (2007, p. 346). Finally, there are trackbacks and
pingbacks. Trackback is a citation notification system (Brady, 2005). It enables bloggers to determine when
other bloggers have written another entry of their own that references their original post (Waggener Edstrom
Worldwide 2006). “If both weblogs are enabled with trackback functionality, a reference from a post on weblog
A to another post on weblog B will update the post on B to contain a back-reference to the post on A” (Marlow
2004). A pingback is an automated trackback. “Pingbacks support auto-discovery where the software
automatically finds out the links in a post, and automatically tries to pingback those URLs, while trackbacks
must be done manually by entering the trackback URL that the trackback should be sent to”
(/Introduction_to_Blogging#Pingbacks). Blood (2004, p. 55) mentioned trackbacks
‘‘made these formally invisible connections visible’’.
There are millions of individual blogs, but within any community only a few blogs attract a large
readership (Wagner & Bolloju, 2005). “The vast majority of blogs are probably only read by family and friends,
there are only a few elite blogs which are read by comparably large numbers” wrote Jackson (2006, p.295).
Herring et al. (2004) also claimed that the most discussions of the blogosphere focus on an elite minority of
blogs. These blogs are referred to as the "A-list". "A-list blogs—those that are most widely read, cited in the mass
media, and receive the most inbound links from other blogs—are predominantly filter-type blogs, often with a
political focus. The A-list appears at the core of most characterizations of the blogosphere” wrote Herring et al.
(2005).
Many bloggers desire a wide readership and the most reliable way to gain traffic is through a link on
another weblog. Drezner & Farrell (2004, p.7) mentioned “when one blog links to another, the readers of the
former blog are more likely to read the latter after having clicked on a hyperlink than they would have been
otherwise. If they like what they read, they may even become regular readers of the second blog”. Park and
Jankowski (2008) investigated hyperlinking of citizen blogs in South Korean politics and stated that “If there
is an increasing frequency of neighbour links directly flowing through the blog of a politician, it may indicate
the politician’s role as the online community leader as well as the information hub for the community” (p.64).
METHODOLOGY
The paper considers the Top 100 travelers list, according to the number of visits to their website, for
TravelPod and records links from travelers to other travelers within TravelPod. Next, by using snowball
sampling, links from these travelers to new travelers within TravelPod are recorded. Finally, a set of 563
travelers and their incoming links is formed. The recording of travelers and their hyperlinks was done during
December 2008.
The analysis uses Social Networking theory to present and measures travelers’ connectivity and
communicational patterns. Ucinet 6 for Windows is used for the presentation and the analysis of the network.
Analysis is implemented by studying links between travelers, using the “Favourite travelers list” which is
equivalent to blogroll. In order to construct a network, a 563 by 563 non-symmetric binary data matrix (the
adjacency matrix) is used, where unity is placed in cell ij if traveler i links to traveler j through the favourite
travelers list, otherwise zero is placed in the cell. The next step involves the construction of a travelers’
interconnection network. It is a directed graph where travelers are noted as nodes and incoming links as directed
arrows (the edges). This results in the network presented in Figure 1.
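The construction of the adjacency matrix described above can be sketched in a few lines; the traveler names and favourite lists below are invented for illustration (the actual data set has 563 travelers):

```python
# Each traveler's "favourite travelers" list defines that traveler's outgoing links.
travelers = ["anna", "bill", "chloe"]
favourites = {
    "anna": ["bill"],           # anna lists bill as a favourite traveler
    "bill": ["chloe", "anna"],  # bill lists chloe and anna
    "chloe": [],                # chloe lists nobody
}

index = {name: k for k, name in enumerate(travelers)}
n = len(travelers)

# Non-symmetric binary adjacency matrix: cell (i, j) is 1 if traveler i
# links to traveler j through the favourite travelers list, else 0.
adj = [[0] * n for _ in range(n)]
for src, favs in favourites.items():
    for dst in favs:
        adj[index[src]][index[dst]] = 1
```

Reading down column j of this matrix then gives traveler j's incoming links (indegree), while reading across row i gives traveler i's outgoing links (outdegree).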
Figure 1
Travelers’ Social Network according to incoming links through “favourite travelers”.
The paper reports several graph theoretic indexes in order to describe the specific network’s
connectivity.
DISTRIBUTION OF INCOMING AND OUTGOING LINKS
First, the paper examines the distribution of incoming links within the 563 travelers of the study. This
is the distribution of the number of travelers who link to a specific traveller. For example, if the number of
incoming links equals 10 for a specific traveller, this means that ten travelers consider this specific traveler as
one of their favourite travelers, within Travelpod. Figure 2 describes the distribution of incoming links. Most of
the travelers have a very small number of incoming links, while only a few blogs have a big number of
incoming links. The number of incoming links ranges from zero to 30, with mean value 1.84 and standard
deviation 2.44. Travelers have a very low degree of interconnectivity. An index that can help understand the
distribution is skewness. Skewness of incoming links equals 5.157. This means that the histogram of the
distribution has a long right tail. Most travelers have only few incoming links, while only a few of the travelers
have a bigger number of incoming links. In addition, Figure 2 also presents a scatterplot of the ranks of travelers
according to incoming links vs the actual number of incoming links. Ranks of incoming links are calculated by
placing the largest number first. Here the skewness of the distribution is obvious. Travelers ranked further down
have only a few incoming links.
Figure 3 shows that skewness is also a property of outgoing links. Outgoing links range from
zero to fifty with mean value 1.81 and standard deviation 4.56. Most travelers have only few outgoing links,
while only a few of the travelers have a bigger number of outgoing links (skewness=5.627). Actually, the skewness
of outgoing links is a little larger than that of incoming links. Figures 4 and 5 respectively display
more analytically the distributions of incoming and outgoing links. From Figures 4 and 5 it is clear that more
than half of the travelers have no outgoing links to favourite travelers, while one fourth of the travelers have links
to just one or two favourite travelers. On the other hand, 7.5% of the travelers receive no link, while 61.8%
receive one link and 15% receive two links.
Figure 2
Histogram of travelers’ incoming links (indegrees, left) and scatterplot of travelers’ ranks according to
incoming links vs actual number of incoming links (indegrees, right).
Figure 3
Histogram of travelers’ outgoing links (outdegrees, left) and scatterplot of travelers’ ranks according to
outgoing links vs actual number of outgoing links (outdegrees, right).
Figure 4
A more detailed presentation of the distribution of travelers’ incoming links
incoming links:  0    1    2    3   4   5   6   7   8   9  10  11  12  14  16  30
percentage %:  7.5 61.8 14.9 4.4 4.3 1.8 1.4 0.7 0.4 0.7 0.4 0.4 0.2 0.7 0.4 0.2
Figure 5
A more detailed presentation of the distribution of travelers’ outgoing links
outgoing links:  0   1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   17   18   21   32   34   36   37   50
percentage %:   56  19  6.6  3.7  3.7  2.0  2.1  1.4  0.5  0.9  0.4  0.4  0.4  0.5  0.2  0.7  0.2  0.4  0.2  0.2  0.2  0.2  0.2  0.2
CENTRALITY
The centrality of a node in a network is a measure of the structural importance of the node. A person's
centrality in a social network affects the opportunities and constraints that they face. In this paper, we consider
two forms of centrality: density and betweenness.
DENSITY: Degree is simply the number of nodes that a given node is connected to. In general, the greater a
person's degree, the more potential influence they have on the network, and vice-versa. For example, in a
community network, a person who has more connections can spread information more quickly, and will also be
more likely to hear more stuff. The greater a person's degree, the greater the chance that they will catch
whatever is flowing through the network. The “density” of a binary network is the total number of ties divided
by the total number of possible ties. In the case of TravelPod’s travelers, density (matrix average) equals 0.0033
or 0.33% with standard deviation 0.0627. It becomes obvious that density is extremely low. The whole network
seems to lack interconnections; there exist just a few sparse links between some travelers.
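As a sketch, the density computation just described (ties divided by possible ties) can be written as follows; the 4-node matrix is a toy example, not the paper's 563-traveler data:

```python
def density(adj):
    """Density of a directed binary network: ties / (n * (n - 1))."""
    n = len(adj)
    # Count the 1-entries off the diagonal (self-links are excluded).
    ties = sum(adj[i][j] for i in range(n) for j in range(n) if i != j)
    return ties / (n * (n - 1))

# Toy adjacency matrix with 3 ties among 4 nodes (12 possible ties).
adj = [
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
print(density(adj))  # 3 / 12 = 0.25
```

Applying the same formula to the paper's 563-by-563 matrix yields the 0.33% figure quoted above.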
BETWEENNESS: Loosely speaking, betweenness centrality is defined as the number of geodesic paths that
pass through a node. It is the number of "times" that any node needs to go through a given node to reach any
other node via the shortest path. The node with high betweenness can serve as a liaison between disparate
regions of the network. Betweenness is therefore a measure of the number of times a vertex occurs on a
geodesic. The normalized betweenness centrality is the betweenness divided by the maximum possible
betweenness expressed as a percentage. In this specific travelers’ network mean Betweenness equals 0.275 and
its standard deviation equals 1.307. There is a lot of variation in normalized betweenness, since the standard
deviation is much higher than the mean. This means that some travelers have the property of betweenness,
while others do not. Further analysis of the distribution of normalized betweenness for the 563 travelers
reveals that 40% of them have the property of betweenness, while the remaining 60% do not. Normalized
betweenness is greater than unity for only 6% of the travelers.
The Network Centralization Index can be regarded as a measure of whether the network has central points or
areas with a greater number of paths than usual. The Network Centralization Index equals 18.10%. Despite the big
variance of normalized betweenness, this index is fairly low. In conclusion, the specific network has only a few
central travelers (with regard to their links), and they do not differ significantly from the rest regarding the
property of centrality and the property of linking to others.
COMPONENTS AND CLIQUES
Cliques and components are sets of travelers that are connected in certain ways. Finding cliques is an attempt to
understand whether travelers within Travelpod.com interact as linked friends in a social network. If we
think about Facebook, Twitter and other social media, one major property of them is that people self-organize
into groups of friends or groups with common personal interests and characteristics. We do not claim that the
travelers' network should perform and be organized like Facebook or Twitter, but it is interesting to examine
to what extent this happens, since the travelers are organized within a single platform (Travelpod.com) rather
than maintaining their own individual blogs, and they also share common interests and use the same format for
building their sites.
COMPONENTS: A connected component is a maximal subgraph in which all nodes are reachable from every
other node. In a directed graph, two vertices are in the same weak component if there is a semi-path connecting
them. In this directed network, 28 components were found. Regarding component sizes, one component includes
most of the travelers (535 travelers, or 95%), while 26 components have only one traveler each and one
component has two travelers. Fragmentation, that is, the proportion of travelers that cannot reach each other,
equals 10%. This number is considered small; it means that, in general, a user can navigate through Travelpod
using links to favourite travelers and, with some effort and time, could visit nearly all the travelers. However,
things look a little different when studying cliques.
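The 10% fragmentation figure follows directly from the component sizes: fragmentation is one minus the proportion of ordered node pairs that can reach each other. The JavaScript sketch below (illustrative, not from the paper) reproduces the calculation from the sizes reported above:

```javascript
// Fragmentation: 1 minus the proportion of ordered node pairs that lie
// in the same component (and so can reach each other).
function fragmentation(componentSizes) {
  const n = componentSizes.reduce((a, b) => a + b, 0);
  const reachablePairs = componentSizes.reduce((a, s) => a + s * (s - 1), 0);
  return 1 - reachablePairs / (n * (n - 1));
}

// One component of 535 travelers, one of 2, and 26 isolates (N = 563):
const sizes = [535, 2, ...Array(26).fill(1)];
console.log(fragmentation(sizes).toFixed(2)); // "0.10"
```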
CLIQUES: For the needs of the analysis, the original adjacency matrix is symmetrised; that is, we consider
two travelers connected if either one links to the other. A clique is a maximal complete subgraph. Maximal
means that it is the largest possible such subgraph: no other node anywhere in the graph could be added while
keeping all the nodes in the subgraph connected. A complete graph is a simple graph in which every pair of
distinct nodes (vertices) is connected by an edge.
The number of cliques and the number of travelers who form cliques can be regarded as a measure of
the volume of interconnectivity of the network: it shows how travelers connect to form small groups of friends
or colleagues. Only 81 travelers out of 563 (14.4%) form 70 cliques. One clique consists of four travelers,
while all the others consist of just three; there are no larger cliques.
The line graph in Figure 6 presents how many travelers take part in several cliques at the same time,
a measure of popularity and of interconnection at once. Half of the 81 travelers take part in just one clique
each, and in total one third of them take part in 2, 3, 4 or 5 cliques.
Concluding, we can say that travelers within Travelpod.com are not self-organized in the fashion of
other social media networks; keeping a directory of friends is not a priority for them. It might be a good word
of advice to travelers' social media to offer an upgraded environment that would allow travelers to be informed
of updates from their linked travelers. These updates should be reported to the "friends" connected through
favourite-travelers lists, in the fashion that Facebook and Twitter use to provide status updates of connected
friends and followers.
Figure 6
Distribution of travelers according to incorporation into cliques. Of the 81 travelers who form cliques,
53.1% take part in just one clique; 17.3% in two; 8.6% in three; 4.9% in four; 3.7% in five; 2.5% each in
six, seven and eight; and 1.2% each in nine, ten, eleven and twelve cliques.
CO-CITATION
This section studies the way travelers are co-cited by other travelers, that is, how they are linked
simultaneously by one or more travelers within Travelpod.com. Co-citations are used here in the sense that they
may be indexes of the popularity of travelers, or indexes of the degree to which travelers recognize central,
core groups of other travelers and point to them. This is an important concept, since it deals with the network's
self-realization: the way travelers have a view of what happens in the network, who links to whom, who the
most interesting travelers are, and so on. Consider an example from the study of political blogging. The basic
hypothesis supported by the literature (Drezner and Farrell, 2004) is that within polarized political systems
blogs form clusters around central blogs, which are considered reliable or of the same affiliation. Internet
users who wish to be informed quickly locate the focal points of discussion and, for economy of navigation,
read only the posts on these blogs. Bloggers also locate focal-point blogs and place their posts there along
with a link to their own blog, expecting that readers of the focal-point blogs will also visit theirs. For travel
blogs, the idea may take a different form. Travelers or travel blogs are interconnected through blogrolls or
"favourite travelers" lists, and in this way they may form groups of blogs or travelers which are considered
familiar or most important, while the rest of the blogs or travelers are more isolated (Vrana & Zafiropoulos
2009, Zafiropoulos & Vrana 2008).
Co-citations are calculated by multiplying the transposed adjacency matrix by the original adjacency
matrix. The product is a symmetric matrix whose element in place (i, j) is the number of travelers that link to
both travelers i and j. So, for each traveler a series of 563 numbers is produced, presenting the numbers of
co-citations with every other traveler. Obviously, it is very hard to present all these findings, so for economy
we use just a part of this data set: for each traveler the maximum co-citation with any other traveler is chosen,
and this is presented in Figure 7. It is notable that 60.7% of the travelers are not co-cited with any other
traveler, while 26.1% are co-cited along with just one other traveler. Co-citations are thus rare overall,
although in a few cases travelers are co-cited by as many as 22 others.
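The Aᵀ·A product described above can be sketched directly: in the result, cell (i, j) counts the travelers whose rows contain a 1 in both columns i and j, i.e. who link to both. A small illustrative JavaScript example on a toy matrix (not part of the original analysis):

```javascript
// Co-citations from an adjacency matrix A (A[i][j] = 1 if i links to j):
// the product of the transpose of A with A has, in cell (i, j), the
// number of travelers who link to both i and j.
function coCitations(A) {
  const n = A.length;
  const C = Array.from({ length: n }, () => Array(n).fill(0));
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++)
      for (let k = 0; k < n; k++)
        C[i][j] += A[k][i] * A[k][j]; // traveler k links to both i and j
  return C;
}

// Nodes 0 and 1 both link to nodes 2 and 3:
const A = [
  [0, 0, 1, 1],
  [0, 0, 1, 1],
  [0, 0, 0, 0],
  [0, 0, 0, 0],
];
console.log(coCitations(A)[2][3]); // 2: nodes 2 and 3 are co-cited by two travelers
```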
Figure 7
Distribution of traveler proportions according to co-citations. 60.7% of travelers have a maximum
co-citation of 0 and 26.1% of 1, followed by 5.2% (2), 3.2% (3), 2.3% (4), and 0.4–0.7% each for
maximum co-citations of 5, 6, 8, 13, 15 and 22.
CONCLUSIONS
This paper attempts to present some social networking properties of one of the most popular travel
blogs worldwide; it is likely that other networks of the same kind would present similar, if not the same,
properties. The basic research question this paper strove to answer is whether Travelpod.com presents the
properties that other social media do. Travelpod.com offers a user-friendly environment for travelers to build
their blogs, upload posts and comments and link to other travelers. Although it does offer an advanced
environment, it seems not to exploit the possibility of offering all networking opportunities to its members.
To a certain degree, Travelpod.com operates as a collection of independent blogs, additionally providing a
common and user-friendly environment for travelers to build their sites. While it mostly presents the
characteristics of a community, and additionally offers a forum for discussion, it offers the characteristics
of blogs to a smaller degree.
The Travelpod network presents low density, links are rare, and community characteristics (regarding
hyperlinks to favourite travelers) seem sparse. Although Travelpod.com is one of the top travel communities
and Web 2.0 tourism-travel applications, to a certain degree it can be regarded as a collection of rather
independent websites where travelers post their own stories and information, waiting for others to locate them
without using any social networking self-organization tools. Travelpod.com, and probably other Web 2.0
applications in tourism and travel, provide all the necessary infrastructure for users to act within social
networks, and probably this indeed happens: travelers may be interconnected in an informal way, knowing
other travelers, uploading photos and posts, and commenting on each other. However, they use hyperlinking
only to a small degree. In this way, they miss the opportunity of allowing visitors to locate them, and they do
not provide the means for Travelpod.com itself to advance its infrastructure. Travelpod (and other travel
social media) could use favourite-travelers links to organize information to be offered to groups of friends,
for example by providing updates of friends' status to the members of those groups. Considering the huge
evolution and success of other social media such as Facebook, Twitter and blogs themselves, one should
conclude not that users are unable to operate social network environments, but rather that they are not much
interested in forming formal linkage connections with others, or that it is not expected of them because these
specific social media are constructed under a different rationale: travelers simply expect others to discover
them. Concluding, although travelers' social media provide significant Web 2.0 applications, travelers fail to
exploit one part of the new social media features, the one concerning interlinkages and connectivity, while
they seem very efficient in posting and commenting.
REFERENCES
Akehurst, G. (2009). User generated content: the use of blogs for tourism organisations and tourism consumers
Service Business 3:51-61.
Barger, J. (1997). FAQ: Weblog Resources. Retrieved April, 1, 2009 from Center for History and New Media. digitalhistory /links/ pdf/ chapter1/1.41.pdf [ Accessed the 4th of April 2008,
09:02]
Blood, R. (2004). How blogging software reshapes the online community. Communications of the ACM 47 (12):
53–55.
Brady, M.(2005). Blogging: personal participation in public knowledge building on the web Chimera Working
Paper Number: 2005, 02.-
the-knowledge-society-mb.pdf [Accessed the 31st of March 2008, 07:59]
Carson, D. (2007). The ‘blogosphere’ as a market research tool for tourism destinations: A case study of
Australia’s Northern Territory. Journal of Vacation Marketing, 14(2): 111-119.
Chalkiti, K. & Sigala, M. (2007). Information sharing and idea generation in peer to peer online communities:
The case of ‘DIALOGOI’. Journal of Vacation Marketing, 14(2): 121-132
Drezner, D. & Farrell, H. (2004). The power and politics of blogs. Paper presented at the 2004 Annual Meeting
of the American Political Science Association, Washington, DC, August
/~farrell/ blogpaperfinal.pdf [Accessed the 31st of March 2008, 17:22]
Hepburn, C. (2007). web 2.0 for the tourism and travel industry.
index.php?op=modload&modname=Downloads&action=downloadsviewfile&ctn...el [Accessed the 7th
of January 2009, 00:14]
Herring, S. C., Kouper, I., Scheidt, L. A., & Wright, E. (2004). Women and children last: The discursive
construction of weblogs. In L. Gurak et al. (Eds.), Into the Blogosphere: Rhetoric, Community an
Culture of Weblogs. [Accessed the 12th
of January 2009, 02:15]
Herring, C., Kouper, I., Paolillo, J., Scheidt, L-A., Tyworth, M., Welsch, P., Wright, E., and Yu, N. (2005).
Conversations in the Blogosphere: An Analysis "From the Bottom Up" Proceedings of the Thirty-
Eighth Hawai'i International Conference on System Sciences (HICSS-38). Los Alamitos
Heung, V. C. S.(2003). Internet Usage by International Travellers: Reasons and Barriers. International.
Journal of Contemporary Hospitality Management, 15(7):370-378
Jackson, N. (2006). Dipping their big toe into the blogosphere. The use of weblogs by the political parties in the
2005 general election. Aslib Proceedings: New Information Perspectives, 58(4): 292-303.
Kaldis, K., Boccorh, R., & Buhalis. D.(2003). Technology Enabled Distribution of Hotels. An Investigation of
the Hotel Sector in Athens, Greece. Information and Communication Technologies in Tourism in 2003,
(pp. 280—287) Wien, Springer Verlag
148
Lewis, R.C. and Chambers, R.E. (2000). Marketing Leadership in Hospitality, Foundations and Practices, 3rd
ed. New York: Wiley.
Litvin, S., Goldsmith, R & Pan, B. (2008). Electronic word-of-mouth in hospitality and tourism management.
Tourism management, 29(3): 458-468.
Lu, H-P. & Hsiao, K-L. (2007). Understanding intention to continuously share information on weblogs.
Internet Research, 17(4): 345-361.
Marlow, C. (2004). Audience, structure and authority in the weblog community. In The 54th Annual Conference
of the International Communication Association,
[Accessed the 1st of May 2009, 12:15]
Mishne, G. & Glance, N. (2006). Leave a reply: An analysis of weblog comments. WWW 2006 May 22–26,
2006, Edinburgh, UK /www2006-workshop/papers/wwe2006-
blogcomments.pdf0 [Accessed the 1st of May 2009, 11:22]
Pan, B., MacLaurin, T. and Crotts, J. (2007).Travel Blogs and the Implications for Destination Marketing.
Journal of Travel Research, 46: 35-45.
Rabanser, U. & Ricci, F.(2005). Recommender Systems: Do They Have a Viable Business Model in E-Tourism?
Information and Communication Technologies in Tourism in 2005. (pp. 160-171 Wien, Springer
Verlag.
Park, H-W. and Jankowski, N. (2008). A hyperlink network analysis of citizen blogs in South Korean Politics.
Javnost-the Public, 15(2): 57-74
Sharda, N. and Ponnada, M. (2007). Tourism Blog Visualizer for better tour planning. Journal of Vacation
Marketing, 14(2): 157-167.
Schmallegger, D. & Carson D.(2008). Blogs in tourism: Changing approaches to information exchange. Journal
of Vacation Marketing, 14(2), 99-110.
Sigala, M.(2007). WEB 2.0 in the tourism industry: A new tourism generation and new e-business models,
Ecoclub, 90, 5-8.
Vrana, V. & Zafiropoulos, K. (2009). Exploring conversational patterns in travels blogs. 4th International
Scientific Conference of the University of the Aegean, Planning for the Future - Learning from the Past:
Contemporary Developments in Travel, Tourism Hospitality, Rhodes island, Greece, 3-5 April 2009.
Waggener Edstrom Worldwide (2006). Blogging 101 Understanding the Blogosphere From a Communications
Perspective.-
understanding-the-blogosphere-from-a-communications-persepective.pdf [Accessed the 7th of January
2009, 21:14]
Wagner, C., & Bolloju, N. (2005). Supporting knowledge management in organizations with conversational
technologies: discussion forums, weblogs, and wikis. Journal of Database Management , 16 (2), i–viii.
Wenger, A. (2007). Analysis of travel bloggers’ characteristics and their communication about Austria as a
tourism destination. Journal of Vacation Marketing, 14(2): 169-176.
Zafiropoulos, K. & Vrana, V. (2008). A Social Networking Exploration of Political Blogging in Greece.
Official Proceedings of 1st World Summit on the Knowledge Society, Lytras M. D., Carroll J. M.,
Damiani E., and Tennyson R. D. (Eds). Emerging Technologies and Information Systems for the
Knowledge Society First World Summit, WSKS 2008, Athens, Greece, September 24-26, 2008. Lecture
Notes in Computer Science, LNCS/LNAI 5288 Volume: 573–582, Springer Verlag.
Building React Apps With Storybook
Storybook is a UI explorer that eases the task of testing components during development. In this article, you will learn what Storybook is about and how to use it to build and test React components by building a simple application. We'll start with a basic example that shows how to work with Storybook, then we'll go ahead and create a story for a Table component which will hold students' data.
Storybook is widely used in building live playgrounds and documenting component libraries, as you have the power to change props values and check loading states, amongst other defined functionalities.
You should have basic knowledge of React and the use of NPM before proceeding with this article, as we’ll be building a handful of React components.
Storybook Stories
A story is an exported function that renders a given visual state of a component based on defined test cases. Stories are saved in files with the extension
.stories.js. Here is an example story:
import React from 'react';
import Sample from './x';

export default {
  title: 'Sample story',
  component: Sample
}

export function Story() {
  return (
    <Sample data="sample data" />
  )
}
The good part about Storybook is that it's not much different from how you typically write React components, as you can see from the example above. The difference here is that alongside the Story component, we also export an object which holds the story title and the component the story is meant for.
Starting Out
Let’s start with building the basic example mentioned above. This example will get us familiar with how to create stories and what the stories interface looks like. You’ll start by creating the React application and installing Storybook in it.
From your terminal, run the command below:
# Scaffold a new application.
npx create-react-app table-component
# Navigate into the newly created folder.
cd table-component
# Initialise storybook.
npx -p @storybook/cli sb init
After that, check that the installation was successful by running the following commands:
In one terminal:
yarn start
and in the other:
yarn storybook
You will be greeted by two different screens: the React application and the storybook explorer.
With Storybook installed in our application, you’ll go on to remove the default stories located in the
src/stories folder.
Building A Hello world story
In this section, you’ll write your first story, though not the one for the table component yet. This story explains the concepts of how a story works. Interestingly, you do not need to have React running to work with a story.
Since stories are isolated React functions, you have to define a component for the story first. In the
src folder, create a components folder and a file
Hello.js inside it, with the content below:
import React from 'react';

export default function Hello({name}) {
  return (
    <p>Hello {name}!, this is a simple hello world component</p>
  )
}
This is a component that accepts a
name prop and renders the value of
name alongside some text. Next, you write the story for the component in the
src/stories folder, in a file named
Hello.stories.js:
First, you import React and the Hello component:
import React from 'react';
import Hello from '../components/Hello.js';
Next, you create a default export which is an object containing the story title and component:
export default {
  title: 'Hello Story',
  component: Hello
}
Next, you create your first story:
export function HelloJoe() {
  return (
    <Hello name="Jo Doe" />
  )
}
In the code block above, the function
HelloJoe() is the name of the story, and the body of the function houses the data to be rendered in Storybook. In this story, we render the
Hello component with the name “Jo Doe”.
This is similar to how you would typically render the Hello component if you wanted to make use of it in another component. You can see that we’re passing a value for the
name prop which needs to be rendered in the Hello component.
Your storybook explorer should look like this:
The Hello Joe story is listed under the story title and already rendered. Each story has to be exported to be listed in the storybook.
If you create more stories with the title as Hello Story, they will be listed under the title and clicking on each story renders differently. Let’s create another story:
export function TestUser() {
  return (
    <Hello name="Test User" />
  )
}
Your storybook explorer should contain two stories:
Some components render data conditionally based on the props value passed to them. You will create a component that renders data conditionally and test the conditional rendering in storybook:
In the Hello component file, create a new component:
export function IsLoading({loading}) {
  if (loading) {
    return (
      <p> Currently Loading </p>
    )
  }
  return (
    <p> Here’s your content </p>
  )
}
To test the behaviour of your new component, you will have to create a new story for it. In the previous story file,
Hello.stories.js, create the new stories:
import Hello, { IsLoading } from '../components/Hello';

export function NotLoading() {
  return (
    <IsLoading loading={false} />
  )
}

export function Loading() {
  return (
    <IsLoading loading={true} />
  )
}
The first story’s render differs from the second’s, as expected. Your storybook explorer should look like this:
You have learnt the basics of creating stories and using them. In the next section, you will build, style and test the main component for this article.
Building A Table Component
In this section, you will build a table component, after which you will write a story to test it.
The table component example will serve as a medium for displaying students’ data. The table component will have two headings: names and courses.
First, create a new file
Table.js to house the component in the
src/components folder. Define the table component inside the newly created file:
import React from 'react';

function Table({data}) {
  return ()
}

export default Table
The
Table component takes a prop value,
data. This prop value is an array of objects containing the data of the students in a particular class to be rendered. Let’s write the table body.
In the return parentheses, write the following piece of code:
<table>
  <thead>
    <tr>
      <th>Name</th>
      <th>Registered Course</th>
    </tr>
  </thead>
  <tbody>
    {data}
  </tbody>
</table>
The code above creates a table with two headings, Name and Registered Course. In the table body, the students’ data is rendered. Since objects aren’t valid children in React, you will have to create a helper component to render the individual data.
Just after the Table component, define the helper component. Let’s call it RenderTableData:
function RenderTableData({data}) {
  return (
    <>
      {data.map(student => (
        <tr key={student.name}>
          <td>{student.name}</td>
          <td>{student.course}</td>
        </tr>
      ))}
    </>
  )
}
In the
RenderTableData component above, the data prop, an array of objects, is mapped over and each entry rendered as a table row. With the helper component written, update the
Table component body from:
{data}
to
{data ? (
  <RenderTableData data={data} />
) : (
  <tr>
    <td>No student data available</td>
    <td>No student data available</td>
  </tr>
)}
The new block of code renders the students’ data with the help of the helper component if any data is present; otherwise, it renders “No student data available”.
Before moving on to write a story to test the component, let’s style the table component. Create a stylesheet file,
style.css, in the
components folder:
body {
  font-weight: bold;
}

table {
  border-collapse: collapse;
  width: 100%;
}

table, th, td {
  border: 1px solid rgb(0, 0, 0);
  text-align: left;
}

tr:nth-child(even) {
  background-color: rgb(151, 162, 211);
  color: black;
}

th {
  background-color: rgba(158, 191, 235, 0.925);
  color: white;
}

th, td {
  padding: 15px;
}
With the styling done, import the stylesheet in the component file:
import './style.css'
Next, let’s create two stories to test the behavior of the table component. The first story will have data passed to be rendered and the second won’t.
You can also style the story differently.
In your stories folder, create a new file, Table.stories.js. Begin by importing React and the table component, and defining the story:
import React from 'react';
import Table from '../components/Table';

export default {
  title: 'Table component',
  component: Table
}
With the story defined, create dummy data for the first story:
const data = [
  {name: 'Abdulazeez Abdulazeez', course: 'Water Resources and Environmental Engineering'},
  {name: 'Albert Einstein', course: 'Physics'},
  {name: 'John Doe', course: 'Estate Management'},
  {name: 'Sigismund Freud', course: 'Neurology'},
  {name: 'Leonhard Euler', course: 'Mathematics'},
  {name: 'Ben Carson', course: 'Neurosurgery'}
]
Next, you’ll write the first story, named ShowStudentsData:
export function ShowStudentsData() {
  return (
    <Table data={data} />
  )
}
Next, head to the storybook explorer tab to check the story. Your explorer should look like this:
You have tested the component with data and it renders perfectly. The next story will be to check the behaviour if there’s no data passed.
Just after the first story, write the second story, EmptyData:
export function EmptyData() {
  return (
    <Table />
  )
}
The story above is expected to render “No student data available”. Head to the storybook explorer to confirm that it renders the accurate message. Your storybook explorer should look like this:
In this section, you have written a table component and a story to test the behaviour. In the next section, you’ll be looking at how to edit data in real time in the storybook explorer using the knobs addon.
Addons
Addons in storybook are extra features that are implemented optionally by the user. These extra features are things that might be necessary for your stories. Storybook provides some core addons but, you can install and even build addons to fit your use case such as decorator addons.
“A decorator is a way to wrap a story in extra ‘rendering’ functionality. Many addons define decorators in order to augment your stories with extra rendering or gather details about how your story is rendered.”
— Storybook docs
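The decorator idea can be reduced to plain function wrapping. In the simplified sketch below, string output stands in for rendered JSX, just to show the shape; this is an illustration of the concept, not the Storybook API itself:

```javascript
// A decorator takes a story function and returns its output wrapped in
// extra "rendering" (here plain strings stand in for rendered markup).
const withPadding = (storyFn) => `<div style="padding: 1em">${storyFn()}</div>`;

const tableStory = () => '<Table />';

console.log(withPadding(tableStory));
// <div style="padding: 1em"><Table /></div>
```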
Adding Knobs Addon To Our Table Story
The knobs addon is a decorator addon and one of the most used in Storybook. It enables you to change the values (or props) of components without modifying the story function or the component itself.
In this section, you will add the knobs addon to the application. The knobs addon eases the stress of updating the data in your stories manually, by setting up a new panel in the storybook explorer where you can easily change the data passed. Without knobs, you would have to go back and modify your data manually, which is inefficient and defeats the purpose of Storybook, especially in cases where those who have access to the stories do not have access to modify the data in the code.
The knobs addon doesn’t come installed with storybook, so you will have to install it as an independent package:
yarn add -D @storybook/addon-knobs
Installing the knobs addon requires the Storybook instance to be restarted to take effect, so stop the current instance of Storybook and restart it.
Once the addon has been installed, register it under the addons array in your stories configuration, located in .storybook/main.js:
module.exports = {
  stories: ['../src/**/*.stories.js'],
  addons: [
    '@storybook/preset-create-react-app',
    '@storybook/addon-actions',
    '@storybook/addon-links',
    '@storybook/addon-knobs' // Add the knobs addon.
  ],
};
With the addon registered, you can now go ahead and implement the knobs addon in your table story. The student data is of type object; as a result, you will use the
object type from the
knobs addon.
Import the decorator and the object functions after the previous imports:
import { withKnobs, object } from '@storybook/addon-knobs';
Just after the component field in the default export, add another field:
decorators: [withKnobs]
That is, your story definition object should look like this:
export default {
  title: 'Table component',
  component: Table,
  decorators: [withKnobs]
}
The next step is to modify the Table component in the ShowStudentsData story to allow the use of the object knob:
before:
<Table data={data}/>
after:
<Table data={object('data', data)}/>
The first parameter of the
object function is the name to be displayed in the knobs bar. It can be anything; in this case, you’ll call it data.
In your storybook explorer, the knobs bar is now visible:
You can now add new data, edit existing ones and delete the data without changing the values in the story file directly.
Conclusion
In this article, you learned what Storybook is all about and built a table component to complement the explanations. Now, you should be able to write and test components on the go using Storybook.
Also, the code used in this article can be found in this GitHub repository.
External Links
- “Learn Storybook,” official website
- “Storybook,” official website
- “Introduction to Storybook for React,” Storybook v6.0
- “Supercharge Storybook,” Storybook v6.0
- “Decorators,” Storybook v6.0
All of us use many of the chat applications available in the market every day, but still sometimes wonder how they have been developed. This is a simple private chat application developed in .NET using a web service.
The application (EasyTalk, I named it) mainly targets beginners who may still be hesitant about using web services. The audience will get a taste of how a web service can be easily developed and used in a chat application.
The chat client is developed just like any other chat application using Windows Forms, and the chat service is developed as a .NET web service residing on the web server.
The above attached zip file contains the following projects:
1. Chat Service - This is the web service to be deployed on any web server.
2. Chat Client - This is the chat application through which the user will chat.
3. Chat Setup - The distributable setup package.
4. Chat Testing - This is to test the web service.
The ChatService.asmx in the zip file represents the name of the web service, in which you will see a number of web methods that will be called by the clients using the application.
[WebService(Namespace = "...")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class ChatService : System.Web.Services.WebService
{
static protected ArrayList arrUsers = new ArrayList();
static protected ArrayList arrMessage = new ArrayList();
public ChatService () {
//Uncomment the following line if using designed components
//InitializeComponent();
}
Remember, these methods must always be 'public'; otherwise they won't be available to the outside world. Only a public method can be set as a web method; non-public methods are treated as private to the web service.
Let us walk through some of these methods:
[WebMethod]
public void AddUser(string strUser)
{
arrUsers.Add(strUser);
}
The AddUser() method adds the user who has logged in to the ArrayList maintained in the web service. This way the online user list is always up to date and gives you the list of users in the room at any point in time.
[WebMethod]
public string GetUsers()
{
string strUser = string.Empty;
for (int i = 0; i < arrUsers.Count; i++)
{
strUser = strUser + arrUsers[i].ToString() + "|";
}
return strUser;
}
The GetUsers() method retrieves the list of users who have logged in to the application. This list is readily available from the ArrayList at any time.
[WebMethod]
public void RemoveUser(string strUser)
{
    for (int i = 0; i < arrUsers.Count; i++)
    {
        if (arrUsers[i].ToString() == strUser)
        {
            arrUsers.RemoveAt(i);
            break; // stop once the user has been removed
        }
    }
}
The RemoveUser() method does the opposite: it removes the given name from the ArrayList whenever the user logs out of the application.
[WebMethod]
public void SendMessage(string strFromUser, string strToUser, string strMess)
{
arrMessage.Add(strToUser + ":" + strFromUser + ":" + strMess);
}
The SendMessage() method concatenates the strings coming in as parameters (the username to which the message is sent, the username of the sender, and the actual message) and adds the resulting string to another ArrayList. This ArrayList simply holds all the messages coming from different clients, intended for different users.
[WebMethod]
public string ReceiveMessage(string strUser)
{
string strMess = string.Empty;
for (int i = 0; i < arrMessage.Count; i++)
{
string[] strTo = arrMessage[i].ToString().Split(':');
if (strTo[0].ToString() == strUser)
{
for (int j = 1; j < strTo.Length; j++)
{
strMess = strMess + strTo[j] + ":";
}
arrMessage.RemoveAt(i);
break;
}
}
return strMess;
}
The ReceiveMessage() method does all the tricks. It filters the messages in the ArrayList and delivers each message to its actual recipient:
1. It takes the messages out of the ArrayList one by one.
2. It compares the 'ToUser' name attached to each message with the username from which the method was requested.
3. If the names match, the message is intended for that recipient. It returns the message to that user and removes it from the ArrayList once it has been transferred.
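The routing just described can be sketched outside C# as well. Below is a minimal Python rendering of steps 1 to 3 (the function names are mine, not the article's). One deliberate change: splitting on at most two colons keeps a message intact even if its text contains a colon, which the article's Split(':') would break apart.

```python
# Messages are stored as "to:from:text" strings, mirroring the article's format.
messages = []

def send_message(from_user, to_user, text):
    messages.append(f"{to_user}:{from_user}:{text}")

def receive_message(user):
    # Return (and remove) the first message addressed to `user`.
    for i, entry in enumerate(messages):
        to_user, from_user, text = entry.split(':', 2)
        if to_user == user:
            del messages[i]
            return f"{from_user}:{text}"
    return ''

send_message('alice', 'bob', 'hi bob')
send_message('carol', 'alice', 'hello')
print(receive_message('bob'))    # → alice:hi bob
print(receive_message('alice'))  # → carol:hello
```

A real multi-user service would also need locking around the shared list, since concurrent web requests can call these methods at the same time.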
The Chat Client is developed using the Windows Forms. You will see two forms by the name Form1 for showing the online user list and PrivateMessage for the chat window.
In this solution the web service has been added as a web reference to the project. Whenever you add a web service deployed on one of your servers, the client application registers that path to the server.
Form1 is the interface that populates the list of online users logged into the EasyTalk application. There is no login form: the username is taken automatically from the Windows login name. The same name is displayed in the list, and clicking it opens a one-to-one conversation window to chat with that person. A timer attached to this form calls the GetUsers() web method every 2 seconds to update the list of online users.
1. Whenever any user logs into the application, a notification window pops up from the right corner of the screen on every online user's machine, showing the name of the new online user. [screenshot below]
2. Whenever the user minimizes the chat window, new messages start arriving in popup windows from the right corner of the screen, like the one above, as in gtalk.
3. If the user prefers not to get messages in a popup window, he can select the checkbox at the bottom of the main window named "Stop Messege Alert service".
The other form is PrivateMessage, which lets the user talk to the other person. A timer attached to this form calls the ReceiveMessage() method to check whether any new message has arrived from the other side. So if a user has three windows open, each window looks for its own messages through the timer.
In the application, the notifyIcon control is also used so that the user can keep the application running in the system tray, with a context menu attached to it.
One catchy thing used in this application is making the chat window blink whenever a new message arrives. The blinking continues until the user clicks the window to read the message. This is done using the Windows API.
[DllImport("user32.dll")]
static extern Int32 FlashWindowEx(ref FLASHWINFO pwfi);
There is a configuration window to change the server name/IP address. If the web service is deployed on a different chat server, only the new IP address needs to be supplied to the application through the configuration window.
This is just one way of creating a very simple chat application using a web service; you could take a different approach. You can also proceed further with this application by adding more functionality like public chatting, conferences, etc.
Happy Chatting!!!!!!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Christoph Hellwig wrote:
> On Fri, Sep 06, 2002 at 06:15:53PM -0400, Karim Yaghmour wrote:
> > +endif
> > +
> > +ifdef CONFIG_TRACE
> > +obj-y += trace.o
> > endif
>
> Please try to understand 2.4/2.5-style Makefile first.

Sure, I've been trying to replace this statement for a while, but I haven't found an equivalent elsewhere (though that doesn't mean I haven't missed it; in which case I'd gladly be shown wrong). The problem is that whether you set tracing as 'y' or 'm', the core infrastructure has to be 'y'. The alternative is to add a "core infrastructure" configuration item that can only be 'y' or 'n'. This, however, implies the possibility of having the infrastructure without the driver. I chose not to go down that path because of criticism in the likes of your other e-mail:

> Umm, after LSM we get another bunch of totally undefined hooks??

"undefined" is really a stretch. All trace statements are clearly marked as "TRACE_XYZ". Also, contrary to LSM, trace statements don't influence the kernel's decisions in any way. For all practical purposes, they only act as informational markers. In other words, their use to tap into the kernel's behavior is limited, to say the least.

> > +/* Structure packing within the trace */
> > +#if LTT_UNPACKED_STRUCTS
> > +#define LTT_PACKED_STRUCT
> > +#else /* if LTT_UNPACKED_STRUCTS */
> > +#define LTT_PACKED_STRUCT __attribute__ ((packed))
> > +#endif /* if LTT_UNPACKED_STRUCTS */
>
> I can't see anything defining LTT_UNPACKED_STRUCTS in this patch.

True, and it's done on purpose to avoid having people setting this unless they really know what they're doing. But if it is preferable, then adding a "#define LTT_UNPACKED_STRUCTS 0" is not a problem. Any preference?

> > +int unregister_tracer
> > + (tracer_call /* The tracer function */ );
>
> Did you ever read Documentation/CodingStyle?

Yes I did. It just hasn't sunk in totally yet because old habits die hard. So help me out here, which part of it am I violating from your point of view:
- Commenting
- Indenting
- Formatting
- ?

I'm known for having heavily commented code (because that's the way I like code), so I'd guess it's the first. If it is, then deleting some of it isn't really a problem.

> It would be helpful if you explain what exactly this patch does, btw.
> It's not really obvious from the patch.

I assumed people are familiar with what LTT (the Linux Trace Toolkit) is. So here's for those who haven't heard about LTT before:

The core tracing infrastructure serves as the main rallying point for all the tracing activity in the kernel. (Tracing here isn't meant in the ptrace sense, but in the sense of recording key kernel events along with a time-stamp in order to reconstruct the system's behavior post-mortem.) Whether the trace driver (which buffers the data collected and provides it to the user-space trace daemon via a char dev) is loaded or not, the kernel sees a unique tracing function: trace_event(). Basically, this provides a trace driver register/unregister service. When a trace driver registers, it is forwarded all the events generated by the kernel. If no trace driver is registered, then the events go nowhere.

In addition to these basic services, this patch allows kernel modules to allocate and trace their own custom events. Hence, a driver can create its own set of events and log them as part of the kernel trace. Many existing drivers who go a long way in writing their own trace driver and implementing their own tracing mechanism should actually be using this custom event creation interface. And again, whether the trace driver is active or even present makes little difference for the users of the kernel's tracing infrastructure.
Boyer Moore Algorithm in Java
In this tutorial, you will learn what is Boyer Moore Algorithm, what is it used for, and how to implement it in Java Programming Language.
What is the Boyer Moore Algorithm
It is an efficient string-searching algorithm used to look for specific patterns in a string or a text file. It is named after Robert Boyer and J Strother Moore, who published it in 1977. The algorithm gathers information during a preprocessing step so that it can skip sections of the text, reducing the constant factor. It runs faster as the pattern length increases, and it is often considered the benchmark for string-searching algorithms.
Java Program: Boyer Moore Algorithm
Let us see the program for the algorithm
package javaapplication18;

public class JavaApplication18 {

    static int NO_OF_CHARS = 256;

    static int max(int a, int b) {
        return (a > b) ? a : b;
    }

    // Fill the bad-character table: last index of each character in the pattern
    static void badCharHeuristic(char[] str, int size, int badchar[]) {
        int i;
        for (i = 0; i < NO_OF_CHARS; i++)
            badchar[i] = -1;
        for (i = 0; i < size; i++)
            badchar[(int) str[i]] = i;
    }

    static void search(char txt[], char pat[]) {
        int m = pat.length;
        int n = txt.length;
        int badchar[] = new int[NO_OF_CHARS];
        badCharHeuristic(pat, m, badchar);
        int s = 0; // shift of the pattern with respect to the text
        while (s <= (n - m)) {
            int j = m - 1;
            while (j >= 0 && pat[j] == txt[s + j])
                j--;
            if (j < 0) {
                System.out.println("Patterns occur at shift = " + s);
                s += (s + m < n) ? m - badchar[txt[s + m]] : 1;
            } else {
                s += max(1, j - badchar[txt[s + j]]);
            }
        }
    }

    public static void main(String[] args) {
        char txt[] = "123651266512".toCharArray();
        char pat[] = "12".toCharArray();
        search(txt, pat);
    }
}
Output:
Patterns occur at shift = 0
Patterns occur at shift = 5
Patterns occur at shift = 10
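For readers who prefer to experiment interactively, here is a paraphrase of the same program in Python; it is a sketch of the bad-character heuristic above, not library code.

```python
def bad_char_table(pattern):
    # Record the last index at which each character occurs in the pattern.
    return {ch: i for i, ch in enumerate(pattern)}

def boyer_moore_search(text, pattern):
    table = bad_char_table(pattern)
    m, n = len(pattern), len(text)
    shifts = []
    s = 0  # shift of the pattern with respect to the text
    while s <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            shifts.append(s)  # full match at this shift
            s += m - table.get(text[s + m], -1) if s + m < n else 1
        else:
            s += max(1, j - table.get(text[s + j], -1))
    return shifts

print(boyer_moore_search("123651266512", "12"))  # → [0, 5, 10]
```

The shifts match the Java program's output above.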
Also read: What is CountDownLatch in Java
Example program illustrating use of the ns3::Ptr smart pointer.
#include "ns3/ptr.h"
#include "ns3/object.h"
#include "ns3/command-line.h"
#include <iostream>
Example program illustrating use of the ns3::Ptr smart pointer.
Definition in file main-ptr.cc.
Set g_ptr to NULL.
Definition at line 87 of file main-ptr.cc.
Example Ptr manipulations.
This function stores its argument in the global variable g_ptr and returns the old value of g_ptr.
Definition at line 76 of file main-ptr.cc.
Example Ptr global variable.
Definition at line 65 of file main-ptr.cc.
Referenced by ClearPtr(), and StorePtr().
In Doodle: Part 1, I introduced the Doodle application; a program for editing drawings. In that article the application was little more than a graphics canvas and an input device. In this article, I add an application framework complete with a document-view architecture.
The document-view model is a powerful application-programming technique that separates the document data (the information that will be saved away in a file or database somewhere) from the view (the visual representation of the data that the user can interact with).
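The split can be illustrated with a toy example (a generic sketch of the pattern, not Doodle's actual code): the document owns the data and notifies every attached view when it changes.

```python
# Minimal document-view sketch: the document holds data; views render it.
class Doc:
    def __init__(self):
        self.shapes = []
        self.views = []

    def add_shape(self, shape):
        self.shapes.append(shape)
        for view in self.views:  # tell every view the data changed
            view.update(self)

class TextView:
    def __init__(self):
        self.last = None

    def update(self, doc):
        # A "view" here just renders the document as a summary string.
        self.last = f"{len(doc.shapes)} shape(s)"

doc = Doc()
view = TextView()
doc.views.append(view)
doc.add_shape("line")
print(view.last)  # → 1 shape(s)
```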
At first glance, the demo above looks similar to the demo from the first article, but underneath this user interface things have changed.
The most significant change is the introduction of a Doodle instance where previously just the Canvas was created. The Doodle instance controls all aspects of the application, including communication between the document and views.
The Doodle constructor creates a Doodle.Doc instance and one view to the document - the Doodle.Canvas. Notice how the document and view classes Doodle.Doc and Doodle.Canvas have been nested within the Doodle class. What's happening here is that the Doodle class has two extra properties called 'Doc' and 'Canvas' that just happen to also be JavaScript classes. This technique allows all of the Doodle code to be isolated within a namespace called 'Doodle' and won't interfere or interact with other code. This has great advantages if you want to host two or more applications on the same Web page. If two applications running on the same Web page were to use global scope (i.e. the window object) for their variables and classes, it's likely that they would clash over a number of those variable or class names.
At this point, each component has yet to be initialised so the newDocument() function must be called before the application is ready.
Created: March 27, 2003
Revised: Sept 27, 2006
QML States Controlling
Introduction
QML offers very powerful State elements to model states and dynamic behaviors. We refer to a state as a collection of parameters describing an entity at a given moment. The behavior of the entity can be described as a sequence of changing states. This model, known as a Finite State Machine (FSM), is widely used in many domains, including computer science. For a painless introduction see here.
In this article we analyze a more complicated state machine model, whose states can themselves be state machines. We begin by summarizing the QML state constructs and some techniques for controlling them that the implementation needs. In the main part of the article we discuss a QML implementation of the considered FSM model. The analysis also highlights the QML State/Transition elements.

The simplest FSM model can be viewed as a sequencer. Starting from an initial state, the states are navigated in a linear manner, one by one in a predefined order. The state transition diagram is illustrated below:

The FSM model we are going to implement supposes that some of the states have branches:
Entering such a state activates its internal state machine. On completing this inner FSM we return to the next state of the FSM at the global level. This model can be referred to as a sequencer with nested FSMs.
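The traversal order this model implies can be sketched in a few lines of Python before we turn to QML (the state names here are placeholders):

```python
# A sequencer over states where some entries are nested state machines.
# Nested machines are sub-lists, as in the QML stateOrder property later on.
states = [["s1"], ["s21", "s22"], ["s3"], ["s41", "s42", "s43"], ["s5"]]

def run(states):
    visited = []
    for branch in states:       # the outer machine
        for state in branch:    # the inner machine (length 1 if no branch)
            visited.append(state)
    return visited

print(run(states))
# → ['s1', 's21', 's22', 's3', 's41', 's42', 's43', 's5']
```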
QML States
More States Definitions
In QML each state is identified by its name, which is of String type. Having several states defined, we could store their names in a variant property like that:
import QtQuick 1.1

Rectangle {
    id: top1
    width: 100
    height: 100
    color: "red"

    property variant statesNames: ["state1", "state2"]
    property int counter: 0

    states: [
        State {
            name: "state1"
            PropertyChanges { target: top1; color: "pink" }
        },
        State {
            name: "state2"
            PropertyChanges { target: top1; color: "yellow" }
        }
    ]

    Timer {
        id: zen
        interval: 2000; running: true; repeat: true
        onTriggered: {
            if (counter < 2) {
                top1.state = statesNames[counter]
                counter = counter + 1
            } else {
                Qt.quit()
            }
        }
    }
}
If we have actions associated with a state we could use StateChangeScript element, which offers a script block.
Nested variant Types
As we know, variant type acts as a list. Now suppose that an element of a variant property is also of variant type:
Rectangle {
    width: 360
    height: 360

    property variant nestedStrings: ["first", "second"]
    property variant sequence: ["element1", nestedStrings, "element3"]

    Component.onCompleted: {
        console.log(sequence[2])
        console.log("nested elements", sequence[1][0])
    }
}
Further, we could define in a variant list definition that an element is also a list using square brackets like that:
property variant listIntoList: [ ["string1"], ["string21", "string22"], ["string3"] ]
The nested list elements are accessed this way:
listIntoList[1][1]
FSM Model Implementation
We are considering a FSM that has 5 states. The states 1, 3 and 5 have no internal states. The states 2 and 4 have internal states – 5 states each.
Definition of States
The states set is defined in a QML element (e.g. Rectangle) starting from first state (its branch if any), second state (its branch if any), etc. The states are members of QML states property. Note that states names are global and visible in the rectangle scope. The goal of this definition is to introduce the states identifiers and actions associated with states (use StateChangeScript element).
Ordering of States
The states names are arranged in a variant type property following the next rules:
- If a state has a branch it is defined in the list as a nested list.
- If a state has no branch it is defined in the list as a nested list with one element only.
- All states (on global level as well as nested ones) are accessed this way:
Property_name [][]
Where the first index controls global states and the second one controls states in the corresponding branch (if any).
property variant stateOrder: [
    [state1],
    [state21, state22],
    [state3],
    [state41, state42, state43],
    [state5]
]

stateOrder[1][1] // refers to state22
Implementation Details
The demo code is available here. The code is not optimized, in order to make the basic ideas easier to explain. A fragment of the implementation follows:
Rectangle {
    id: top1
    width: 500
    height: 500
    color: "#f9f0e7"

    property bool branch          // Controls if a state has a branch
    property int currentIndex: 0  // Current index for outer loop
    property int innerCounter: 0  // Current index for inner (branch) loop

    // statesParameters property holds two parameters for each state:
    // a bool value if a state has or has no branch and the number
    // of states in a branch
    property variant statesParameters: [
        [true, 5], [false, 1], [true, 5], [false, 1], [true, 5]
    ]

    // Images could be stored in a list
    property variant imagesList: [
        [
            "kiparisi/kip1.jpg",
            "kiparisi/piramidalen.jpg",
            "kiparisi/spiral.jpg",
            "kiparisi/tuya.jpg",
            "kiparisi/septe.jpg"
        ]
    ]

    // A rectangle that contains explanatory text is added
    Rectangle {
        id: frame
        x: 60; y: 60
        width: 350
        height: 30
        color: "white"
        Rectangle {
            Text {
                id: literal
                text: "This is the initial state. A timer generates state transitions."
            }
        }
    }

    Rectangle {
        x: 180; y: 180
        Image { id: picture; source: "Qt_logo.jpg" }
    }

    property variant stateNames: [
        ["state11", "state12", "state13", "state14", "state15"],
        ["state2"],
        ["state31", "state32", "state33", "state34", "state35"],
        ["state4"],
        ["state51", "state52", "state53", "state54", "state55"]
    ]
    property int counter: 0

    Timer {
        id: zen
        interval: 2000
        running: true
        repeat: true
        onTriggered: {
            if (counter < 5)
                branch = statesParameters[counter][0];
            else
                Qt.quit();
            if (branch == false) {
                innerCounter = 0;
                top1.state = stateNames[counter][innerCounter];
                counter = counter + 1;
                currentIndex = counter;
            } else {
                if (innerCounter < statesParameters[counter][1]) {
                    top1.state = stateNames[counter][innerCounter];
                    innerCounter = innerCounter + 1;
                } else {
                    counter = currentIndex + 1;
                    if (counter >= 5)
                        Qt.quit();
                }
            }
        }
    }

    states: [
        State {
            name: "state11"
            PropertyChanges { target: top1; color: "Snow" }
            StateChangeScript {
                name: "stateScript11"
                script: {
                    literal.text = "Cupressaceae - State 11"
                    picture.source = imagesList[0][2]
                }
            }
        },
        State {
            name: "state12"
            PropertyChanges { target: top1; color: "Azure" }
            StateChangeScript {
                name: "stateScript12"
                script: {
                    literal.text = "Cupressaceae - State 12"
                    picture.source = imagesList[0][0]
                }
            }
        },
        …
        State {
            name: "state55"
            PropertyChanges { target: top1; color: "PeachPuff" }
            StateChangeScript {
                name: "stateScript55"
                script: {
                    literal.text = "Roses - State55"
                    picture.source = "roses/katerach.jpg"
                }
            }
        }
    ]
}
A Timer element is used to initiate the transition from the current state to the next one. Each state is represented visually by different images. The actions performed for each state are included in a StateChangesScript element block. There are three types of actions – changing the color of the frame containing the images, changing the images and altering the explanatory text in the upper text box.
The states are visited one by one. You may change the Timer interval property to control the rate at which the images are rendered. Each transition is implemented by changing the state property of the QML state model.
|
https://wiki.qt.io/index.php?title=QML_States_Controlling&printable=yes
|
CC-MAIN-2020-10
|
refinedweb
| 1,094
| 52.49
|
Royale API Types
Type definitions for the Royale API
Installation
You can install this package using a package manager like npm:
npm install royale-api-types
Note: This package is based on the latest Node.js LTS version. It may work with older versions, but it is not guaranteed.
Description
This package provides type definitions for the Royale API for use with TypeScript. It also includes all routes from the API that can be used in JavaScript.
Usage
All types from the API are exported as API*:

import type { APIPlayer } from "royale-api-types";

const player: APIPlayer = {
    tag: "#22RJCYLUY",
    name: "D Trombett",
    // ...
};

import type { APIItem } from "royale-api-types";

// Type '{ name: string; id: number; }' is missing the following properties from type 'APIItem': iconUrls, maxLevel
const card: APIItem = {
    name: "Giant",
    id: 1,
};
You can also use the Routes interface to access the routes.
Note: Tags should be encoded when sending a request to the API.
import { Routes } from "royale-api-types";

console.log(Routes.Clans()); // "/clans"
console.log(Routes.Clan("#L2Y2L2PC")); // "/clans/#L2Y2L2PC"
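The Routes helpers return raw paths, so the '#' in a tag still has to be percent-encoded before the request is sent; the note above leaves that step to you. In Python, for instance, the escaping looks like this (any URL-escaping helper in your language does the same job):

```python
from urllib.parse import quote

tag = "#L2Y2L2PC"
# '#' starts a URL fragment, so it must become %23 or the server never sees the tag.
print("/clans/" + quote(tag))  # → /clans/%23L2Y2L2PC
```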
Types are documented by Clash Royale's API documentation. We do our best to keep the types up to date, but we would appreciate any contributions.
This content is not affiliated with, endorsed, sponsored, or specifically approved by Supercell and Supercell is not responsible for it. For more information see Supercell’s Fan Content Policy.
|
https://www.npmjs.com/package/royale-api-types
|
CC-MAIN-2022-40
|
refinedweb
| 229
| 56.25
|
Conditional Branching Statements
While methods branch unconditionally, often you will want to branch within a method depending on a condition that you evaluate while the program is running. This is known as conditional branching. Conditional branching statements allow you to write logic such as “If you are over 25 years old, then you may rent a car.”
C# provides a number of constructs that allow you to write conditional branches into your programs; these constructs are described in the following sections.
if Statements
The simplest branching statement is if. An if statement says: "if a particular condition is true, then execute the statement; otherwise skip it." The condition is a Boolean expression. An expression is a statement that evaluates to a value, and a Boolean expression evaluates to either true or false.

The formal description of an if statement is:

if (expression)
    Statement1

This is the kind of description of the if statement you are likely to find in your compiler documentation. It shows you that the if statement takes an expression (a statement that returns a value) in parentheses, and executes Statement1 if the expression evaluates true. Note that Statement1 can actually be a block of statements within braces, as illustrated in Example 6-2.
Tip
Anywhere in C# that you are expected to provide a statement, you can instead provide a block of statements within braces. (See the sidebar Brace Styles later in this chapter.)
Example 6-2. The if statement
using System; ...
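Since Example 6-2 is truncated in this excerpt, here is a minimal runnable analogue of the same construct in Python (Python marks the statement block with indentation rather than braces; the rental-age rule is the one from the chapter's opening example):

```python
def can_rent_car(age):
    # if statement: run the indented block only when the condition is true.
    if age > 25:
        return "may rent"
    return "may not rent"

print(can_rent_car(30))  # → may rent
print(can_rent_car(20))  # → may not rent
```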
(One of my summaries of a talk at the 2018 european djangocon.)
SQL. The “language to talk to databases in a structured way”.
The ORM. Object Relational Mapper. The magic that makes it all work in django (even though there’s no magic in there).
The talk is about the experience of experienced programmers that, for the first time, have to dive into a django project. She used (“unicodex”) as an example.
So. There’s a missing icon on the sample page. You have to debug/fix that as an experienced-programmer-without-django-experience. You’re used to SQL, but not to the django ORM.
You get a tip "use the shell". Which shell? "The c shell? bash?". No, they mean manage.py shell. With a bit of _meta and some copy/paste you can get a list of the available models.

You could do the same with manage.py dbshell, which would dump you in the sql shell. List the databases and you get the same answer. Pick a promising table and do a select * from unicodex_codepoint.
You can do the same in the django shell with:
from unicodex.models import Codepoint

Codepoint.objects.all()
The rest of the presentation was a nice combination of showing what happens in SQL and what happens when you use the ORM.
Once you get to the double underscores, for following relations and field lookups, the ORM starts to get more useful and easier than raw SQL:
Design.objects.filter(vendorversion__vendor__name__contains='micro')
It was a fun presentation. I can’t really do it justice in a textual summary: you should go and look at the video. It is much more convincing that way. Yeah, the video is already online:
There's a whole part about the Q() object and the magic ways in which you can combine it with ~, & and |.
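The idea behind that composition can be sketched without Django at all: Q-style objects are just predicates that overload ~, & and |. The toy class below illustrates the mechanism only; Django's real Q builds SQL WHERE clauses instead of calling Python functions.

```python
# Toy predicate objects that compose with ~, & and | like Django's Q.
class Q:
    def __init__(self, func):
        self.func = func

    def __call__(self, obj):
        return self.func(obj)

    def __and__(self, other):
        return Q(lambda obj: self(obj) and other(obj))

    def __or__(self, other):
        return Q(lambda obj: self(obj) or other(obj))

    def __invert__(self):
        return Q(lambda obj: not self(obj))

contains_micro = Q(lambda name: 'micro' in name)
is_short = Q(lambda name: len(name) < 10)

query = contains_micro & ~is_short
print(query('microsoft windows'))  # → True
print(query('micro'))              # → False
```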
How would it translate to SQL? You can show the latest SQL query:
from django.db import connection

connection.queries[-1]
At the end of the presentation, she went back to the original use case and started bug hunting with the ORM. In the end it was a bug (a unicode 'bug' character) at the end of a filename :-)
|
https://reinout.vanrees.org/weblog/2018/05/24/03-orm.html
|
CC-MAIN-2022-21
|
refinedweb
| 364
| 76.82
|
On 11/23/2020 3:44 PM, David Mertz wrote:
I just commented on Steve's post over on Discourse. The problem with this is that the called function (m.case, here) needs to have access to the caller's namespace in order to resolve the expressions, such as StringNode and PairNone. This is one of the reasons f-strings weren't implemented as a function, and is also the source of many headaches with string type annotations.
My conclusion is that if you want something that operates on DSLs (especially ones that can't be evaluated as expressions), the compiler is going to need to know about it somehow so it can help you with it. I wish there were a general-purpose mechanism for this. Maybe it's PEP 638, although I haven't really investigated it much, and pattern matching might be a bad fit for it.
Eric
Take 40% off Tiny Python Projects by entering fccclark into the discount code box at checkout at manning.com.
It's not easy to create passwords which are both difficult to guess and easy to remember. An XKCD comic () describes an algorithm that provides both security and recall by suggesting that a password be composed of "four random common words." For instance, the comic suggests that the password composed of the words "correct," "horse," "battery," and "staple" provides "~44 bits of entropy," which would take around 550 years to guess at 1,000 guesses per second.
We're going to write a program called password.py that generates these passwords by randomly combining the words from some input files. Many computers have a file that lists thousands of English words, each on a separate line. On most of my systems, I can find this at /usr/share/dict/words, and it contains over 235,000 words! As the file can vary by system, I've added a version to the repo so that we can use the same file. This file is a little large, so I've compressed it to inputs/words.txt.zip. You should unzip it before using it:
$ unzip inputs/words.txt.zip
Now we should both have the same inputs/words.txt file so that this is reproducible for you:
$ ./password.py ../inputs/words.txt --seed 14
CrotalLeavesMeeredLogy
NatalBurrelTizzyOddman
UnbornSignerShodDehort
Well, OK, maybe those aren't going to be the easiest to remember. Perhaps instead we should be a bit more judicious about the source of our words. The passwords above were created from the default word dictionary /usr/share/dict/words on my system (which I've included in the GitHub repo as inputs/words.zip). This dictionary lists over 235,000 words from the English language. The average speaker tends to use a small fraction of that, somewhere between 20,000 and 40,000 words.
We can generate more memorable words by drawing from a piece of English text such as the US Constitution. Note that to use a piece of input text in this way, we need to remove all punctuation. We also ignore shorter words with fewer than four characters:
$ ./password.py --seed 8 ../inputs/const.txt
DulyHasHeadsCases
DebtSevenAnswerBest
ChosenEmitTitleMost
Another strategy for generating memorable words could be to limit the pool of words to more interesting parts of speech like nouns, verbs, and adjectives taken from texts like novels or poetry. I've included a program I wrote called harvest.py that uses a Natural Language Processing library in Python called "spaCy" () to extract those parts of speech into files that we can use as input to our program. I ran the harvest.py program on some texts and placed the outputs into directories in the GitHub repo.
Here’s the output from the nouns from the US Constitution:
$ ./password.py --seed 5 const/nouns.txt
TaxFourthYearList
TrialYearThingPerson
AidOrdainFifthThing
Here are passwords generated from The Scarlet Letter by Nathaniel Hawthorne:
$ ./password.py --seed 1 scarlet/verbs.txt
CrySpeakBringHold
CouldSeeReplyRun
WearMeanGazeCast
And here are some generated from William Shakespeare’s sonnets:
$ ./password.py --seed 2 sonnets/adjs.txt
BoldCostlyColdPale
FineMaskedKeenGreen
BarrenWiltFemaleSeldom
If this isn't a strong enough password, we also provide a --l33t flag to further obfuscate the text by:

- Passing the generated password through the ransom.py algorithm
- Substituting various characters with a given table
- Adding a randomly selected punctuation character to the end
Here’s what the Shakespearean passwords look like with this encoding:
$ ./password.py --seed 2 sonnets/adjs.txt --l33t
[email protected],
f1n3M45K3dK3eNGR33N[
B4rReNW1LTFeM4l3seldoM/
In this exercise, you’ll:
- Take an optional list of input files as positional arguments.
- Use a regular expression to remove non-word characters.
- Filter words by some minimum length requirement.
- Use sets to create unique lists.
- Generate some given number of passwords by combining some given number of randomly selected words.
- Optionally encode text using a combination of algorithms we’ve previously written.
Writing password.py
Our program is called password.py and creates some --num number of passwords (default 3), each created by randomly choosing some --num_words (default 4) from a unique set of words from one or more input files (default /usr/share/dict/words). As it uses the random module, the program also accepts a random --seed argument. The words from the input files need to be a minimum length of some --min_word_len (default 4) after removing any non-characters.
As always, your first priority is to sort out the inputs to your program. Don't move ahead until your program can produce this usage with the -h or --help flags and can pass the first seven tests:
$ ./password.py -h
usage: password.py [-h] [-n num_passwords] [-w num_words] [-m mininum]
                   [-x maximumm] [-s seed] [-l]
                   FILE [FILE ...]

Password maker

positional arguments:
  FILE                  Input file(s)

optional arguments:
  -h, --help            show this help message and exit
  -n num_passwords, --num num_passwords
                        Number of passwords to generate (default: 3)
  -w num_words, --num_words num_words
                        Number of words to use for password (default: 4)
  -m mininum, --min_word_len mininum
                        Minimum word length (default: 3)
  -x maximumm, --max_word_len maximumm
                        Maximum word length (default: 6)
  -s seed, --seed seed  Random seed (default: None)
  -l, --l33t            Obfuscate letters (default: False)
The words from the input files are title-cased (first letter uppercase, the rest lowercased), which we can achieve using the str.title() method. This makes it easier to see and remember the individual words in the output. Note that we can vary the number of words included in each password as well as the number of passwords generated:
$ ./password.py --num 2 --num_words 3 --seed 9 sonnets/*
QueenThenceMasked
GullDeemdEven
The
--min_word_len argument helps to filter out shorter, less interesting words like “a,” “an,” and “the.” If you increase this value, then the passwords change quite drastically:
$ ./password.py -n 2 -w 3 -s 9 -m 10 -x 20 sonnets/*
PerspectiveSuccessionIntelligence
DistillationConscienceCountenance
The
--l33t flag is a nod to “leet”-speak where
31337 H4X0R means “ELITE HACKER”[1]. When this flag is present, we’ll encode each of the passwords, first by passing the word through the
ransom algorithm we wrote:
$ ./ransom.py MessengerRevolutionImportune
MesSENGeRReVolUtIonImpoRtune
Then we’ll use the following substitution table to substitute characters:
a => @
A => 4
O => 0
t => +
E => 3
I => 1
S => 5
To cap it off, we’ll use
random.choice to select one character from
string.punctuation to add to the end:
$ ./password.py --num 2 --num_words 3 --seed 9 --min_word_len 10 --max_word_len 20 sonnets/* --l33t
p3RsPeC+1Vesucces5i0niN+3lL1Genc3$
[email protected][email protected]^
Here’s the string diagram to summarize the inputs:
Creating a unique list of words
Let’s start off by making our program print the name of each input file:
def main():
    args = get_args()
    random.seed(args.seed)   ❶
    for fh in args.file:     ❷
        print(fh.name)       ❸
❶ Always set
random.seed right away as it globally affects all actions by the
random module.
❷ Iterate through the file arguments.
❸ Print the name of the file.
We can run it with the default:
$ ./password.py ../inputs/words.txt
../inputs/words.txt

Or with some of the other inputs:

$ ./password.py scarlet/*
scarlet/adjs.txt
scarlet/nouns.txt
scarlet/verbs.txt
Our first goal is to create a unique list of words we can use for sampling. The elements in a list don’t have to be unique, so a plain list won’t work here. The keys of a dictionary are unique, however, so a dict is one possibility:
def main():
    args = get_args()
    random.seed(args.seed)
    words = {}                                  ❶
    for fh in args.file:                        ❷
        for line in fh:                         ❸
            for word in line.lower().split():   ❹
                words[word] = 1                 ❺
    print(words)
❶ Create an empty
dict to hold the words.
❷ Iterate through the files.
❸ Iterate through the lines of the file.
❹ Lowercase the line and split it on spaces into words.
❺ Set the key
words[word] equal to
1 to indicate we saw it. We’re only using a
dict to get the unique keys. We don’t care about the values, and you could use whatever value you like.
If you run this on the US Constitution, you should see a fairly large list of words (some output elided here):
$ ./password.py ../inputs/const.txt
{'we': 1, 'the': 1, 'people': 1, 'of': 1, 'united': 1, 'states,': 1, ...}
I can spot one problem in that the word
'states,' has a comma attached to it. If we try in the REPL with the first bit of text from the Constitution, we can see the problem:
>>> 'We the People of the United States,'.lower().split()
['we', 'the', 'people', 'of', 'the', 'united', 'states,']
How can we get rid of punctuation?
Cleaning the text
We’ve seen several times that splitting on spaces leaves punctuation, but splitting on non-word characters can break contracted words like “Don’t” in two. I’d like to create a function that
cleans a word. First, I’ll imagine the test for it. Note that in this exercise, I’ll put all my unit tests into a file called
unit.py which I can run with
pytest -xv unit.py.
Here’s the test for our
clean function:
def test_clean():
    assert clean('') == ''                ❶
    assert clean("states,") == 'states'   ❷
    assert clean("Don't") == 'Dont'       ❸
❶ It’s always good to test your functions on nothing to make sure it does something sane.
❷ The function should remove punctuation at the end of a string.
❸ The function shouldn’t split a contracted word in two.
I would like to apply this to all the elements returned by splitting each line into words, and map is a fine way to do this. We often reach for a lambda when writing map, but notice that I don’t need one here: the clean function already expects a single argument, so I can pass it to map directly. See how it integrates with the code:
def main():
    args = get_args()
    random.seed(args.seed)
    words = {}
    for fh in args.file:
        for line in fh:
            for word in map(clean, line.lower().split()):   ❶
                words[word] = 1
    print(words)
❶ Use
map to apply the
clean function to the results of splitting the
line on spaces. No
lambda is required because
clean expects a single argument.
If I run this on the US Constitution again, I see that
'states' has been fixed:
$ ./password.py ../inputs/const.txt
{'we': 1, 'the': 1, 'people': 1, 'of': 1, 'united': 1, 'states': 1, ...}
I’ll leave it to you to write the
clean function which satisfies the test.
Using a
set
A better data structure than a
dict to use for our purposes is called a
set, and you can think of it like a unique
list or the keys of a
dict. Here’s how we could change our code to use a
set to keep track of unique words:
def main():
    args = get_args()
    random.seed(args.seed)
    words = set()                                           ❶
    for fh in args.file:
        for line in fh:
            for word in map(clean, line.lower().split()):
                words.add(word)                             ❷
    print(words)
❶ Use the
set function to create an empty set.
❷ Use
set.add to add a value to a set.
If you run this code now, you’ll see a slightly different output where Python shows you a data structure in curly brackets (
{}) that makes you think of a
dict but you’ll notice that the contents look more like a
list:
$ ./password.py ../inputs/const.txt
{'', 'impartial', 'imposed', 'jared', 'levying', ...}
We’re using sets here only for the fact that they easily allow us to keep a unique list of words, but sets are much more powerful than this. For instance, you can find the shared values between two lists by using the
set.intersection method:
>>> nums1 = set(range(1, 10))
>>> nums2 = set(range(5, 15))
>>> nums1.intersection(nums2)
{5, 6, 7, 8, 9}
You can read
help(set) in the REPL or the documentation online to learn about all the amazing things you can do with sets.
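Beyond intersection, sets also support union, difference, and symmetric difference, which are handy whenever you compare word lists. Here is a small sketch (the example values are my own, not from the exercise):

```python
# Set operations beyond intersection: union, difference, symmetric difference.
nums1 = set(range(1, 10))   # {1, 2, ..., 9}
nums2 = set(range(5, 15))   # {5, 6, ..., 14}

print(sorted(nums1 | nums2))   # union: elements in either set
print(sorted(nums1 - nums2))   # difference: in nums1 but not in nums2
print(sorted(nums1 ^ nums2))   # symmetric difference: in exactly one set
```

The operators `|`, `-`, and `^` are shorthand for the `set.union`, `set.difference`, and `set.symmetric_difference` methods.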
Filtering the words
If we look again at the output, we’ll see that the empty string is the first element:
$ ./password.py ../inputs/const.txt
{'', 'impartial', 'imposed', 'jared', 'levying', ...}
We need a way to filter out unwanted values like strings which are too short. In the “Rhymer” exercise, we looked at the
filter function which is a higher-order function that takes two arguments:
- A function that accepts one element and returns True if the element should be kept or False if the element should be excluded.
- Some “iterable” (like a list or a map) that produces a sequence of elements to be filtered.
In our case, we want to accept only words whose lengths fall between the --min_word_len and --max_word_len arguments. In the REPL, I can use a lambda to create an anonymous function which accepts a word and checks whether its length falls within those bounds, returning True or False. This has the effect of removing very short words like the empty string, “a,” and “an,” as well as overly long words. Remember that
filter is lazy, and I coerce it using the
list function in the REPL to see the output:
>>> shorter = ['', 'a', 'an', 'the', 'this']
>>> min_word_len = 3
>>> max_word_len = 6
>>> list(filter(lambda word: min_word_len <= len(word) <= max_word_len, shorter))
['the', 'this']

It will also remove longer words:

>>> longer = ['that', 'other', 'egalitarian', 'disequilibrium']
>>> list(filter(lambda word: min_word_len <= len(word) <= max_word_len, longer))
['that', 'other']
One way we could incorporate the filter() is to create a word_len() function that encapsulates the above lambda. Note that I defined it inside main() in order to create a closure, because I want it to capture the values of args.min_word_len and args.max_word_len:
def main():
    args = get_args()
    random.seed(args.seed)
    words = set()

    def word_len(word):  ❶
        return args.min_word_len <= len(word) <= args.max_word_len

    for fh in args.file:
        for line in fh:
            for word in filter(word_len, map(clean, line.lower().split())):  ❷
                words.add(word)
    print(words)
❶ This function will return True if the length of the given word is in the allowed range.
❷ We can use word_len (without the parentheses!) as the function argument to filter().
We can again try our program to see what it produces:
$ ./password.py ../inputs/const.txt
{'measures', 'richard', 'deprived', 'equal', ...}
Try it on multiple inputs such as all the nouns, adjectives, and verbs from The Scarlet Letter:
$ ./password.py scarlet/*
{'walk', 'lose', 'could', 'law', ...}
Titlecasing the words
We used the str.lower() method to lowercase all the input, but the passwords we generate need each word to be in “Title Case” where the first letter is uppercase and the rest of the word is lowercase. Can you figure out how to change the program to produce this output?
$ ./password.py scarlet/*
{'Dark', 'Sinful', 'Life', 'Native', ...}
Now we have a way to process any number of files to produce a unique list of title-cased words which have non-word characters removed and have been filtered to remove the ones which are too short.
Sampling and making a password
We’re going to use the random.sample() function to randomly choose some --num_words number of words from our set to create an unbreakable yet memorable password. We’ve talked before about the importance of using a random seed to test that our “random” selections are reproducible. It’s also quite important that the items from which we sample always be ordered in the same way so that the same selections are made. If we use the
sorted() function on a set, we get back a sorted list which is perfect for using with
random.sample(). I can add this line to the code from before:
words = sorted(words)
print(random.sample(words, args.num_words))
Now when I run it with The Scarlet Letter input, I will get a list of words that might make an interesting password:
$ ./password.py scarlet/*
['Lose', 'Figure', 'Heart', 'Bad']
The result of
random.sample() is a list that you can join on the empty string in order to make a new password:
>>> ''.join(random.sample(words, num_words))
'TokenBeholdMarketBegin'
You will need to create
args.num passwords. How will you do that?
l33t-ify
The last piece of our program is to create a
l33t function which obfuscates the password. The first step is to convert it with the same algorithm we wrote for
ransom.py. I’m going to create a
ransom function for this, and here’s the test which is in
unit.py. I’ll leave it to you to create the function that satisfies this test[2]:
def test_ransom():
    state = random.getstate()            ❶
    random.seed(1)                       ❷
    assert ransom('Money') == 'moNeY'
    assert ransom('Dollars') == 'DOLlaRs'
    random.setstate(state)               ❸
❶ Save the current global state.
❷ Set the
random.seed() to a known value for the test.
❸ Restore the state.
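Saving and restoring the global state is what keeps a seeded test from interfering with other tests. This small sketch (my own example, not from the book) shows that random.setstate puts the generator back exactly where it was:

```python
import random

random.seed(42)
random.random()              # consume one value to move past the seed

state = random.getstate()    # snapshot the module's global state
next_val = random.random()   # the value that would come next

random.setstate(state)       # rewind to the snapshot
replay = random.random()     # the same value is produced again

print(next_val == replay)    # True: the sequence was restored
```

Any code that ran between getstate and setstate leaves no trace, which is exactly what the test needs.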
Next, I substitute some of the characters according to the following table:
a => @
A => 4
O => 0
t => +
E => 3
I => 1
S => 5
I wrote a
l33t function which combines the
ransom with the substitution above and finally adds a punctuation character by appending
random.choice(string.punctuation). Here’s the
test_l33t function you can use to write your function:
def test_l33t():
    state = random.getstate()
    random.seed(1)
    assert l33t('Money') == 'moNeY{'
    assert l33t('Dollars') == 'D0ll4r5`'
    random.setstate(state)
Putting it all together
Without giving away the ending, I’d like to say that you need to be careful about the order of operations that include the
random module. My first implementation printed different passwords given the same seed when I used the
--l33t flag. Here was the output for plain passwords:
$ ./password.py -s 1 -w 2 sonnets/*
EagerCarcanet
LilyDial
WantTempest
I expected the exact same passwords only encoded. Here’s what my program produced instead:
$ ./password.py -s 1 -w 2 sonnets/* --l33t
[email protected]@[email protected]+{
m4dnes5iNcoN5+4n+|
MouTh45s15T4nCe^
The first password looks OK, but what are those other two? I modified my code to print both the original password and the l33ted one:
$ ./password.py -s 1 -w 2 sonnets/* --l33t
[email protected]@[email protected]+{ (EagerCarcanet)
m4dnes5iNcoN5+4n+| (MadnessInconstant)
MouTh45s15T4nCe^ (MouthAssistance)
The
random module uses a global state to make each of its “random” choices. In my first implementation, I modified this state after choosing the first password by immediately modifying the new password with the
l33t function. Because the
l33t function also uses
random functions, the state was altered for the next password. The solution was to first generate all the passwords and then to
l33t them, if necessary.
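The effect can be seen in miniature: an extra random call between two samples shifts the second sample, while batching the samples first keeps them stable. This is a small sketch with made-up words, not the program itself:

```python
import random

words = sorted(['Apple', 'Brave', 'Cedar', 'Delta', 'Ember', 'Frost'])

def passwords_interleaved():
    """Buggy shape: an extra random call lands between the two samples."""
    random.seed(1)
    first = random.sample(words, 2)
    random.choice('!@#$%')            # mutates the random module's global state
    second = random.sample(words, 2)
    return first, second

def passwords_batched():
    """Fixed shape: draw every sample first, then do the extra random work."""
    random.seed(1)
    first = random.sample(words, 2)
    second = random.sample(words, 2)
    random.choice('!@#$%')
    return first, second

# The first samples always agree because nothing has diverged yet; the second
# samples will usually differ because the interleaved call advanced the state.
print(passwords_interleaved())
print(passwords_batched())
```

Moving the extra random work after all the sampling is exactly the fix used in the solution.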
Those are all the pieces you should need to write your program. You have the unit tests to help you verify the functions, and you have the integration tests to ensure your program works as a whole. This is the last program; give it your best shot before looking at the solution!
Solution
#!/usr/bin/env python3
"""Password maker"""

import argparse
import random
import re
import string


# --------------------------------------------------
def get_args():
    """Get command-line arguments"""

    parser = argparse.ArgumentParser(
        description='Password maker',
        formatter_class=argparse.ArgumentDefaultsHelpFormatter)

    parser.add_argument('file',
                        metavar='FILE',
                        type=argparse.FileType('r'),
                        nargs='+',
                        help='Input file(s)')

    parser.add_argument('-n',
                        '--num',
                        metavar='num_passwords',
                        type=int,
                        default=3,
                        help='Number of passwords to generate')

    parser.add_argument('-w',
                        '--num_words',
                        metavar='num_words',
                        type=int,
                        default=4,
                        help='Number of words to use for password')

    parser.add_argument('-m',
                        '--min_word_len',
                        metavar='mininum',
                        type=int,
                        default=3,
                        help='Minimum word length')

    parser.add_argument('-x',
                        '--max_word_len',
                        metavar='maximumm',
                        type=int,
                        default=6,
                        help='Maximum word length')

    parser.add_argument('-s',
                        '--seed',
                        metavar='seed',
                        type=int,
                        help='Random seed')

    parser.add_argument('-l',
                        '--l33t',
                        action='store_true',
                        help='Obfuscate letters')

    return parser.parse_args()


# --------------------------------------------------
def main():
    args = get_args()
    random.seed(args.seed)                                  ❶
    words = set()                                           ❷

    def word_len(word):                                     ❸
        return args.min_word_len <= len(word) <= args.max_word_len

    for fh in args.file:                                    ❹
        for line in fh:                                     ❺
            for word in filter(word_len, map(clean, line.lower().split())):  ❻
                words.add(word.title())                     ❼

    words = sorted(words)                                   ❽
    passwords = [                                           ❾
        ''.join(random.sample(words, args.num_words)) for _ in range(args.num)
    ]

    if args.l33t:                                           ❿
        passwords = map(l33t, passwords)                    ⓫

    print('\n'.join(passwords))                             ⓬


# --------------------------------------------------
def clean(word):                                            ⓭
    """Remove non-word characters from word"""

    return re.sub('[^a-zA-Z]', '', word)                    ⓮


# --------------------------------------------------
def l33t(text):                                             ⓯
    """l33t"""

    text = ransom(text)                                     ⓰
    xform = str.maketrans({                                 ⓱
        'a': '@', 'A': '4', 'O': '0', 't': '+',
        'E': '3', 'I': '1', 'S': '5'
    })
    return text.translate(xform) + random.choice(string.punctuation)  ⓲


# --------------------------------------------------
def ransom(text):                                           ⓳
    """Randomly choose an upper or lowercase letter to return"""

    return ''.join(                                         ⓴
        map(lambda c: c.upper() if random.choice([0, 1]) else c.lower(), text))


# --------------------------------------------------
if __name__ == '__main__':
    main()
❶ Set the
random.seed to the given value or the default
None which is the same as not setting the seed.
❷ Create an empty set to hold all the unique words we’ll extract from the texts.

❸ This closure returns True if the length of the given word falls within the allowed range.

❹ Iterate through each open file handle.

❺ Iterate through each line of text in the file handle.

❻ Iterate through each word generated by splitting the line on spaces, removing non-word characters with the clean function, and filtering for words whose lengths fall within the given bounds.

❼ Titlecase the word before adding it to the set.

❽ Use the sorted function to order words into a new list.

❾ Use a list comprehension with a range to create the correct number of passwords, joining each random sampling of words on the empty string. Because I don’t need the value from range, I can use the _ to ignore it.

❿ Check whether the --l33t flag is present.

⓫ Now that all the passwords have been created, it’s safe to call the l33t function if required. If we had used it in the above loop, it would’ve altered the global state of the random module and we’d have gotten different passwords.

⓬ Print the passwords joined on newlines.
⓭ Define a function to “clean” a word.
⓮ Use a regular expression to substitute the empty string for anything which isn’t an English alphabet character.
⓯ Define a function to
l33t a word.
⓰ First use the
ransom function to randomly capitalize letters.
⓱ Make a translation table/
dict for character substitutions.
⓲ Use the
str.translate function to perform the substitutions and append a random piece of punctuation.
⓳ Define a function for the
ransom algorithm.
⓴ Return a new string created by randomly upper- or lowercasing each letter in a word.
Discussion
Well, that was it. The last exercise! I hope you found it challenging and fun. Let’s break it down a bit. Nothing new was in
get_args; let’s start with the auxiliary functions:
Cleaning the text
I chose to use a regular expression to remove any characters that are outside the set of lowercase and uppercase English characters:
def clean(word):
    """Remove non-word characters from word"""

    return re.sub('[^a-zA-Z]', '', word)  ❶
❶ The
re.sub function substitutes any text matching the pattern (the first argument) found in the given text (the third argument) with the value given by the second argument.
Recall from the “Gematria” exercise that we can write the character class
[a-zA-Z] to define the characters in the ASCII table bounded by those two ranges. We can then negate or complement that class by placing a caret
^ as the first character inside the class, so [^a-zA-Z] can be read as “any character not matching a to z or A to Z.”
It’s perhaps easier to see it in action in the REPL. In this example, only the letters “AbCd” are left from the text “A1b*C!d4”:
>>> import re
>>> re.sub('[^a-zA-Z]', '', 'A1b*C!d4')
'AbCd'
If the only goal were to match ASCII letters, it’s possible to solve it by looking for membership in
string.ascii_letters:
>>> import string
>>> text = 'A1b*C!d4'
>>> [c for c in text if c in string.ascii_letters]
['A', 'b', 'C', 'd']
It honestly seems like more effort to me. Besides, if the function needed to be changed to allow, say, numbers and a few specific pieces of punctuation, then the regular expression version becomes significantly easier to write and maintain.
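For example, if we later decided to keep digits, hyphens, and apostrophes as well, only the character class would have to grow. This extended pattern and the helper name clean_extended are my own variation, not part of the exercise:

```python
import re

def clean_extended(word):
    """Remove anything that is not a letter, digit, hyphen, or apostrophe."""
    # The hyphen sits last in the class so it is taken literally.
    return re.sub(r"[^a-zA-Z0-9'-]", '', word)

print(clean_extended("states,"))   # 'states'
print(clean_extended("Don't"))     # "Don't"
print(clean_extended("B-52s!"))    # 'B-52s'
```

The membership-test version would need a second string of allowed characters and a join, so the regular expression stays the more compact choice.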
A king’s ransom
The
ransom function was taken straight from the
ransom.py program, and there isn’t too much to say about it except, hey, look how far we’ve come! What was an entire idea for an article is now a single line in a much longer and more complicated program:
def ransom(text):
    """Randomly choose an upper or lowercase letter to return"""

    return ''.join(  ❶
        map(lambda c: c.upper() if random.choice([0, 1]) else c.lower(), text))  ❷
❶ Use
map to iterate through each character in the
text and select either the upper- or lowercase version of the character based on a “coin” toss using
random.choice to select between a “truthy” value (
1) or a “falsey” value (
0).
❷ Join the resulting
list from the
map on the empty string to create a new
str.
How to
l33t
The
l33t function builds on the
ransom and then adds a text substitution. I like the
str.translate version of that program, and I used it again here:
def l33t(text):
    """l33t"""

    text = ransom(text)      ❶
    xform = str.maketrans({  ❷
        'a': '@', 'A': '4', 'O': '0', 't': '+',
        'E': '3', 'I': '1', 'S': '5'
    })
    return text.translate(xform) + random.choice(string.punctuation)  ❸
❶ First randomly capitalize the given
text.
❷ Make a translation table from the given
dict which describes how to modify one character to another. Any characters not listed in the keys of this
dict are ignored.
❸ Use the
str.translate method to make all the character substitutions. Use
random.choice to select one additional character from
string.punctuation to append to the end.
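The translation step by itself is deterministic, so it is easy to check on its own, apart from the random capitalization and the random punctuation. Here I apply only the table, skipping the ransom step:

```python
# Apply only the substitution table, without the random ransom step.
xform = str.maketrans({
    'a': '@', 'A': '4', 'O': '0', 't': '+',
    'E': '3', 'I': '1', 'S': '5'
})

print('MadnessInconstant'.translate(xform))  # M@dness1ncons+@n+
```

Characters absent from the table, like the lowercase “s” and “o” here, pass through unchanged.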
Processing the files
Now to apply these functions to the processing of the text, we need to create a unique set of all the words in our input files. I wrote this bit of code with an eye both on performance and on style:
words = set()
for fh in args.file:                                                 ❶
    for line in fh:                                                  ❷
        for word in filter(word_len, map(clean, line.lower().split())):  ❸
            words.add(word.title())                                  ❹
❶ Iterate through each open file handle.
❷ Read the file handle line-by-line with a for loop, not with a method like fh.read() which will read the entire contents of the file at once.
❸ Reading this code actually requires starting at the end where we split the line.lower() on spaces. Each word from str.split() goes into clean() which then must pass through the filter() function.
❹ Titlecase the word before adding it to the set.
Here’s a breakdown of that for line:
- line.lower() will return a lowercase version of line.
- The str.split() method will break the text on whitespace to return words.
- Each word is fed into the clean() function to remove any character that is not in the English alphabet.
- The cleaned words are filtered by the word_len() function.
- The resulting word has been transformed, cleaned, and filtered.
If you don’t like the
map and
filter functions, rewrite the code in a more traditional way:
words = set()
for fh in args.file:                         ❶
    for line in fh:                          ❷
        for word in line.lower().split():    ❸
            word = clean(word)               ❹
            if args.min_word_len <= len(word) <= args.max_word_len:  ❺
                words.add(word.title())      ❻
❶ Iterate through each open file handle.
❷ Iterate through each line of the file handle.
❸ Iterate through each “word” from splitting the lowercased line on spaces.
❹ Clean the word up.
❺ If the word is long enough,
❻ Then add the titlecased word to the set.
Whichever way you choose to process the files, at this point you should have a complete
set of all the unique, titlecased words from the input files.
Sampling and creating the passwords
As noted above, it’s vital to sort the
words for our tests to verify that we’re making consistent choices. If you only wanted random choices and didn’t care about testing, you don’t need to worry about sorting – but then you’d also be a morally deficient person for not testing – perish the thought! I chose to use the
sorted function as there’s no other way to sort a
set:
words = sorted(words) ❶
❶ Because sets are unordered, there is no set.sort method. Calling sorted on a set creates a new, sorted list.
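A quick REPL-style check makes the point: a set has no sort method of its own, and sorted always hands back a list.

```python
words = {'Token', 'Behold', 'Market', 'Begin'}

# sorted() accepts any iterable, including a set, and returns a new list.
print(sorted(words))           # ['Begin', 'Behold', 'Market', 'Token']
print(hasattr(words, 'sort'))  # False: only lists have an in-place .sort()
```

Since sorted returns a list, reassigning `words = sorted(words)` also gives us an indexable sequence that random.sample can draw from reproducibly.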
We need to create some given number of passwords, and I thought it might be easiest to use a
for loop with a
range. In my code, I used
for _ in range(…) because I don’t need to know the value each time through the loop. The
_ is a way to indicate that you’re ignoring the value. It’s fine to say
for i in range(…) if you want, but some linters might complain if they see that your code declares the variable
i but never uses it. That could legitimately be a bug, and it’s best to use the
_ to show that you mean to ignore this value.
Here’s the first way I wrote the code that led to the bug I mentioned in the discussion where different passwords are chosen even when I use the same random seed. Can you spot the bug?
for _ in range(args.num):                                      ❶
    password = ''.join(random.sample(words, args.num_words))   ❷
    print(l33t(password) if args.l33t else password)           ❸
❶ Iterate through the
args.num passwords to create.
❷ Each password is based on a random sampling from our
words, and we choose the value given in
args.num_words. The
random.sample function returns a
list of words that we
join on the empty string to create a new string.
❸ If the
args.l33t flag is
True, then we’ll print the l33t version of the password; otherwise, we’ll print the password as-is. This is the bug! Calling
l33t here modifies the global state used by the
random module, and the next time we call
random.sample we get a different sample.
The solution is to separate the concerns of generating the passwords from possibly modifying them:
passwords = [  ❶
    ''.join(random.sample(words, args.num_words)) for _ in range(args.num)
]

if args.l33t:  ❷
    passwords = map(l33t, passwords)

print('\n'.join(passwords))  ❸
❶ Use a list comprehension to iterate through range(args.num) and generate the correct number of passwords.

❷ If the args.l33t flag is True, then use the l33t() function to modify the passwords.
❸ Print the passwords joined on newlines.
I’ll leave you with the following thought:
Any code of your own that you haven’t looked at for six or more months might as well have been written by someone else. – Eagleson’s Law
Review
This exercise kind of has it all. Validating user input, reading files, using a new data structure in the
set, higher-order functions with
map and
filter, random values, and lots of functions and tests! I hope you enjoyed programming it, and maybe you’ll even use the program to generate your new passwords. Be sure to share those passwords with your author, like the ones to your bank account and favorite shopping sites!
Going Further
- The substitution part of the
l33tfunction changes every available character which perhaps makes the password too difficult to remember. It would be better to modify only maybe 10% of the password.
- Create programs that combine other skills you’ve learned. Like maybe a lyrics generator that randomly selects lines from files of songs by your favorite bands, then encodes the text with the “Kentucky Friar,” then changes all the vowels to one vowel with “Apples and Bananas,” and then SHOUTS IT OUT with “The Howler”?
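The first idea above could be attempted by giving each character only some probability of being substituted. This is my own hypothetical sketch, not the book's solution; the 10% figure and the helper name partial_l33t are assumptions:

```python
import random
import string

# The same substitution table the exercise uses.
SUBS = {'a': '@', 'A': '4', 'O': '0', 't': '+',
        'E': '3', 'I': '1', 'S': '5'}

def partial_l33t(text, prob=0.1):
    """Substitute each eligible character with probability `prob`."""
    out = []
    for char in text:
        # Only roll the dice for characters that have a substitution.
        if char in SUBS and random.random() < prob:
            out.append(SUBS[char])
        else:
            out.append(char)
    return ''.join(out) + random.choice(string.punctuation)
```

With prob=0.0 no letters change, and with prob=1.0 it behaves like the full substitution, so you can tune how memorable the result stays.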
Congratulations, you are now 733+ HAX0R!
That’s all for this article. If you want to see more, you can preview the book’s contents on our browser-based liveBook reader here.
[1] See the Wiki page or the Cryptii translator