text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
You can subscribe to this list here.
Showing
2
results of 2
Hello all,
I am a newbe with Jython for Win98. Why is the doskey functionality lost in
the Jython console? I have installed doskey, but it doesn't work in this
console. If enter
import os; os.system("c:\\Windows\\Command\\doskey.com") then I get the
message that doskey is already installed. It just doesn't work with the
arrow keys. Is there, in general, a list of commands for this console
available?
Thanks in advance
Johannes
On Friday, February 21, 2003, at 03:05 PM, "Vijay Anand R."
<Vijayanandr@...> wrote:
> I want to know how to program the sockets using jython. can anyone
> me. i am new to jython. a sample prg would be better for me
Try this:
Mario Diana | http://sourceforge.net/p/jython/mailman/jython-users/?viewmonth=200302&viewday=22&style=flat | CC-MAIN-2015-18 | refinedweb | 155 | 77.84 |
Asked by:
Handlers for Image
Question
- User-534649585 posted
I have a web form with a table that i populate with images.
Following is the code for retrieving the images from the database.
I have this problem
The page loads for example with 6 items and then each items fires
a handler for the image but everytimes i have less images than i expect.
And the images are in the database
I am debugging locally with the ASP webserver
My question is
Is possible that the DataBase server cannot handle many requests?
Or the Webserver cannot handle the requests?
If i try to debug the handler the code is reentrant and seems many
requests are concurrent.
This happens only on some machine and never happened on the production one.
Could you help me?
Regards
<%@ WebHandler Language="C#" class="ShowThumbnails" %>
using System; using System.Web; using System.IO;
using System.Data; using MySql.Data.MySqlClient;
using System.Drawing.Imaging;
public class ShowThumbnails : IHttpHandler {
public void ProcessRequest(HttpContext context) {
Int32 picid;
Int32 width;
Int32 height;
if (context.Request.QueryString["id"] != null)
{ picid = Convert.ToInt32(context.Request.QueryString["id"]);
width=Convert.ToInt32(context.Request.QueryString["width"]);
height = Convert.ToInt32(context.Request.QueryString["height"]);
} else
return;
context.Response.ContentType = "image/jpeg";
Stream strm = ShowThumbnail(picid);
System.Drawing.Image image = System.Drawing.Image.FromStream(strm);
System.Drawing.Image thumbnailImage = image.GetThumbnailImage(width, height,new System.Drawing.Image.GetThumbnailImageAbort(target), IntPtr.Zero);
Stream imageStream = new MemoryStream();
thumbnailImage.Save(imageStream, System.Drawing.Imaging.ImageFormat.Jpeg);
byte[] buffer = new byte[4096];
imageStream.Position = 0;
int byteSeq = ((Stream)imageStream).Read(buffer, 0, 4096);
while (byteSeq > 0)
{ context.Response.OutputStream.Write(buffer, 0, byteSeq);
byteSeq = imageStream.Read(buffer, 0, 4096); }
}
private bool target() {
return false; }
public Stream ShowThumbnail(int picid)
{ DataDB db = new DataDB();
db.Connect(true);
MySqlConnection connection = db.Connection;
string sql = "SELECT image FROM items WHERE id =" + picid.ToString();// @ID";
MySqlCommand cmd = new MySqlCommand(sql, connection);
cmd.CommandType = CommandType.Text;
object img = cmd.ExecuteScalar();
try {
return new MemoryStream((byte[])img);
}
catch {
return null;
}
finally {
connection.Close();
}
}
public bool IsReusable { get { return false; } }
}Monday, February 18, 2013 1:33 AM
All replies
- User-760709272 posted
Does it work when you're not debugging it?Monday, February 18, 2013 4:28 AM
- User-534649585 posted
Debugging on a different phisical machine it works.
When publish on a production site it works.
Debugging on Windows 7 home edition it does not work
Debugging on virtual machine XP hosted on windows 7 it does not work
I moved the site from an old machine where i developed the site on a
new machine and start to see this problem....
RegardsMonday, February 18, 2013 5:02 AM
- User-760709272 posted
I wouldn't worry about it. When debugging the same code in multiple threads it can get awkward as your code "jumps around" as the different threads are in different sections, and you have to remember that you're tying up all the threads so what is probably happening is that some requests are just timing out which is why some of your images are missing when you debug, but not when you run naturally. If you want to test your handler then use it on a page that only has one image so you can step through it better.Monday, February 18, 2013 5:27 AM
- User-534649585 posted
Please, is there a way not to timeout the handler?
What is supposed to time out, the database connection or the webserver?
If image is empty, means that ExecuteScalar() returns null. So
is it possible the problem is coming from the Database?
I could simply change the timeout of the connection to easily debug.
RegardsMonday, February 18, 2013 7:39 AM
- User-760709272 posted
It could be the browser saying that it simply isn't going to wait any longer. Modern browsers have good development tools if you press F12 and you can see the network activity which might help decide if it is timing out. Or you can use something like Fiddler2 which is independent of your browser.
If you view a page with 6 images, and you have a breakpoint in your handler code then the browser will execute 6 requests (depending on the browser and its settings, it might request less) and each of those requests is blocked until you have stepped through them. So the first few requests might get done in time, but by the time you have stepped through 4 or 5 processes, the 6th process might have taken so long to execute that either the browser gave up, or IIS terminated the thread to protect its own resources.
As long as it doesn't happen when you're not debugging, I don't see that it's a problem.Monday, February 18, 2013 7:51 AM
- User-534649585 posted
Probably you are correct, it is something related to timeout.
But now i have some other problems, not only images are empty
also the query on the records return empty.
Need to close Visual Studio and reopen.
Then i see the records. I think it is something connected to the database...
RegardsTuesday, February 19, 2013 12:43 AM
- User-534649585 posted
With MySql 5.1.44 works well.Tuesday, February 19, 2013 7:50 AM | https://social.msdn.microsoft.com/Forums/en-US/c6e92276-f49b-4b54-9566-417b109e3343/handlers-for-image?forum=asphttp | CC-MAIN-2022-33 | refinedweb | 878 | 57.98 |
Python alternatives for PHP functions
import os
os.setgid(gid)
(PHP 4, PHP 5)
posix_setgid — Set the GID of the current process
Set the real group ID of the current process. This is a
privileged function and needs appropriate privileges (usually
root) on the system to be able to perform this function. The
appropriate order of function calls is
posix_setgid() first,
posix_setuid() last.
Note:
If the caller is a super user, this will also set the effective
group id.
The group id.
Returns TRUE on success or FALSE on failure.
Example #1 posix_setgid() example
This example will print out the effective group id, once it is changed.
<?phpecho 'My real group id is '.posix_getgid(); //20posix_setgid(40);echo 'My real group id is '.posix_getgid(); //40echo 'My effective group id is '.posix_getegid(); //40?> | http://www.php2python.com/wiki/function.posix-setgid/ | CC-MAIN-2017-51 | refinedweb | 131 | 68.26 |
Hello,
I'm just starting with C++, and I'm making a couple really simple programs. I have two questions about the one I'm on right now. They are:
1. Is there a function that will stop the script from running like exit(); or something? For example, if I only wanted the script to run if a user entered a number between 0 and 6 and they entered 20, I'd want the script to return a message saying "That was an invalid number. Please try again!" and stop from going on. How would I do this?
2. How do you get the actual value of something in an enum and print it? I made an enum called Days, and it has the name of the days from Sunday to Monday. Then the program asks the user to put in a number. How can I match that number with the value of the thing in the enum and print out the name of the day? Lol, for example:
Now how would I print out the day that the number they put in represents in the enum?Now how would I print out the day that the number they put in represents in the enum?Code:
#include <iostream>
int main() {
enum Days {Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday};
int x, DayOff;
cout <<"What day would you like off (0-6)? ";
cin >> x;
DayOff = Days(x);
}
Thanks,
CougarElite | http://cboard.cprogramming.com/cplusplus-programming/63311-two-easy-questions-printable-thread.html | CC-MAIN-2015-27 | refinedweb | 238 | 82.24 |
This might be a really beginer's question but I've been reading about this and I'm finding it hard to understand.
This is a sample from the msdn page about this subject (just a little smaller).
using System; class SetByteDemo { // Display the array contents in hexadecimal. public static void DisplayArray(Array arr, string name) { // Get the array element width; format the formatting string. int elemWidth = Buffer.ByteLength(arr) / arr.Length; string format = String.Format(" {{0:X{0}}}", 2 * elemWidth); // Display the array elements from right to left. Console.Write("{0,7}:", name); for (int loopX = arr.Length - 1; loopX >= 0; loopX--) Console.Write(format, arr.GetValue(loopX)); Console.WriteLine(); } public static void Main() { // These are the arrays to be modified with SetByte. short[] shorts = new short[2]; Console.WriteLine("Initial values of arrays:\n"); // Display the initial values of the arrays. DisplayArray(shorts, "shorts"); // Copy two regions of source array to destination array, // and two overlapped copies from source to source. Console.WriteLine("\n" + " Array values after setting byte 1 = 1 and byte 3 = 200\n"); Buffer.SetByte(shorts, 1, 1); Buffer.SetByte(shorts, 3, 10); // Display the arrays again. DisplayArray(shorts, "shorts"); Console.ReadKey(); } }
SetByte should be easy to understand, but if I print the shorts array before doing the
SetByte operation the array looks like this
{short[2]} [0]: 0 [1]: 0
After doing the first
Buffer.SetByte(shorts, 1, 1); the array becomes
{short[2]} [0]:
The .NET types use little endianness. That means that the first byte (0th, actually) of a
short,
int, etc. contains the least significant bits.
After setting the array it seems like this as
byte[]:
0, 1, 0, 10
As
short[] it is interpreted like this:
0 + 1*256 = 256, 0 + 10*256 = 2560
The Buffer class allows you to manipulate memory as if you were using a void pointer in c, it's like a sum of memcpy, memset, and so on to manipulate in a fast way memory on .net .
When you passed the "shorts" array, the Buffer class "sees" it as a pointer to four consecutive bytes (two shorts, each of them two bytes) :
|[0][1]|[2][3]| short short
So the uninitialized array looks like this:
|[0][0]|[0][0]| short short
When you do
Buffer.SetByte(shorts, 1, 1); you instruct the Buffer class to change the second byte on the byte array, so it will be:
|[0][1]|[0][0]| short short
If you convert the two bytes (0x00, 0x01) to a short it is 0x0100 (note as these are the two bytes one after other, but in reverse order, that's because the C# compiler uses little endianness), or 256
The second line basically does the same
Buffer.SetByte(shorts, 3, 10);changes third byte to 10:
|[0][1]|[0][10]| short short
And then 0x00,0x0A as a short is 0x0A00 or 2560.
i think the part that people might struggle with is that the
Buffer.SetByte() method is basically iterating over the array differently than a regular assignment with the array indexer [], which would separate the array according to the width of the containing type(shorts/doubles/etc.) instead of bytes... to use your example:
the short array is usually seen as
arr = [xxxx, yyyy](in base 16)
but the SetByte method "sees" it as:
arr = [xx, yy, zz, ww]
so a call like
Buffer.SetByte(arr, 1, 5) would address the second byte in the arry, which is still inside the first short. setting the value there and that's it.
the result should look like:
[05 00, 00 00] in hex or [1280,0]. | http://www.devsplanet.com/question/35266737 | CC-MAIN-2017-09 | refinedweb | 605 | 71.85 |
for example i have a public class Title with publicfunction destroy(); in it
than in my main class i create 1 sprite and some Titles than i sprite.addChild(title) for every title i have created which works ok.
i have tried invoking destroy method manually and it also works fine but when i try the following code flash builder reports error
error: 1061: Call to a possibly undefined method destroy through a reference with static type flash.displayerror: 1061: Call to a possibly undefined method destroy through a reference with static type flash.displayCode:while(sprite.numChildren > 0){ sprite.removeChildAt(0); sprite.getChildAt(0).destroy(); }
isplayObject.isplayObject.
I think the problem might be builder mybe can't know that every child of sprite will be able to provide destroy(); method ( or in other words it shall be instance of class Title) ... but how to tell him that it's ok without exiting strict mode? | http://www.kirupa.com/forum/showthread.php?356450-getChildAt-(strict-normal)-problem | CC-MAIN-2014-15 | refinedweb | 155 | 59.64 |
Question
The cube, (), can be permuted to produce two other cubes: () and (). In fact, is the smallest cube which has exactly three permutations of its digits which are also cube.
Find the smallest cube for which exactly five permutations of its digits are cube.
Haskell
import Data.List (sort) import qualified Data.Map as Map cubes :: Map.Map String [Integer] cubes = Map.fromListWith (++) [(sort (show cube), [cube]) | x <- [1..10000], let cube = x^3] main :: IO () main = print $ minimum [minimum ns | (_, ns) <- Map.toList cubes, length ns == 5]
$ ghc -O2 -o cubic-permutations cubic-permutations.hs $ time ./cubic-permutations real 0m0.043s user 0m0.040s sys 0m0.000s
Python
#!/usr/bin/env python from collections import defaultdict def cube(x): return x**3 def main(): cubes = defaultdict(list) for i in range(10000): c = cube(i) digits = ''.join(sorted([d for d in str(c)])) cubes[digits].append(c) print(min([min(v) for k, v in list(cubes.items()) if len(v) == 5])) if __name__ == "__main__": main()
$ time python3 cube-permutations.py real 0m0.048s user 0m0.044s sys 0m0.000s | https://zach.se/project-euler-solutions/62/ | CC-MAIN-2018-51 | refinedweb | 183 | 72.53 |
mkfifo - make a FIFO special file (a named pipe)
Synopsis
Description
Errors
Colophon
#include <sys/types.h> #include <sys/stat.h>
int mkfifo(const char *pathname, mode_t mode);
mkfifo() makes a FIFO special file with name pathname. mode specifies the FIFOs permissions. It is modified by the processs.
On success mkfifo() returns 0. In the case of an error, -1 is returned (in which case, errno is set appropriately).
POSIX.1-2001.
mkfifo(1), close(2), open(2), read(2), stat(2), umask(2), write(2), mkfifoat(3), fifo(7)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/mkfifo.3.php | CC-MAIN-2018-26 | refinedweb | 118 | 61.33 |
Setup Error Logging in Serverless
Now that we have our React app configured to report errors, let’s move on to our Serverless backend. Our React app is reporting API errors (and other unexpected errors) with the API endpoint that caused the error. We want to use that info to be able to debug on the backend and figure out what’s going on.
To do this, we’ll setup the error logging in our backend to catch:
- Errors in our code
- Errors while calling AWS services
- Unexpected errors like Lambda functions timing out or running out of memory
We are going to look at how to setup a debugging framework to catch the above errors, and have enough context for us to easily pinpoint and fix the issue. We’ll be using CloudWatch to write our logs, and we’ll be using the log viewer in Seed to view them.
Setup a Debug Lib
Let’s start by adding some code to help us with that.
Create a
libs/debug-lib.js file and add the following to it.
import AWS from "aws-sdk"; import util from "util"; // Log AWS SDK calls AWS.config.logger = { log: debug }; let logs; let timeoutTimer; export function init(event, context) { logs = []; // Log API event debug("API event", { body: event.body, pathParameters: event.pathParameters, queryStringParameters: event.queryStringParameters, }); // Start timeout timer timeoutTimer = setTimeout(() => { timeoutTimer && flush(new Error("Lambda will timeout in 100 ms")); }, context.getRemainingTimeInMillis() - 100); } export function end() { // Clear timeout timer clearTimeout(timeoutTimer); timeoutTimer = null; } export function flush(e) { logs.forEach(({ date, string }) => console.debug(date, string)); console.error(e); } export default function debug() { logs.push({ date: new Date(), string: util.format.apply(null, arguments), }); }
We are doing a few things of note in this simple helper.
Enable AWS SDK logging
We start by enabling logging for the AWS SDK. We do so by running
AWS.config.logger = { log: debug }. This is telling the AWS SDK to log using our logger, the
debug()method (we’ll look at this below). So when you make a call to an AWS service, ie. a query call to the DynamoDB table
dev-notes, this will log:
[AWS dynamodb 200 0.296s 0 retries] query({ TableName: 'dev-notes', KeyConditionExpression: 'userId = :userId', ExpressionAttributeValues: { ':userId': { S: 'USER-SUB-1234' } } })
Note, we only want to log this info when there is an error. We’ll look at how we accomplish this below.
Log API request info
We initialize our debugger by calling
init(). We log the API request info, including the path parameters, query string parameters, and request body. We do so using our internal
debug()method.
Log Lambda timeouts
If your code takes long to run and it reaches the timeout value for the Lambda function, the function will timeout. By default, this value is set to 6s. When this happens, we won’t get a chance to handle it in our debugger. To get around this, we can find out how much time there is left in the current execution by calling
context.getRemainingTimeInMillis(). This is an internal Lambda function. We then create a timer that will automatically print our log message 100ms before the Lambda times out.
Note, there could be false positives where the Lambda finishes executing within the last 100ms of the execution time. But that should be a very rare event.
Finally, we cancel this timer in the case where the Lambda function completed execution within the timeout.
Log only on error
We log messages using our special
debug()method. Debug messages logged using this method only get printed out when we call the
flush()method. This allows us to log very detailed contextual information about what was being done leading up to the error. We can log:
- Arguments and return values for function calls.
- And, request/response data for HTTP requests made.
We only want to print out debug messages to the console when we run into an error. This helps us reduce clutter in the case of successful requests. And, keeps our CloudWatch costs low!
To do this, we store the log info (when calling
debug()) in memory inside the
logsarray. And when we call
flush()(in the case of an error), we
console.debug()all those stored log messages.
So in our Lambda function code, if we want to log some debug information that only gets printed out if we have an error, we’ll do the following:
import { log } from "../libs/debug-lib"; log('This stores the message and prints to CloudWatch if Lambda function later throws an exception');
In contrast, if we always want to log to CloudWatch, we’ll:
console.log('This prints a message in CloudWatch prefixed with INFO'); console.warn('This prints a message in CloudWatch prefixed with WARN'); console.error('This prints a message in CloudWatch prefixed with ERROR');
Now let’s use the debug library in our Lambda functions.
Setup Handler Lib
You’ll recall that all our Lambda functions are wrapped using a
handler() method. We use this to format what our Lambda functions return as their HTTP response. It also, handles any errors that our Lambda functions throws.
We’ll use the debug lib that we added above to improve our error handling.
Replace our
handler-lib.js with the following.
import * as debug from "./debug-lib"; export default function handler(lambda) { return function (event, context) { return Promise.resolve() // Start debugger .then(() => debug.init(event, context)) // Run the Lambda .then(() => lambda(event, context)) // On success .then((responseBody) => [200, responseBody]) // On failure .catch((e) => { // Print debug messages debug.flush(e); return [500, { error: e.message }]; }) // Return HTTP response .then(([statusCode, body]) => ({ statusCode, headers: { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Credentials": true, }, body: JSON.stringify(body), })) // Cleanup debugger .finally(debug.end); }; }
This should be fairly straightforward:
- We initialize our debugger by calling
debug.init().
- We run our Lambda function.
- We format the success response.
- In the case of an error, we first write out our debug logs by calling
debug.flush(e). Where
eis the error that caused our Lambda function to fail.
- We format our HTTP response.
- And finally, we clean up our debugger by calling
debug.end().
Using the Error Handler
You might recall the way we are currently using the above error handler in our Lambda functions.
import handler from "./libs/handler-lib"; export const main = handler((event, context) => { // Do some work const a = 1 + 1; // Return a result return { result: a }; });
We wrap all of our Lambda functions using the error handler.
Note that, the
handler-lib.js needs to be imported before we import anything else. This is because the
debug-lib.js that it imports needs to initialize AWS SDK logging before it’s used anywhere else.
Commit the Code
Let’s push our changes
Let’s commit the code we have so far.
$ git add . $ git commit -m "Adding error logging" $ git push
And promote the changes to production.
Head over to the Seed console and hit Promote to prod once your changes are deployed to to dev.
Enable Access Logs
The combination of our new error handler and Lambda logs will help us catch most of the errors. However, we can run into errors that don’t make it to our Lambda functions. To debug these errors we’ll need to look at the API Gateway logs. So let’s go head and enable access logs for our API.
From the dashboard for your app on Seed, select the prod stage.
Then hit Enable Access Logs for your API.
And that’s pretty much it! With these simple steps, we are now ready to look at some examples of how to debug our Serverless app.
For reference, here is the complete code for the backend.Backend Source
For help and discussionComments on this chapter | https://serverless-stack.com/chapters/setup-error-logging-in-serverless.html | CC-MAIN-2020-24 | refinedweb | 1,298 | 67.76 |
How old are you? Simple question, yes? Perhaps. But the answer could change depending on the granularity of the question, and on your perspective.
To answer "how many minutes old are you?", we need a lot more information than might be readily available:
- Your date and exact time of birth
- The time zone (or location) of your birth
- If you happened to be born during a daylight saving time fall-back transition, when (in some time zones) clocks moved backwards - for example, ticking from 2:59:59 AM back to 2:00:00 AM. If you were born at 2:30, you'll need to know which instance of 2:30.
- The current universal time, expressed in UTC.
Most people do not have all of this information on a regular basis, nor do we usually care. When someone asks how long one has been alive, dead, married, employed, or any other similar question, they are usually looking for a _calendrical _answer like "30 years", or "6 months" or sometimes "2 years, 4 months and 5 days".
A calendrical unit of measure is a humanized expression of a passage of time. This is quite different than an elapsed duration of time that computer's typically measure, which often creates confusion for developers. For example, one of the first questions to be asked on Stack Overflow was "How do I calculate someone's age in C#", which has many hundreds of votes and over 70 different answers (not all of them accurate).
Here's what you need to realize, which is often missing from the discussion and the algorithm:
- We need to compute a calendric answer, not an elapsed duration of time.
- That implies the use of a calendar, and we usually care about the Gregorian calendar in modern times.
- We usually have only a date on that calendar to use as a reference. We don't usually have a time of day or a time zone. Even if we did, it wouldn't be relevant for the desired output.
- Time zones control how each of our views of this calendar are aligned to the universal instantaneous timeline.
- Our answer will vary not by the time zone of our birth, but by the time zone of where we are physically located when we ask the question!
That last part has some really interesting implications. Want to stay younger longer? Travel west! As anyone who's flown on a plane going westbound near sunset can tell you - your perception of the "day" will elongate if you can keep up with the rotation of the earth. Of course, that idea has it's limitations.
The best approach for calculating calendrical periods in C# (or any .Net language) is by using the
Period class in the Noda Time library:
using NodaTime; Period CalculateAgeFrom(int year, int month, int day, string targetTimeZone, PeriodUnits units = PeriodUnits.YearMonthDay) { Instant now = SystemClock.Instance.Now; LocalDate today = now.InZone(DateTimeZoneProviders.Tzdb[targetTimeZone]) .Date; LocalDate referenceDate = new LocalDate(year, month, day); return Period.Between(referenceDate, today, units); }
This can be used in a variety of different ways. For example:
Period age = CalculateAgeFrom(1976, 8, 27, "America/New_York"); Console.WriteLine("Today, I am: {0} years, {1} month(s), and {2} day(s) old.", age.Years, age.Months, age.Days);
Note that the time zone of "America/NewYork" I'm passing here is because as I'm writing this, I'm _physically located in the US Eastern Time zone. I was born in Arizona, and tomorrow I'll be in Seattle, but that does not matter for this question.
Sometimes it might not be possible to know the current physical whereabouts of a person. In that case, you should use the time zone of the person who is asking the question. At least the response will be accurate from their point of view.
If you'd like a simpler form of the above code, consider helper methods such as:
int CalculateAgeInYearsFrom(int year, int month, int day, string targetTimeZone) { Period age = CalculateAgeFrom(year, month, day, targetTimeZone, PeriodUnits.Years); return (int) age.Years; } int CalculateAgeInDaysFrom(int year, int month, int day, string targetTimeZone) { Period age = CalculateAgeFrom(year, month, day, targetTimeZone, PeriodUnits.Days); return (int) age.Days; }
If you're a Java developer, a similar approach can be taken with the
Period class found in Java.Time (Java 8), or the one from Joda Time. I'm sure there are options in other languages also.
Can you calculate calendric periods in .Net without using Noda Time? Sure, but some calculations are much harder than others, and the subtleties are more important than you think. I don't recommend trying. | https://codeofmatt.com/2014/04/09/handling-birthdays-and-other-anniversaries/ | CC-MAIN-2018-39 | refinedweb | 774 | 64.41 |
In this article, we are going to create our first Windows Phone 7 application running the new "Mango" update. We will start by creating the application in Visual Studio, and then we will analyze the code that makes it up to start to gain some familiarisation with Silverlight and XAML. Once we have grasped the code, we will see how we can quickly modify the generated code to display different text and react to a button click.
Recently Microsoft announced the release of an update[^] to Windows Phone 7, codenamed Mango, which brings a whole raft of top notch functionality to the hands of WP7 developers. It occurred to me that, while there are many great WP7 articles here on CodeProject, there isn't a resource that teaches WP7 development from the ground up. To solve that, it seems that we need a series of articles that try to teach WP7 in a friendly and simple fashion.
These articles don't assume that you have written any WP7, XAML (pronounced ZAMEL), XNA or that you know what Silverlight and XNA are, or how things such as Dependency Properties and databinding work. Hopefully, by the end of the series, you'll have learned enough to be able to tackle developing WP7 applications easily and with confidence.
Some of the articles will demonstrate how to use Expression Blend, but don't worry if you haven't got a copy - I'm just going to be using it for styling parts of the user interface, and you should feel free to copy the templates I'll be producing. The primary tool we'll be using in these articles is Visual Studio 2010. Where we are using Silverlight, we are going to ensure that we follow the guidelines for designing Metro applications. Metro refers to the look and feel of WP7 applications, along with how the application responds; a guiding principle of WP7 application is that it must fit in with the other applications that run on the phone and be easy to get to grips with for somebody who's used to Metro apps, but who hasn't used your application before.
Rather than rehash a lot of what has already been written about the history of Windows Phone 7, I'd suggest that you read this[^] if you are interested in the history of WP7. The points of interest for us is that development on WP7 can be done using a version of Silverlight developed to take advantage of the features of the phone, and a version of XNA (a great API designed for developing games that run on the phone, the XBOX and a Windows PC) to develop games for the phone and by the end of the series of articles, we'll have used both.
You might find the following links useful while you are reading this article.
I'm assuming that you are familiar with standard .NET and Visual Studio concepts such as namespaces, classes and code-behind files. It's not going to teach you how to code in C# (or VB.NET if that's your language of choice).
The classic application when getting started is Hello World, and this application is going to be no exception. So, let's buckle up and enjoy the ride into our Windows Phone world by firing up the old Visual Studio.
Once Visual Studio is open, select File > New > Project to display the New Project dialog. In the list of installed templates, look for the section Silverlight for Windows Phone and choose Windows Phone Application (I'm going to use C#, so the templates are installed under the Visual C# node). Let's name it MyFirstPhoneApplication and click OK to create the application.
MyFirstPhoneApplication
If you've installed any previous versions of the WP7 SDK, you'll now see a dialog asking you to choose the platform you want to target. Choose "Windows Phone 7.1".
If all has gone according to plan, we should have a solution that looks like this (don't worry if you don't have the All Open Unsaved Edited part - it's from an addin that I have installed on my development environment):
XAML stands for XML Application Markup Language. Basically, in WP7, XAML allows you to lay out user interfaces declaratively, and that's what these files contain. OK, that sounds great but what does it actually mean?
When you create just about any type of user interface, there's an implicit parent-child relationship in there. Typically, you'd have a top level form which would have a collection of child controls. Some of these child controls may contain collections of child controls. Well, XAML allows you to represent this hierarchy using XML to identify the different controls and what they belong to along with some of the properties of these controls. The important thing to remember is that anything you can do in XAML, you can do using straightforward C# or VB.NET (but it is easier to do this in the XAML). But why do we need it? Well, XAML allows designers to layout the user interface without having to know how to write any code and, once you get used to it, it does become very natural to layout your interface in.
As you're aware, in order to run an application, we have to have an entry point. Well, in WP7 this is no different, and the entry point is the Application. The default location of the Application class is in these two files (whenever you see .xaml.cs, this tells you that this is a code-behind file for a .xaml file). Let's take our first look at some XAML and see what's in that App.xaml file.
<Application
x:Class="MyFirstPhoneApplication.App"
xmlns=""
xmlns:x=""
xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
xmlns:
<!--Application Resources-->
<Application.Resources>
</Application.Resources>
<Application.ApplicationLifetimeObjects>
<!--Required object that handles lifetime events for the application-->
<shell:PhoneApplicationService
</Application.ApplicationLifetimeObjects>
</Application>
"Whoah Pete. That's some scary looking stuff there." I hear you say. Fear not, for I am here to tell you that this stuff is nowhere near as scary as it looks. Let's break this down and figure out what this all means.
<Application
x:Class="MyFirstPhoneApplication.App"
...
</Application>
Remember that I said this file was based on XML? It may come as no surprise that this file conforms to the rules of XML so the opening tag must have a balancing closing tag. In this case, the tag is Application, which tells the compiler that this is the XAML containing the Application definition. The next line simply tells the compiler what the namespace and class name is for this particular file. If you are familiar with ASP.NET, you should recognise this as being similar to the Page directive at the top of your .aspx file, with the Inherits tag telling the compiler what the ASPX inherits.
Application
Application
Page
Inherits
xmlns=""
xmlns:x=""
xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
xmlns:
These lines allow you to use functionality present in .NET namespaces directly in the XAML. There are two ways of hooking namespaces in; either by specifying a URI which will have been published as an XML namespace definition in the assembly in question, or by using the clr-namespace format (the assembly part tells the compiler which DLL the namespace is defined in if it's not in the current assembly). We'll cover namespace definition in more depth in a later article when we look at adding new assemblies and interacting with them in the XAML).
clr-namespace
Suffice it to say, if we need to interact with something that's in a namespace other than the default one covered by xmlns="", then we need to prefix the element with the namespace name we've set up here. An example of an element that's in the default namespace is Application (which is why it doesn't need to be prefixed with anything).
<!--Application Resources-->
<Application.Resources>
</Application.Resources>
Resources are items that can be reused throughout the XAML such as brushes or templates and styles. We'll cover resources in depth in a later article, but any resources that we want to be usable across any XAML page in the current application would be placed in this section. This saves us having to copy the same elements into different pages - you can almost think of this as being like a CSS file that has been included into every page.
<Application.ApplicationLifetimeObjects>
<!--Required object that handles lifetime events for the application-->
<shell:PhoneApplicationService
</Application.ApplicationLifetimeObjects>
This section is actually pretty cool because of what ApplicationLifetimeObjects do for us. Rather than having to subclass the Application class to add extra functionality, we can use this section to list extensions (yes, they are standard extension methods) that extend the Application class. WP7 provides a standard extension called the PhoneApplicationService (note the use of the shell: to tell us that it's in the Microsoft.Phone.Shell namespace). So, what does this class give us? Well, it provides access to methods that are associated with various aspects of the application's lifetime such as when it's launched. The four that are listed here as attributes (this is the easy way to add properties and events to an object in XAML) relate to the Launching, Closing, Activated and Deactivated events, and the event handlers live in the file App.xaml.cs.
ApplicationLifetimeObjects
PhoneApplicationService
Microsoft.Phone.Shell
Well, we've seen that there's code linking into the App.xaml.cs file, so what does it look like? Rather than listing the whole file out, let's open up the code in Visual Studio and I'll explain what each bit does.
Phew, that's a lot of code in there, but what does it do. Again, it's easier to understand if we break it down into little bits. This time, we aren't going to cover all the code as we really don't need to discuss the using statements, the namespace and class definitions. Right, let's look at the RootFrame property.
using
RootFrame
public PhoneApplicationFrame RootFrame { get; private set; }
All WP7 pages are displayed inside a frame, and this frame is accessible through this property. If we were to think of this in terms of a browser based application, then the RootFrame would be the equivalent of the web browser itself, and the pages would be individual HTML pages.;
}
}
This is the constructor for our App class, so it's called as soon as the class is initialised. At this stage, there are no visual items created, and nothing to hook into visually, so it's important not to add anything here that relies on visual elements being displayed.
App
The UnhandledException line is generally good practice in our applications because it provides a top-level error handler that, in all but the most extreme cases, is guaranteed to be called. The error handler is actually handled in the method Application_UnhandledException.
UnhandledException
Application_UnhandledException
If we look in the class, we won't find any implementation for the InitializeComponent method, but when we compile the code there are no errors here. So what is this? Is there a magical setting in the compiler that doesn't generate compilation errors when it encounters InitializeComponent methods in the code? Not surprisingly, this isn't the case - the real reason is much more mundane, and the clue lies in the class definition, this is a partial class. When the application is compiled, code is created for us behind the scenes in special .g.cs files, and InitializeComponent is implemented in this class.
InitializeComponent
The next line simply calls the InitializePhoneApplication method defined later on in the class. We'll cover that method shortly.
InitializePhoneApplication
The next section covers the behaviour of the application when the debugger is attached. Rather than covering them line by line, I'll give you a brief overview here of what these properties are used for (including the commented out properties). At this stage, even though we haven't finished writing our program, let's build and run it. When we run the application, it opens up the windows phone emulator (note the Windows Phone Emulator option in the toolbar).
When the emulator is firing up, it looks like this:
Once the application has loaded, it looks like this:
OK, so that was an interesting little diversion (I hope), but you may be wondering what we are doing here when I promised that I would explain the bits inside the Debugger.IsAttached section. Well, if you look carefully at the image above, you'll see what looks like interference. If I rotate the image, resize and crop it a bit, we might get a hint that there is something more going on here.
Debugger.IsAttached
So, what are those numbers? Well, they are the frame rate counters that have been enabled by the line Application.Current.Host.Settings.EnableFrameRateCounter = true;. From the left, these numbers are:
Application.Current.Host.Settings.EnableFrameRateCounter = true;
If we uncommented Application.Current.Host.Settings.EnableRedrawRegions = true; we would be able to see the items that are being redrawn every frame. If something is been drawn by the GPU, we will not see a redraw here; this is the ideal that we want - redraws being handled by the GPU.
Application.Current.Host.Settings.EnableRedrawRegions = true;
The line Application.Current.Host.Settings.EnableCacheVisualization = true; is an interesting one. This tells us what is not being redrawn by the GPU by applying a coloured tint to it. If an item is handled by the GPU and cached, it will not be tinted.
Application.Current.Host.Settings.EnableCacheVisualization = true;
The line PhoneApplicationService.Current.UserIdleDetectionMode = IdleDetectionMode.Disabled; comes with a big comment warning for a reason. In the standard phone application mode, the idle detection allows the phone to conserve resources and allows it to "hibernate" when the application has been idle for a period of time. If we switched this functionality off in the released version, our application would end up consuming power as it would not go into idle mode.
PhoneApplicationService.Current.UserIdleDetectionMode = IdleDetectionMode.Disabled;
WP7, in common with most smart phones, supports a graphics processor unit (GPU) which can be used to improve the performance of graphical applications. In general, we can let Silverlight take care of delegating the work to the GPU for us, but there are some rules that must be followed to support this behaviour. Throughout this series, we'll see areas and rules that help us identify whether or not something runs on the GPU.
Now, let's get back to the code.
// Code to execute when the application is launching (eg, from Start)
// This code will not execute when the application is reactivated
private void Application_Launching(object sender, LaunchingEventArgs e)
{
}
//)
{
}
These are the methods that were hooked up as part of the PhoneApplicationService extensions. In future articles, we'll find out how and why we need to use these methods.
PhoneApplicationService
private void RootFrame_NavigationFailed(object sender, NavigationFailedEventArgs e)
{
if (System.Diagnostics.Debugger.IsAttached)
{
// A navigation has failed; break into the debugger
System.Diagnostics.Debugger.Break();
}
}
This method breaks to the debugger, if it's attached, when navigation fails.
private void Application_UnhandledException
(object sender, ApplicationUnhandledExceptionEventArgs e)
{
if (System.Diagnostics.Debugger.IsAttached)
{
// An unhandled exception has occurred; break into the debugger
System.Diagnostics.Debugger.Break();
}
}
This is the event handler for coping with unhandled exceptions. In future articles, we are going to delve into making this a more feature rich method, and take a look at how to make the exception handling actually do something meaningful. Suffice it to say, the default implementation only breaks to the debugger if it's attached, which isn't very useful in deployed situations.
}
}
The final piece of the App.xaml.cs puzzle lies in these two methods. The use of the field phoneApplicationInitialized is there to guard against this method being called twice. The RootFrame property is initialized and the Navigated and NavigationFailed events are hooked up. Finally, the root visual object is set to be the root frame and the navigated event is dereferenced.
phoneApplicationInitialized
Navigated
NavigationFailed
That's almost it for the App.xaml files. There's one final piece of the jigsaw to sort out, how does the application know that the App class is the startup? Well, if we open up the project properties dialog and take a look in the Application tab, we can see that the startup object is set to MyFirstPhoneApplication.App. Now it's time to take a look at the MainWindow functionality.
MyFirstPhoneApplication.App
MainWindow
Let's take a look at the contents of MainPage.xaml. Don't worry if it looks cryptic at the moment, as we are going to break it down and discuss the different parts and see how they all fit together.
First of all, let's look at the actual code.
>
Well, that looks fairly scary, but you'll be pleased to know that it is actually fairly easy to understand.
=""
...
</phone:PhoneApplicationPage>
Just as in the App.xaml file, the opening attributes set up the class behind the XAML and add in the appropriate namespaces for use in the XAML. As we can see here, the phone pages inherit from the PhoneApplicationPage type which is the equivalent of an HTML page in a browser, or a Form in a WinForms application.
PhoneApplicationPage
Form
mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="768"
These attributes are interesting. The d: attributes are Expression Blend tags, and tell the design window what width and height to apply to the page. These sizes could be different from the real width and height of the page, if we wanted to - they are just set to help the UI designer lay out the screen. The mc:Ignorable tag tells the compiler to ignore any namespace that starts with d: in the XAML. Don't worry if this doesn't make too much sense right now; it will become clearer when we start to use the design window more.
d:
mc:Ignorable
FontFamily="{StaticResource PhoneFontFamilyNormal}"
FontSize="{StaticResource PhoneFontSizeNormal}"
Foreground="{StaticResource PhoneForegroundBrush}"
Right, this is going to take a little bit of explaining, so please excuse me while we divert into resources territory.
As you're no doubt well aware, Windows applications allow us to embed resources into applications which we can access in use. Well, WP7 based applications are no different - we can embed resources and access the individual resource quite easily. In Silverlight applications, we use something called a ResourceDictionary to manage our resources. Now, the syntax to access the resource items looks a bit funny, but we have to get used to it because we will use this a lot. Basically, the StaticResource markup tells the compiler that it needs to look something up from a ResourceDictionary and apply it. The use of the {} symbols tells the compiler that it is going to have to bind a value in rather than just apply a string literal.
ResourceDictionary
StaticResource
string
The values for the different resources here are obtained from the standard resource for the phone, so we don't need to add these values ourselves. Details of these resources can be found here[^].
While we could have set the FontFamily to Segoe UI using FontFamily="Segoe UI", the fact that we are using the resource dictionary means that we only need to change the dictionary and recompile it if we want to change the font to Verdana. I didn't just pick Segoe UI here either, this is the current value of PhoneFontFamilyNormal in the resource dictionary. Where possible, using resource dictionaries is a good habit to get into because of how easy it is to change elements that are being reused; in this respect, they are just like constants.
FontFamily
FontFamily="Segoe UI"
PhoneFontFamilyNormal
SupportedOrientations="Portrait" Orientation="Portrait"
These two attributes tell the phone how to display this page when it's viewed (Orientation), and whether it can be displayed in Landscape or Portrait as well (SupportedOrientation).
Orientation
SupportedOrientation
shell:SystemTray.
The system tray is a handy item at the top of the phone display that the user can tap to gain access to information such as the signal strength and battery life. By setting the visibility to true, the user has access to this functionality. We should only change this if we are really sure that your application should not display the system tray because the users are used to it being present.
The following diagram highlights the system tray in red:
<!--LayoutRoot is the root grid where all page content is placed-->
<Grid x:
...
</Grid>
Finally we're getting to the part where visual items are being added. When we want to add controls that are displayed, we need to add them to something that tells the application how to lay out the components. In this particular case, we have a Grid which behaves in a similar fashion to a Table control in HTML, in that it can be used to lay controls out in rows and columns. The attribute x:Name gives the grid a name which can be used by the code behind to get access to the grid and manipulate it if we need to. Finally, the Background attribute here makes the grid transparent.
Grid
Table
x:Name
Background
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
Rather than having to use a <TR> style of syntax to define rows in a table in the way that you would with HTML, XAML allows us to define the rows up front, which is an incredibly powerful feature that we will explore more as we go through the series because it allows us to just change a single value on a control to say where it's displayed. This is an incredible time saver if we are hand editing the XAML as we don't have to cut and paste items to move them into new rows. We will use this feature when we are modifying our application.
<TR>
The Height attribute tells the application how big to make the row, but what are those funny sizes? When the height is set to Auto, the size of the row is based entirely on how big the content is. When the height is referenced as a *, it means that the size is a proportion of the available space, and you will sometimes hear this referred to as star sizing. In this particular case, it means that it uses all the remaining space.
Height
Auto
*
Note: If we don't supply this section, an implicit row definition is added to our grid that takes up the whole size of the grid.
>
The first row of the grid is going to container another control container. This time, it contains something called a StackPanel which tells the application to position controls on separate lines, or all on the same line. By nesting containers, we can create flexible layouts with a minimum of fuss. Again, we'll do a lot more of this throughout the series, so we'll get plenty of practice using the different types of containers. The Grid.Row attribute is an interesting one. If we go to the documentation for the StackPanel and search all night, we won't find any references to Grid.Row on there. Is this an omission on Microsoft's part? Well, while the MSDN is occasionally apocryphal, it's not wrong in this case. Grid.Row is a wonderful thing called an attached property, which we can think of as being a global property that can be attached to any type of object, and it helps control the behaviour of the object. In this case, it tells the grid to position the StackPanel in the first row (note that this is zero based).
StackPanel
Grid.Row
The Margin is used to control the space between elements.
Margin
Now we actually get to some elements that the user can see. In the XAML, we have two TextBlock controls which display the text from the Text attribute.
TextBlock
Text
Okay, we've covered the boilerplate code in quite some depth, and you are probably itching to get your hands dirty. Well, itch no more, we're going to add some code for ourselves.
The first step is to open up MainPage.xaml in Visual Studio. We are going to use the design window rather than edit the XAML directly. If you've never used the designer window, you can get to it using the Design tab (highlighted in red below):
First of all, let's change the text MY APPLICATION into Hello World. Make sure that the Properties tab is open, then click on the text in the design view and the TextBlock that contains it is selected.
The Properties tab should look something like this:
Right, change the text to read Hello World. Then select the page name and remove the text from the Text property. At this stage, our phone application should look like this in the designer:
Now we're going to add a button to the application. Open up the Toolbox and double click on the button control. This adds the button to the top left corner of the phone designer.
Not very attractive is it? Well, we're about to see exactly what that grid row property does. In the properties window, scroll down until you can see the Grid.Row property.
Change the value of to 1, and be amazed as the button moves to the next row. Rather than showing you a screenshot of this stage, let's set a couple of other properties. Set both the HorizontalAlignment and VerticalAlignment properties to Center to move the button. Once you have done this, double click the button to create an event handler in MainPage.xaml.cs, and in the code that's generated, add the following code:
1
HorizontalAlignment
VerticalAlignment
Center
PageTitle.Text = "BOO !!!!";
This line of code sets the text to be displayed in the TextBlock called PageTitle.
PageTitle
Finally, compile the application and run it. It should look like this:
Now press the button and watch the BOO !!!! appear.
The XAML that accomplishes this magic looks like this:
="Hello World"
Style="{StaticResource PhoneTextNormalStyle}"/>
<TextBlock x:
</StackPanel>
<!--ContentPanel - place additional content here-->
<Grid x:</Grid>
<Button Content="Button"
Height="72"
HorizontalAlignment="Center"
Margin="10,10,0,0" Name="button1"
VerticalAlignment="Center" Width="160" Grid.
</Grid>
The most interesting part of that particular code lies in the line Click="button1_Click". This ties the click event of the button up to the button1_Click method in MainPage.xaml.cs.
Click="button1_Click"
button1_Click
That's it, we've created our first WP7 application using both XAML and code-behind that hooks up to the controls defined in the XAML.
As you can see, adding and editing controls in the designer can be a very simple task; the changes we made to the application were accomplished quickly and with the minimum of effort. Once you become comfortable with the tooling, creating XAML can be a relatively painless process, as evidenced by the speed we have managed to modify the code to do what we want.
A final question to ponder. How does the phone know that MainPage is the page to display? After all, there's nothing immediately apparent in the XAML or the code behind to tell the compiler to mark this page, and there's no reference to it in App.xaml. The answer lies in the file WMAppManifest.xml which creates a task that points to MainPage. We aren't going to cover this file in any depth in this article, but we will come back to it in a future article when we see how it affects pinned and live tiles.
In this article, we have created our first WP7 application using Visual Studio, and modified it to display Hello World. Along the way, we have analysed the basics of a XAML application, and started to see how code behind and XAML fit together. Finally, we changed the project to display Hello World and react to a button click.
In future articles, we will expand on the knowledge we have gained here and really start to get a deeper understanding of WP7 development.
The following books could be of some assistance while learning WP7:
I'd like to thank Hans Dietrich, Keith Barrow, DaveAuld, Tom Deketelaere, gavindon and all the other great CodeProject members who have offered their invaluable help in the crafting of this article. If I've forgotten anybody, the fault is mine and mine alone and I apologise unreservedly. Please let me know if I have missed you out and I'll update the list accordingly to reflect your diamond status.
Please, if you feel this article doesn't meet your needs, or that there are things in here that you don't understand or that I haven't explained clearly, please let me know. Your input into this series is invaluable. Please don't worry if you can't remember the syntax or the concepts clearly, we will be covering each area in much more depth as we progress through. | http://www.codeproject.com/Articles/225897/Peeling-the-Mango-Win-Phone-7-Programming-from-the?fid=1639402&df=90&mpp=10&sort=Position&spc=None&tid=4294278&PageFlow=FixedWidth | CC-MAIN-2015-14 | refinedweb | 4,893 | 60.95 |
informative code central example that shows you how to calculate the Australian Golf Handicap using eXpress Persistent Objects (XPO):
Example Details: How to calculate Australian Golf Handicap using XPO
The example demonstrates how to use the ASPxGridView with XPO to calculate the handicap used in Australian golf. The calculation algorithm and some sample data come from the The "Rolling Sample" Handicap Calculation Method article.
The sample features:
And it’s a complete sample. For example, here is the Player.cs file which defines the ‘Player’ class. To see the VB.NET version of this project, click the ‘Programming Language’ dropdown on the sample.
using System;
using System.Linq;
using DevExpress.Xpo;
using DevExpress.Data.Filtering;
using System.Collections.Generic;
public class Player : XPObject {
const Double BonusForExcellense = 0.96;
public Player()
: base() { }
public Player(Session session)
: base(session) { }
public override void AfterConstruction() {
base.AfterConstruction();
}
protected String _Name;
public String Name {
get { return _Name; }
set { SetPropertyValue<String>("Name", ref _Name, value); }
}
[NonPersistent]
public Int32 Handicap {
get {
List<Result> results = LastBestTen();
if (results.Count == 0)
return 0;
Double handicap = 0.0;
handicap = results.Average<Result>(x => x.PlayedTo); // average
handicap *= BonusForExcellense; // avarage * 0.96
handicap = Math.Round(Math.Truncate(handicap * 10) / 10); // 14.496 -> 14.4 -> 14
return Convert.ToInt32(handicap);
}
}
[Association("Player-Results", typeof(Result))]
public XPCollection<Result> Results {
get { return GetCollection<Result>("Results"); }
}
public List<Result> LastBestTen() {
if (Results.Count <= 10)
return Results.ToList<Result>();
XPQuery<Result> results = new XPQuery<Result>(this.Session);
var list1 = (from r in results
where (r.Player == this)
orderby r.Date descending
select r).Take(20).ToList<Result>();
var list2 = (from r in list1
orderby r.PlayedTo ascending
select r).Take(10);
return list2.ToList<Result>();
}
}
You can download and run the Code Central samples direct from your local machine. Just click the ‘Download Source Code’ button for the sample page.
Watch the ‘How To Use Code Central’ video to learn more:
Vest, one of our awesome support engineers, created this sample for one of our awesome customers, Daryn.
Check out the ‘Australian Gold Handicap using XPO’ sample.:
Harry, I am right in the midst of my beginning to learn XPO and jumped at the opportunity to see a real life example of XPO in action. Unfortunately, it is lacking the same simple item as many of your examples - not one single comment line anywhere.
/// <summary>
/// An XPO Session is used generally because blah blah blah
/// Have a look at <a bla bla> for more detailed info</a>
/// </summary>
/// <returns>The session object to be used</returns>
public static Session GetNewSession() {
return new Session(DataLayer);
There is no computer on Earth powerful enough to computer my handicap. It requires the use of NASA super computers and a scientific notation so scary that it can't be mentioned by name.
Hello Glen,
The sample uses an approach from our KB article:
Please take a look at it. It should help you.
Great example, thanks for posting.
Would I be able to coax anyone into doing an XAF version. It would be of great learning value to me.
Thanks,
Louis
Great example, thanks for posing.
Would I be able to coax anyone into doing an XAF version. It would be a great learning example for me.
Hi Harry,
Wonderful example. I have to agree with Louis above. Please get Gary and implement this in XAF. I have already spoken with Gary on twitter. Basically i think what we would like to to see is a app with several tables that involves calculations like the one above then updates to tables.
@Louis, Jake:
We appreciate your feedback. Actually, there is already a corresponding CC example for XAF. It's also described in the product's documentation: documentation.devexpress.com
So, instead of implementing the same code in an XAF application, I would recommend that you learn the concept of accomplishing such tasks. This will help you implement any calculation algorithm your business needs dictate.
Feel free to contact us via the Support Center, if you need any assistance with our products.
Please
or
to post comments. | https://community.devexpress.com/blogs/aspnet/archive/2010/07/16/how-to-calculate-australian-golf-handicap-using-xpo.aspx | CC-MAIN-2020-05 | refinedweb | 677 | 52.15 |
I have been working on a scenario where a WCF service hosted on IIS 7, communicates to the SharePoint 2010 server object model for performing List operations. Although SharePoint 2010 has provided in-built WCF services for working with List etc, but one my students who is working as a SharePoint 2010 developer, was having the following idea: In the diagram, I have tried to explain the type of the application he wants to develop. The application scenario of the diagram is as shown below:
The requirement behind creating a WCF service here is that the client application need not to be directly connected to the SharePoint 2010 Services. Instead an end-user carrying his laptop, will use a Desktop application to perform SharePoint List updates using WCF service. To develop such a scenario, I have used Windows 7 Enterprise N 64 bit OS with IIS 7.5. I have SharePoint 2010 installed on the same machine. Here I am assuming that you are using SharePoint 2010 and know how to create a Web Site collection and List etc. For the above scenario, I have a Web Site collection which contains the ‘SalesInfo’ List as shown below. In the application we will develop, I have used the following classes:Step 1: Open VS2010 and create a blank solution, name it as ‘SPS_2010_List_Custom_WCF’. In this solution add a new WCF service application (targeting to .NET 3.5), name it as ‘WCF_SPS_2010_List_Service’. In this project, add a reference to the ‘Microsoft.SharePoint.dll’. This component is available in the following path:C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI\Step 2: Rename IService1.cs to IService.cs and rename Service1.svc to Service.svc. Write the below code in IService.cs:Step 3: Make sure that Service.svc markup is as below: <%@ ServiceHost Service="WCF_SPS_2010_List_Service.Service" Language="C#" Debug="true" CodeBehind="Service.svc.cs" %>Step 4: Open web.config file and set the EndPoint to BasicHttpBinding as below: Step 5: Open the Service.svc.cs and write the following code: The above code makes use of the SharePoint 2010 Server object model. It opens the SharePoint 2010 site and program sagainst the List with the name ‘SalesInfo’. Please read the comments carefully for the ‘CreateSalesRecord()’ method. Step 6: Debug the above WCF service targeting the ‘x64’ platform because SharePoint 2010 requires 64-bit support to execute. Step 7: Publish the WCF service on IIS 7.5 and make sure that the ApplicationPool under which the applicaton is published, targets to .Net 2.0 (this also supports .NET 3.5). Also configure the Identity of the ApplicationPool to ‘LocalSystem’ as shown below: Step 8: Now publish the WCF Service by creating Application under this app pool. Step 9: In the same solution, add a new WPF application, name it as ‘WPF_ClientWCF_SPS’. In this application, add the service reference of the WCF service. Name the namespace as ‘MyRef’. Step 10: Add 3 TextBoxes, 3 TextBlocks and a Button on the MainWindow.xaml as shown below: Step 11: In the ‘Save Product’ button click event, write the following code. This code makes a proxy object of the WCF service and makes a call to the CreateSalesRecord() method by passing the SharePoint 2010 site Url and the SalesInfo object to it. Step 12: Run the application, Enter data in the TextBoxes and click on the ‘Save Product’ button, the result will be as shown below: You can check the newly added entry in the List by refreshing the SharePoint 2010 site, as shown below: The Need for SPUserToken.SystemAccount Open the Service.svc.cs file and change the following code from: toNow publish the WCF service on IIS and Update the WCF service reference in the client application, put breakpoints on the ‘Save Product’ button click and the ‘CreateSalesRecord()’ and run the application. For debugging the WCF Service hosted on IIS, attach the w3wp.exe process. During debugging, the following exception will be displayed: “Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))” Here the code will crash because the user object received from the client to the WCF service, will not be able to authorize itself against the SharePoint 2010 Server Object Model. Conclusion: We have seen how to use a custom WCF service to isolate the core SharePoint 2010 services from the direct access of the remote client application. The WCF Service acts as a successful interface between the remote user client and SharePoint 2010. The entire source code of this article can be downloaded over here | http://www.dotnetcurry.com/showarticle.aspx?ID=760 | CC-MAIN-2015-18 | refinedweb | 756 | 63.8 |
I am trying to solve a problem with many variables using scipy and linear programming. I have a set of variables X which are real numbers between 0.5 and 3 and I have to solve the following equations :
346 <= x0*C0 + x1*C1 + x2*C2 +......xN*CN <= 468
25 <= x0*p0 + x1*p1 + x2*p2 +......xN*pN <= 33
12 <= x0*c0 + x1*c1 + x2*c2 +......xN*cN <= 17
22 <= x0*f0 + x1*f1 + x2*f2 +......xN*fN <= 30
import numpy as np
from scipy.optimize import linprog
from numpy.linalg import solve
A_ub = np.array([
]])
b_ub = np.array([468, 33, 17, 30, -346, -25, -12, -22])
c = np.array([34, 56, 32, 21, 24, 16, 19, 22, 30, 27, 40, 33])
res = linprog(c, A_eq=None, b_eq=None, A_ub=A_ub, b_ub=b_ub, bounds=(0.5, 3))
[-34, -56, -32, -21, -24, -16, -19, -22, -30, -27, -40, -33]
-346 > -(x0*C0 + x1*C1 + x2*C2 +......xN*CN)
0.425
res.fun
nan
res.x
x0*C0 > x1*C1 > x2*C2 >...> xK*CK >...xN*CN > 0
x0*C0 >= 33/2
This system of inequalities is not feasible: there is no solution that satisfies all constraints. You can see that from
res:
fun: 0.42500000000000243 message: 'Optimization failed. Unable to find a feasible starting point.' nit: 28 status: 2 success: False x: nan
I believe this is a correct result (I verified this with another LP system).
Note: if you change the bounds to
(0,3), you will get a feasible solution. | https://codedump.io/share/ird9QnMMppNd/1/solving-multiple-equations-with-many-variables-and-inequality-constraints | CC-MAIN-2017-43 | refinedweb | 251 | 87.01 |
I'm only trying to print out the elements of this array out into a table. I'm going to do more formatting on it in a moment, but I can't seem to get a temporary variable to work in the printf.
for (int count1 = 0; count1 < 4; count1++)
{
for (int count2 = 0; count2 < 4; count2++);
{}
System.out.printf( "%10.2f ",
sales[count2][count1]);
}
The printf statement is telling me that count2 needs to be declared before it can be used.
Any help with the probably stupid question would be awesome.
Code:public class EmpSales { public void SalesData() { double data[][] = new double[][] { {101,1,152.34}, {101,2,23.45}, {101,3,12.45}, {101,4,76.34}, {101,5,12.45}, {102,1,65.23}, {102,2,12.34}, {102,3,87.23}, {102,4,123.45}, {102,5,65.34}, {103,1,150.5}, {103,2,200}, {103,3,127}, {103,4,32.45}, {103,5,195.86}, {104,1,72.85}, {104,2,41.18}, {104,3,12.34}, {104,4,43.56}, {104,5,77.77}, {101,1,234.34}, {101,2,286}, {101,3,154}, {101,4,350.23}, {102,1,244}, {102,3,247}, {102,5,311}, {103,2,86.12}, {103,3,238}, {103,4,278}, {103,5,165.65}, {104,1,85}, {104,2,148}, {101,5,199.12}, {102,1,277}, {102,2,302.76}, {102,3,119}, {102,4,122}, {102,5,171.77}, {103,5,77.65}, {101,1,56}, {101,2,273}, {101,3,69.12}, {101,4,70}, {101,5,102}, {102,3,200}, {102,4,112.12}, {102,5,219}, {102,2,207}, {103,1,112.89}, {103,3,339} }; int row = 0, column; double saleAmount; double sales[][] = new double [5][4]; { for (int i = 0; i<51; i++) { row = (int) data[i][1]-1; column = (int) (data[i][0]-101); saleAmount = data[i][2]; sales[row][column] += saleAmount; } } for (int count1 = 0; count1 < 4; count1++) { for (int count2 = 0; count2 < 4; count2++); { System.out.printf( "%10.2f ", sales[count2][count1]); } } } } | http://forums.devshed.com/java-help-9/simple-error-printf-ignoring-variable-941324.html | CC-MAIN-2016-22 | refinedweb | 346 | 85.59 |
odbx_lo_read man page
odbx_lo_read — Reads content from a large object
Synopsis
#include <opendbx/api.h>
ssize_t odbx_lo_read
(odbx_lo_t* lo, void* buffer, size_t buflen);
Description
To get the content of a large object, odbx_lo_read() fetches the data in one or more pieces from the server and stores it into the user supplied
buffer. After opening the large object using odbx_lo_open(), the first call to odbx_lo_read() will return the bytes from the beginning. The second and all other calls will store subsequent parts of the large object content into the
buffer until the end of the data is reached. To reread the content a second time, you have to close the large object handle and reopen it again as some databases provide no way to reposition the internal file position indicator for the stream.
The
lo parameter has to be the large object handle created and returned by odbx_lo_open() via its second parameter. It becomes invalid after it was supplied to odbx_lo_close() and this function will return an error in this case. The large object content fetched from the server is stored into the user supplied
buffer up to
buflen bytes.
Return Value
odbx_lo_read() returns the number of bytes placed into
buffer, which may be up to
buflen bytes. If the end of the content is reached and no more data is available, the return value will be 0. read from the large object
- -
ODBX_ERR_HANDLE
lois NULL or the supplied large object handle is invalid
See Also
odbx_lo_open(), odbx_lo_close() | https://www.mankier.com/3/odbx_lo_read | CC-MAIN-2017-26 | refinedweb | 247 | 65.76 |
LoPy - Library for DHT22
Dear All,
Does anybody know if there is a working library for the DHT22 temperature and humidity sensor?
Thanks and regards.
@Jurassic-Pork thanks! Will give it a try.
Best regards.
- Jurassic Pork
hello,
there is my Pure Python library for reading DHT sensor on Pycom boards.
Since the firmware version release v1.7.7.b1 there is a new function pulses_get that have improved my DHT library.
the old version is here
the new version with pulses_get used is here
if you have sometimes some errors (wrong number of bits read) i can improve the library with
this code (retry) in my library :
def read(self): # pull down to low r=0 #loop for retry if needed while True: self.__send_and_sleep(0, 0.019) data = pycom.pulses_get(self.__pin,100) self.__pin(1) self.__pin.init(Pin.OPEN_DRAIN) #print(data) bits = [] for a,b in data: if a ==1 and 18 <= b <= 28: bits.append(0) if a ==1 and 65 <= b <= 75: bits.append(1) #print("longueur bits : %d " % len(bits)) if len(bits) != 40: print("length error : %d " % len(bits),end='') #print(data) r +=1 print("-> retry %d" % r) # return error if too much retries if r==3: return DTHResult(DTHResult.ERR_MISSING_DATA, 0, 0) time.sleep_ms(200) else: break
friendly J.P
@fuzzy Yes, that#s possible, but difficult. There is a DHT module in the micropython.org variant. One could consider to carry that over, but it is a little bit tricky, since the underlying architecture of the SW is totally different, and it is not just copying the interface part.
@robert-hh thanks! I will have a look at the SHT31. Just out of curiosity: is there a way to make a native ESP32 function to read fast the transitions and then make it available (somehow) as a micro python module?
Thanks!
@fuzzy The problem with the DHT22 and DHT11 is, that they require fast reading of the transitions. For the usual implementation of setting up an IRQ handler Python code is too slow. That's the reason why the actual code first reads in a burst of data and then tries to decode the 1/0 transitions. That fails sometimes. For the moment, you can simply retry if you get an checksum error.
Alternatively, you could used sensors with an I2C or SPI interface, for which there is better support by XxPy devices, like SHT31. @maamar has also implemented code for that device
@robert-hh thanks (once more...!) Do you know of any other components from which I could get reliable temperature and humidity reading using the LoPy? Or you think that detecting and ignoring the checksum errors from reading the DHT22 suffice for most (non-critical) applications?
Thanks and regards.
@fuzzy There is a lengthy thread about this with some sample code:
One of the samples is one of mine, shown below:
from machine import enable_irq, disable_irq, Pin import time _LIMIT = const(8) _BUFFERSIZE = const(1000) def getval(pin) : ms = [1] * _BUFFERSIZE mslen = len(ms) pin(0) time.sleep_us(20000) pin(1) irqf = disable_irq() for _ in range(mslen): ms[_] = Exception as err: print (err, ix, len(bits)) return([0xff,0xff,0xff,0xff]) for i in range(len(res)): for v in bits[i*8:(i+1)*8]: #process next 8 bit res[i] = res[i]<<1 ##shift byte one place to left if v > _LIMIT: res[i] = res[i]+1 ##and add 1 if lsb is 1 if (res[0]+res[1]+res[2]+res[3])&0xff != res[4] : ##parity error! print("Checksum def run(): dht_pin=Pin('G7', mode=Pin.OPEN_DRAIN) dht_pin(1) temp, hum = DHT22(dht_pin) temp_str = '{}.{}'.format(temp//10,temp%10) hum_str = '{}.{}'.format(hum//10,hum%10) # Print or upload it print(temp_str, hum_str) run()
I did NOT test that recently again. It supports both DHT22 and DHT11
Edit: Just tested it again and it seems to work as good and bad as before. Sometimes it returns checksum error, in which case the results are not valid. | https://forum.pycom.io/topic/1809/lopy-library-for-dht22 | CC-MAIN-2018-17 | refinedweb | 672 | 72.66 |
full. To completely erase and reformat the NVS memory used by Preferences, create a sketch that contains:
#include <nvs_flash.h> void setup() { nvs_flash_erase(); // erase the NVS partition and... nvs_flash_init(); // initialize the NVS partition. while(true); } void loop() { }
You should download a new sketch to your board immediately after running the above, or changes.
/*:
77
how much should be the size of the data for storing user preferences? How many variables can be stored? How many times can I store (write) the data using preference??
Thanks for the extra info Carl!
I invite you to take a look at an article I wrote a while back about the new logging API at..
[UPDATE]
Please check comment:.
Hey, quick note to say I just read through your long comment and found it very useful. Basically most of the way through reading the article I was thinking “yeh this is great but it’s a bit simple – I’ll definitely need to have some way of writing initial factory default values to memory, which also means I will need to create a bool flag called firstRun or somesuch”. Not difficult in any way but was really useful to see all your detailed comments. Must have taken a good couple of hours writing that comment – it definitely helps me as I’m writing my first firmware for an ESP32 and trying to figure these things out. Cheers!
Mat, glad it was of use. Was also writing my first Arduino sketch at the time and figured this might help others fill a few gaps I found in the docs. All the best.
Sara: I wrote a lengthy [comment] () a while back to the [ESP32 Preferences] () tutorial. Since then I’ve found that some of the information in that post is quite incorrect — mostly in the way I described how keys within a namespace are created.
I’ve now corrected all this and have had published in the official arduino-esp32 project documentation the [Preferences API] () and a [Tutorial] () to match.
If it doesn’t cause too much trouble, could you flag my original post on your site with a disclaimer (or delete it completely in lieu of this note) and point to the official documents instead.
Thanks again for a great site and all the work you put into it.
Hi.
Thanks for sharing your documentation.
I’ve updated your previous comment.
Regards,
Sara
For those who are stock with ESP8266 (like me!) I have found a library which passed the simple read and write test on my board. Here is the URL:
You need to add the extracted zip file to Ardiono IDE’s “libraries” directory.….
Wow just wow great tutorial. i am trying to get the following to work but to no avail. I’m new to Arduino esp32. i am trying to save the state of 7 GPIOs every time getOutputStates() is called
I am having problems getting the following line to work and not sure how to get it going
putInt(String(outputGPIOs[i]), int32_t String(digitalRead(outputGPIOs[i])));])));
putInt(String(outputGPIOs[i]), int32_t String(digitalRead(outputGPIOs[i])));
//preferences.putBool("state", String(digitalRead(outputGPIOs[i])));
}
i get the following error:
exit status 1
invalid conversion from ‘int’ to ‘const char*’ [-fpermissive]
Thank you
i noticed i cut off preferences. on a couple of lines:])));
preferences.putInt(int(outputGPIOs[i]), int32_t (digitalRead(outputGPIOs[i])));
//preferences.putBool("state", String(digitalRead(outputGPIOs[i])));
}
Hi Mark,
I had the same struggles composing the complex statements. I wanted to build a 30 day averaging array and store an incrementing index value, last sensor read value and the array[30]. The reason to work this way is the use of “Deep Sleep” mode.
Being a self learning hacker (no formal training) and tinkerer I did a fair bit of research and realized it would be far better to use “SPIFFS” with an abstract layer helper library and stay away from the use of “JSONVar” which can lead to memory leakage issues (or so I read)
I created an example that uses “daily” random values simulating 22500lts as a full water tank and each reading reduces by what would be a “daily use” and is put into the array[10] to get the average use value. Obviously the array size can be anything you want. Example sketch here
Note: Use the “SPIFFS” format sketch on this site to start over and I discovered even with this you’ll get an incorrect first value so I used the “clear()” class to get round that. You can use “esptool.py” and do an “erase_flash” which will zero the flash but you must do a “SPIFFS format” before loading a new sketch or it will error.
Hope this helps
Cheers
Ralph
Ralph,
Thanks for the pointers. i think i have figured out. the “key” in preferences has to be char can not be any other type.
I am using SPIFFS for the web stuff to turn on and off the relays manually and set the schedule on and off times for each relay using an RTC.
I’m not concerned about memory leakage as the eps will be reset every night at midnight. now that i have the preferences working.
Thanks
Mark
Could this library work on a sensor DataLogger? Saving the data given from sensor in the flash memory…
Hi.
Yes, it would work.
But, I recommend using a microSD card instead.
You can use this tutorial as a reference:
Regards,
Sara
Hi, Im looking for some guidance on getting the preferences library to store the mqtt server for the arduino PubSubClient. I can store wifi ssid, password, mqtt username and password, but I cant figure out how to get the server name stored. Tried as a string value and char value in the preferences library. Anyone out there have a suggestion? i an getting the name of the server (likely the IP address) from a text box on a nextion display then hoping to use the settings to connect to mqtt server. Ive tried converting the string value to a char, etc.. c_str() just cant seem to figure out how to get it to work!
in my setup i try to get the server name like so:
(THIS IS JUST THE LASTEST TEST – THIS CODE HAS BEEN WORKED OVER MANY TIMES TRYING TO USE CHAR RATHER THAN STRING, ETC..)
String mqttserver=”192.168.1.45″;
String mqttuser= “admin”;
String mqttpass = “123456”;
PubSubClient client(mqttserver, 1883, callback, wifiClient); //THIS FAILS
//PubSubClient client(“192.168.0.45″, 1883, callback, wifiClient); //THIS WORKS FINE
String Wifissid=”Mango-2.4″;
String Wifipass=”32012345678”;
setup() {
preferences.begin(“myapp”, false);
preferences.putString(“mqttserver”, mqttserver);
preferences.putString(“mqttuser”, mqttuser);
preferences.putString(“mqttpass”, mqttpass);
preferences.putString(“wifissid”, Wifissid);
preferences.putString(“wifipass”, Wifipass);
WiFi.disconnect();
String p_ssid = preferences.getString(“wifissid”,””);
String p_pass = preferences.getString(“wifipass”, “”);
WiFi.begin(p_ssid.c_str(),p_pass.c_str());
Serial.print(“Connecting to WiFi ..”);
uint32_t moment=millis();
while ((WiFi.status()!=WL_CONNECTED) && (millis()-moment<8000)) { // make sure to do a timeout in
// case of incorrect details to prevent eternal loop
Serial.print(“.”);
delay(100); // or yield(); (delay() includes a yield();)
}
if (WiFi.status() == WL_CONNECTED) {
Serial.println(‘Connected’);
Serial.println(WiFi.localIP());
Serial.println(mqttserver);
Serial.println(mqttuser.c_str());
Serial.println(mqttpass.c_str());
//PubSubClient client(*mqttserver, 1883, callback, wifiClient);
}
//TRYING TO OVERRIDE THE INITIAL DEFINITION HERE
//const char *mqserver = preferences.getString(“mqttserver”,””).c_str();
// client.setServer(“192.168.0.45”,1883);
String mqs = preferences.getString(“mqttserver”,””);
client.setServer(mqs, 1883);
}
Hi.
Can you better describe what is exactly the error that you get?
Regards,
Sara
ola, estou tentando atualizar uma chave no preferences pelo webserver. Porem nao consigo fazer o putString dentro do server.on, quando faço a chamada via GET.
//– ATIVA/DESATIVA (MR1)
server.on(“/ativaMR1”, HTTP_GET, [](AsyncWebServerRequest *request){
request->send(200, “text/html”);
jso.putString(“teste”, jesio);
Serial.println();
Serial.println(“SOLICITAÇÃO OK: ATIVAR (MR1)”);
Serial.println();
});
Pode me ajudar?
att
Jesio,
You need to read up on the difference between HTTP_GET and HTTP_POST
From your HTML you send data to to be processed by the ESP with a HTTP_POST command and if you want to bring data into your HTML you use the HTTP_GET.
Might be worth having a look at an example like this
Cheers
Hi.
You have to run that command outside of the server.on function.
Instead, you should have a flag variable that will change its value when it receives that request.
Then, in the loop(), check that the value of that variable, and if that’s the case, call the putString function.
I hope this helps.
Regards,
Sara
Hello,
I have a problem with preferences.
I am making an esp ap, with an web interface to change some settiings.
One of the settings is to change the wifi channel.
To change the channel I have a dropdown on the interface which changes the index of the url (like: 192.168.4.1/Channel/1)(which works). Then I am getting the index and comparing them in a function and if it’s a special index the preferences get changed with a value
(like: if (header.indexOf(“GET /Channel/1”) >= 0) { preferences.putInt(“channel”, 1); }) (this works either). But if the value is 10 and over it just takes the 1 and saves it.
The “channel” preferences is an integer.
I hope this should be enough and it doesn’t sound too confusing.
best regards
Andreas
Hi.
The issue that you’re facing is normal because the indexOf locates a string within another string. So, it will find the “1” if you have “10” “11” “1111” and so on.
So, the best way to do this is to get the number after the last “/”.
First, locate the last “/” position—you can use the lastIndexOf() function:
int lastSlash = header.lastIndexOf(“/”);
Then, calculate the string size:
int stringLen = header.length()
And then cut the string to get the number:
channel = header.subString(lastSlash+1, stringLen);
I hope this helps.
Regards,
Sara
Heii,
I’ve tried what you said put I don’t really understand how this works or should work.
I used indexOf before even with numbers higher than 9 and it worked.
I don’t want to get the number after the last slash, I just want to check if the header index matches what I have in the if.
Example:
if (header.indexOf(“GET /ChannelSet/1”) >= 0)
{
preferences.putInt(“channel”, 1);
}
else if (header.indexOf(“GET /ChannelSet/2”) >= 0)
{
preferences.putInt(“channel”, 2);
}
…
else if (header.indexOf(“GET /ChannelSet/13”) >= 0)
{
preferences.putInt(“channel”, 13);
}
(I know I could make this somehow shorter but it just has to work, memory and speed is not important, and I’m kinda lazy)
I used the same for signal strength with numbers like 19, 17 and it works the only difference is the type. I save it as an string (preferences.putString(“outputStateStr”, “18”)) instead of an integer.
Maybe I don’t understand how the indexOf works, but I don’t know what I’m doing wrong or right.
Sry if this is annoying.
Regards,
Andreas
Thank you for this great article comming from PIC with eeprom,
I have one question regarding the “preferences.end” statement. For me it is not clear when to use that statement. Even with the help of de posting Xylopyrographer did on March 4, i am not sure when to use it.
Thank you for your time,
Eric
Hi Eric. Best answer is, every time your app finishes writing or reading data using the Preferences library, it should use a “preferences.end” to safely close the NVS and to release the preferences object. This ensures the NVS is in a good state and it also releases system memory back to the pool. So sequence would be preferences.begin –> put or get your data –> preferences.end
Hi, I would ti save data received over BLE ti memory e then send this data over WiFi, how can I do It?
I want to comment on a – in my opinion very bad habit regarding naming objects –
I think it’s a really bad habit to name an object with a similar name than the library-filename.
In this tutorial it is
Preferences preferences;
Sure the library’s filename starts with a capital letter “P” and the object’s name starts with lowcase “p” in my opinion this difference is too small to demnstrate which names are fixed and which names are user-definable
In my opinion the name of the object should reflect its nature
I would prefer
Preferences myPrefObj;
Where “my” indicates it IS user-definable
“Pref” gives the hint what it is
and “Obj” that it is an “object” that has functions which can be accessed by
writing “objectName.functionName”
“objectName.functionName”
the objects name dot the functions name
My naming-convention differs from the usual but
every advanced user and of course every expert will understand my naming instantly. The difference is that BEGINNERS will understand MORE.
If advanced users and experts understand it immediately and BEGINNERS understand more what arguments are left against such a naming-convention?
NONE! It is just a bad habit nobody things about.
Hi.
Yes. You are right.
I used the convention on the Preferences library example.
Regards,
Sara
Hello, I will transfer files from a client to ESP32. After the file is transferred, I want to print it to the sd card, how can I do it?
First get your data over to the ESP32.
Then in Arduino IDE, checkout Examples->SD(esp32)->SDTest.
Use
n = SDFile.read(byte *buffer, BUFSIZE);
to read,
n = SDFile.write(byte *buffer, BUFSIZE);
to write, and
SDFile.seek(a)
to set the read/write address in the file.
Hello
Great article. I am using a NodeMCU-32S board and the preference lab doesn’t work for me.
The counter doesn’t increase when the board resets. If the problem is with my board could you recommend an ESP32 dev board that you would guarantee me that I could read/write to the ESP32 EEPROM on that board.
Thanks
Hi.
Do you get any errors?
Do you have another ESP32 board to experiment with?
Regards.
Sara
If running battery power only, does using preferences have a significant impact on power consumption? Ideally I would like to save some key parameters in preferences for if/when the battery dies, but not at the expense of consuming more power in the process so that the battery drains even faster!
Hello
Great article.
I had read on web “Hoever, the good news is that the EEPROM.write() on the ESP32 has the same properties of update. It only writes to EEPROM if we want to write something different.
Now using the prefences insted of EEPROM library do the preferences.putUInt the same thing? For example, if the new value is equal to the value stored in ESP32 flash memory, then the preferences.putint counts as a write cycle or not.
Thanks and sorry for my bad english
Hi Sara
Great article
I read in one of yuor article “Contrary to the Arduino, the ESP32 doesn’t have an EEPROM.update() function.
however, the good news is that the EEPROM.write() on the ESP32 has the same properties of update. It only writes to EEPROM if we want to write something different.”
Does the preferences.put command behave the same? So it only writes in the ESP32 flash memory if the new value is not equal to the already stored value. So no write cycle ?
Thank
Reinhold
Hey, how can I iterate the namespaces? I would like to utilise code in a way that creates separate namespaces like np1, np2, np3 and so on maybe using a loop?
I would like to create separate namespaces in order to store and manage these namespaces later on….
Here’s the code I have tried. Feel free to let me know if there is any other way to do it:
#include<Preferences.h>
Preferences ok;
int i = 0;
void setup() {
Serial.begin(115200);
}
void loop() {
while (i < 289) {
char testarr[] = “test 1”;
Serial.println(testarr);
ok.begin(testarr, false);
testarr[5] = int(i);
Serial.println(testarr);
ok.putString("rtc", "09/09/2022 09:09:09");
ok.end();
}
while (i < 289) {
char testarr[] = “test 1”;
Serial.println(testarr);
ok.begin(testarr, false);
testarr[5] = int(i);
Serial.println(testarr);
String eh = ok.getString("rtc");
Serial.println(eh);
ok.end();
}
}
Feel free to correct me as well.
It appears that Preferences does not work on the ESP32-C3 development boards (I’m using an Adafruit ESP32-C3 QT-Py). the StartCounter example does not work – the countr does not increment.
ESP-ROM:esp32c3-api1-20210207…
Does any know what the issue is with boards based on these chips? Id there an alternative storage method/library I can use in the meantime? | https://randomnerdtutorials.com/esp32-save-data-permanently-preferences/?replytocom=733681 | CC-MAIN-2022-27 | refinedweb | 2,803 | 66.54 |
shutdown - shut down part of a full-duplex connection
Synopsis
Description
Errors
Notes
Bugs
Colophon
#include <sys/socket.h>
int shutdown(int sockfd, int how);.
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
POSIX.1-2001, 4.4BSD (the shutdown() function call first appeared in 4.2BSD).
The constants SHUT_RD, SHUT_WR, SHUT_RDWR have the value 0, 1, 2, respectively, and are defined in <sys/socket.h> since glibc-2.1.91.
As currently implemented, checks for the validity of how are done in domain-specific code, and not all domains perform these checks. Most notably, UNIX domain sockets simply ignore invalid values; this may change in the future.
connect(2), socket(2), socket(7)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/shutdown.2.php | CC-MAIN-2018-17 | refinedweb | 149 | 60.51 |
/* * .h 8.1 (Berkeley) 6/2/93 */ #ifndef _KVM_H_ #define _KVM_H_ /* Default version symbol. */ #define VRS_SYM "_version" #define VRS_KEY "VERSION" #include <nlist.h> #include <sys/cdefs.h> __BEGIN_DECLS typedef struct __kvm kvm_t; struct kinfo_proc; int kvm_close(kvm_t *); char **kvm_getargv(kvm_t *, const struct kinfo_proc *, int); char **kvm_getenvv(kvm_t *, const struct kinfo_proc *, int); char *kvm_geterr(kvm_t *); int kvm_getloadavg(kvm_t *, double [], int); char *kvm_getfiles(kvm_t *, int, int, int *); struct kinfo_proc * kvm_getprocs(kvm_t *, int, int, int *); int kvm_nlist(kvm_t *, struct nlist *); kvm_t *kvm_open (const char *, const char *, const char *, int, const char *); kvm_t *kvm_openfiles (const char *, const char *, const char *, int, char *); int kvm_read(kvm_t *, unsigned long, void *, unsigned int); int kvm_write(kvm_t *, unsigned long, const void *, unsigned int); __END_DECLS #endif /* !_KVM_H_ */ | http://opensource.apple.com/source/Libc/Libc-763.12/include/kvm.h | CC-MAIN-2016-18 | refinedweb | 118 | 54.05 |
#include <hallo.h> Josip Rodin wrote on Fri Nov 03, 2000 um 06:27:56PM: > > IMHO it's not too long, just one letter longer then "Programming". > > Programming is too long, too. :) > > > What alternatives do we have? "Filebrowser"? Of simply "Filesystem"? > > "Fileadmin"? > > Filesystem, or Disk, or Browser... Ok, lets take "Disk", so we need only four bytes ;) Nobody against subsection? Then I will suggest to take it into the Preferred menu structure (Cc' to Joost Witteveen). Apps/Disk: - tools to browse the file system (e.g. filemanagers) ** Microsoft: Where do you want to go today? Linux: Where do you want to go tomorrow? BSD: Are you guys coming or what? | https://lists.debian.org/debian-devel/2000/11/msg00235.html | CC-MAIN-2017-13 | refinedweb | 110 | 78.75 |
5.1.2. GPIO Control¶
Your OpenMV Cam has between 9 (OpenMV Cam M4) to 10 (OpenMV Cam M7) general purpose I/O pins onboard for talking to the real-world. We’re probably going to keep the pin count this low moving into the future to keep the OpenMV Cam tiny.
Anyway, there are a few ways to use GPIO pins.
5.1.2.1. As an Input¶
To use a GPIO pin as an input just do:
import pyb p = pyb.Pin("P0", pyb.Pin.IN) p.value() # Returns 0 or 1.
The
pyb.Pin() constructor creates a pin object which you will use to control
the I/O pin on your OpenMV Cam. The string you pass to the OpenMV Cam should be
P and then
0-8 for the OpenMV Cam M4 and
0-9 for the OpenMV Cam M7.
Once you’ve created the GPIO pin use the
pyb.Pin.value() method to get the
state of the IO pin.
Finally, if you need to pull-up or pull-down the IO pin pass
pyb.Pin.PULL_UP
or
pyb.Pin.PULL_DOWN as additional arguments to
pyb.Pin() constructor:
p = pyb.Pin("P0", pyb.Pin.IN, pyb.Pin.PULL_UP)
5.1.2.2. As an Output¶
Now, to use a GPIO pin as an output just do:
import pyb p = pyb.Pin("P0", pyb.Pin.OUT_PP) p.high() # or p.value(1) to make the pin high (3.3V) p.low() # or p.value(0) to make the pin low (0V)
It’s that easy! However, what if you want to open-drain an output? Just do:
p = pyb.Pin("P0", pyb.Pin.OUT_OD)
And now
pyb.Pin.high() will cause the pin to float while
pyb.Pin.low() will
pull the pin low. If you need a pull-up resistor on the Pin just add:
p = pyb.Pin("P0", pyb.Pin.OUT_OD, pyb.Pin.PULL_UP)
... to the constructor. | http://docs.openmv.io/openmvcam/tutorial/gpio_control.html | CC-MAIN-2018-22 | refinedweb | 323 | 86.3 |
Capital Budgeting. Chapter 8. The Capital Budgeting Decision Process. The capital budgeting process involves three basic steps:. Generating long-term investment proposals; Reviewing, analyzing, and selecting from the proposals that have been capital budgeting process involves three basic steps:
Accounting rate of return (ARR):focuses on project’s impact on accounting profits
Net present value(NPV):best technique theoretically; difficult to calculate realistically
Internal rate of return(IRR): widely used with strong intuitive appeal
Profitability index(PI):related to NPV
Account for the time value of money;
Account for risk;
Focus on cash flow;
Rank competing projects appropriately, and
Lead to investment decisions that maximize shareholders’ wealth.
depreciation
Average annual operating cash inflows
=
Average profits
after taxes
–Accounting Rate Of Return (ARR)
Can be computed from available accounting data
ARR uses accounting numbers, not cash flows;
no time value of money.
The payback period is the amount of time required for the firm to recover its initial investment.
Management determines maximum acceptable payback period.
Advantages of payback method:
Disadvantages of payback method:
Reject (46.3<50)Discounted Payback
NPV: The sum of the present values of a project’s cash inflows and outflows.
Discounting cash flows accounts for the time value of money.
Choosing the appropriate discount rate accounts for risk.
Accept projects if NPV > 0.
A key input in NPV analysis is the discount rate.
Western Europe project: NPV = $75.3 million
Southeast U.S. project: NPV = $25.7 millionNPV Analysis for Global Wireless
Should Global Wireless invest in one project or both?
Key benefits of using NPV as decision rule:
Though best measure, NPV has some drawbacks:
NPV is the “gold standard” of investment decision rules.
IRR: the discount rate that results in a zero NPV for a project.
The IRR decision rule for an investing project is:
Western Europe project: IRR (rWE) = 27.8%
Southeast U.S. project: IRR (rSE) = 36.7%IRR Analysis for Global Wireless
Global Wireless will accept all projects with at least 18% IRR.
Disadvantages of IRR:
Sometimes projects do not have a real IRR solution.
Modify Global Wireless’s Western Europe project to include a large negative outflow (-$355 million) in year 6.
Project is a bad idea based on NPV. At r =18%, project has negative NPV, so reject!
IRR
NPV (18%)
Western Europe
27.8%
$75.3 mn
Southeast U.S.
36.7%
$25.7 mnConflicts Between NPV and IRR:The Scale Problem
NPV and IRR do not always agreewhen ranking competing projects.
The scale problem:
Why the conflict?
Because of the differences in the timing of the two projects’ cash flows, the NPV for the Product Development proposal at 10% exceeds the NPV for the Marketing Campaign.
PV of CF (yrs1-5)
Initial Outlay
PI
Western Europe
$325.3 million
$250 million
1.3
Southeast U.S.
$75.7 million
$50 million
1.5Profitability Index
Calculated by dividing the PV of a project’s cash inflows by the PV of its initial cash outflows.
Decision rule: Accept project with PI > 1.0, equal to NPV > 0
Like IRR, PI suffers from the scale problem.
Methods to generate, review, analyze, select, and implement long-term investment proposals: | http://www.slideserve.com/varen/capital-budgeting | CC-MAIN-2017-30 | refinedweb | 527 | 58.69 |
SoCamera.3iv man page
SoCamera — abstract base class for camera nodes
Inherits from
SoBase > SoFieldContainer > SoNode > SoCamera
Synopsis
#include <Inventor/nodes/SoCamera.h>
#define SO_ASPECT_SQUARE 1.00
#define SO_ASPECT_VIDEO 1.333333333
#define SO_ASPECT_35mm_ACADEMY 1.371
#define SO_ASPECT_16mm 1.369
#define SO_ASPECT_35mm_FULL 1.33333
#define SO_ASPECT_70mm 2.287
#define SO_ASPECT_CINEMASCOPE 2.35
#define SO_ASPECT_HDTV 1.777777777
#define SO_ASPECT_PANAVISION 2.361
#define SO_ASPECT_35mm 1.5
#define SO_ASPECT_VISTAVISION 2.301
enum ViewportMapping {
SoCamera::CROP_VIEWPORT_FILL_FRAME
Crops the viewport within the current window, so that the aspect ratio matches that of the camera. As the window size changes, the aspect ratio remains unchanged. The cropped region is drawn as a filled gray area.
SoCamera::CROP_VIEWPORT_LINE_FRAME
Crops the viewport, but draws a thin frame around the viewport
SoCamera::CROP_VIEWPORT_NO_FRAME
Crops the viewport, but gives no visual feedback as to the viewport dimensions within the window
SoCamera::ADJUST_CAMERA Adjusts the camera aspect ratio and height to make it fit within the given window. (The camera's fields are not affected, just the values sent to the graphics library.)
SoCamera::LEAVE_ALONE Do nothing. Camera image may become stretched out of proportion
}
Fields from class SoCamera:
Methods from class SoCamera:
void pointAt(const SbVec3f &targetPoint)
virtual void scaleHeight(float scaleFactor)
virtual SbViewVolume getViewVolume(float useAspectRatio = 0.0) const
void viewAll(SoNode *sceneRoot, const SbViewportRegion &vpRegion, float slack = 1.0)
void viewAll(SoPath *path, const SbViewportRegion &vpRegion, float slack = 1.0)
SbViewportRegion getViewportBounds(const SbViewportRegion ®ion) const.
Fields
SoSFEnum viewportMapping
Defines how to map the rendered image into the current viewport, when the aspect ratio of the camera differs from that of the viewport.
SoSFVec3f position
The location of the camera viewpoint.
SoSFRotation orientation
The orientation of the camera viewpoint, defined as a rotation of the viewing direction from its default (0,0,-1) vector.
SoSFFloat aspectRatio
The ratio of camera viewing width to height. This value must be greater than 0.0. There are several standard camera aspect ratios defined in SoCamera.h.
SoSFFloat nearDistance
SoSFFloat farDistance
The distance from the camera viewpoint to the near and far clipping planes.
SoSFFloat focalDistance
The distance from the viewpoint to the point of focus. This is typically ignored during rendering, but may be used by some viewers to define a point of interest.
Methods
void pointAt(const SbVec3f &targetPoint)
Sets the orientation of the camera so that it points toward the given target point while keeping the "up" direction of the camera parallel to the positive y-axis. If this is not possible, it uses the positive z-axis as "up."
virtual void scaleHeight(float scaleFactor)
Scales the height of the camera. Perspective cameras scale their heightAngle fields, and orthographic cameras scale their height fields.
virtual SbViewVolume getViewVolume(float useAspectRatio = 0.0) const
Returns a view volume structure, based on the camera's viewing parameters. If the useAspectRatio argument is not 0.0 (the default), the camera uses that ratio instead of the one it has.
void viewAll(SoNode *sceneRoot, const SbViewportRegion &vpRegion, float slack = 1.0)
void viewAll(SoPath *path, const SbViewportRegion &vpRegion, float slack = 1.0)
Sets the camera to view the scene rooted by the given node or defined by the given path. The near and far clipping planes will be positioned slack bounding sphere radii away from the bounding box's center. A value of 1.0 will make the clipping planes the tightest around the bounding sphere.
SbViewportRegion getViewportBounds(const SbViewportRegion ®ion) const
Returns the viewport region this camera would use to render into the given viewport region, accounting for cropping.
static SoType getClassTypeId()
Returns type identifier for this class.
File Format/Defaults
This is an abstract class. See the reference page of a derived class for the format and default values.
See Also
SoOrthographicCamera, SoPerspectiveCamera, SoCameraKit | https://www.mankier.com/3/SoCamera.3iv | CC-MAIN-2017-39 | refinedweb | 621 | 51.44 |
On 12.10.16 09:31, Nathaniel Smith wrote:
But amortized O(1) deletes from the front of bytearray are totally different, and more like amortized O(1) appends to list: there are important use cases[1] that simply cannot be implemented without some feature like this, and putting the implementation inside bytearray is straightforward, deterministic, and more efficiently than hacking together something on top. Python should just guarantee it, IMO.
Advertising-n [1] My use case is parsing HTTP out of a receive buffer. If deleting the first k bytes of an N byte buffer is O(N), then not only does parsing becomes O(N^2) in the worst case, but it's the sort of O(N^2) that random untrusted network clients can trigger at will to DoS your server.
Deleting from buffer can be avoided if pass the starting index together with the buffer. For example:Deleting from buffer can be avoided if pass the starting index together with the buffer. For example:
def read_line(buf: bytes, start: int) -> (bytes, int): try: end = buf.index(b'\r\n', start) except ValueError: return b'', start return buf[start:end], end+2 _______________________________________________ Python-Dev mailing list Python-Dev@python.org Unsubscribe: | https://www.mail-archive.com/python-dev@python.org/msg94364.html | CC-MAIN-2018-05 | refinedweb | 205 | 57.2 |
I have a scenario where I am transferring a file from desktop to removable media (with or without FRP encrypted/decrypted/installed) and am expecting DLP to block the transfer. DLP does throw up the block notification but it does not actually block the transfer.
The DLP event on the server shows that it did not block.
Actual Action: No Action
Expected Action: Block
Does anyone know how to determine why this happens?
Hello
that works as designed.
Block actually means delete afterwards.
File will be copied to the drive and after this action will happen, dlp can make action defined in the rule.
The thing is that the device must be reachable by the DLP. So in case that popup will appear and you will unplug the drive and after that click on block, then no action will happen because the file cannot be deleted by DLP, because the drive isnt connected.
There is no justification or user choice available. The rule action is simply "Block." I would expect it to be copied and immediately deleted or not copied at all.
The thing is that in background after successful copy you can see FCAG.exe consuming CPU what actually means that DLP is doing the classification of the files and after that will decide what to do with the file. During whole process the file must remain reachable for the DLP processes.
If you are using keywords in classification then it takes time until DLP will finish analysis of all files. Also in client configuration in section for removable media devices you can find timeout after which the analysis will end and allow the copy of the file. That you can see in DLP incident manager as time out.
Instead of using Data Protection Rules, Device Control is more useful where read only can be set for recognized removable media devices. MTP devices are excluded from this rule unfortunately (mobile phones, tablets...) for this devices only Data Protection rules can make something like read only access.
No action can also happen when user will terminate the copy.
Definitely the workflow is:
User copy data to USB, data are stored to USB, DLP will start analysis of the copied files, does signification and rules check. If the rule is matched then will do the configured action. During the whole operation files must be reachable for the DLP.
What you are saying does not mesh with what I am seeing. The file I am testing with is a 1kb text file with a trigger string in it. It's lightweight and quick. My guess is that I have something misconfigured.
I can't control by specific device due to the fact that I need to allow so many models of removable media. In fact that is one of the issues I have with DLP. The device control rules don't work with data control rules which is really what I need. I can already control device access via other means. What I need is to simply find data outbound to removable media and block/delete if a rule is triggered.
I think I will ask support.
As per my understanding, you have created DLP rule(Removable storage protection rule) & its configured in block mode with all file types in classification.
If yes, it should block the file transfer from Desktop to External device. but it is not blocking.
Expected action : (Block)action which is defined in rule
Actual action : action taken on endpoint while file copy.
I suspect, this is because of wrong configuration in Windows client configuration policy(Policy catalog --> Data loss prevention 11.x --> Windows client configuration policy). Check the below settings,
1. Check Data protection is enabled or not(Device control with full content tracking)
2. Check the analyzing time & action defined if time exceeds threshold in Removable media session.
Kindly let me know if you have any other queries...
Here are my settings:
Policy catalog --> Data loss prevention 11.2 --> Windows client configuration policy
Device Control: block and allow charge, enforce immediately
Operational Mode and Modules: Device Control and full content protection, everything checked except for discovery - email, and outlook section.
Removable Storage Protection: normal delete mode, file analysis max time = 30 seconds, if time exceeded = block.
Interesting logs (there is no StormShield):
c:\programdata\mcafee\dlp\temp\logs\session#1\HDLP_agent_(11.9.2019)(8.17-12).log
2019-09-11 13:00:19.728 [10992] [WARNING][AgentStormShieldService::isPrerequisitesInstalled(301)]> Failed to find StromShield Util path in the registry
2019-09-11 13:00:19.728 [10992] [ERROR][AgentStormShieldService::checkConnectedUser(184)]> StormShield product doesn't installed
2019-09-11 13:00:19.728 [10992] [ERROR][AgentStormShieldService::getFileInfo(109)]> Unable to get connected user
2019-09-11 13:00:19.728 [10992] [ERROR][AgentStormShieldService::isFileEncrypted(158)]> getFileInfo failed for file c:\users\<username>\desktop\dlptest.txt user
2019-09-11 13:00:20.332 [10992] [OERROR] [Monitoring Service] [EvidenceService::renameEvidenceFileToRepBufFile] Error rename file in repbuf: {00000000-0000-0000-0000-000000000000}.xml.dlpenc.rep
c:\programdata\mcafee\dlp\temp\logs\session#1\HDLP_te_(11.9.2019)(8.17-18).log
2019-09-11 13:00:20.259 [10340] [OERROR] [Rights Management Service] [`anonymous-namespace'::myLoadFailureHook] Cannot find a DLL to load ("msipc.dll")
2019-09-11 13:00:20.259 [10340] [OERROR] [Rights Management Service] [McAfee::DLP::RMS::ADRMSWrap::init] Cannot load the required DLL.
2019-09-11 13:00:20.259 [10340] [OERROR] [Rights Management Service] [MsDRMTextExtractorHelper::ensure_ready] Cannot initialize AD RMS; DRM error=18.
2019-09-11 13:00:20.693 [10340] [OERROR] [Text Extractor] [KvManager::createKvFile] Failed to fpOpenFile (c:\users\<username>\appdata\roaming\microsoft\windows\recent\automaticdestinations\f01b4d95cf55d32a.automaticdestinations-ms). Error - 13. Time Spent: 0 milliseconds
based on the logs I am tempted to completely remove and then reinstall agent and DLP on this test machine.
Corporate Headquarters
6220 America Center Drive
San Jose, CA 95002 USA
Consumer Support | Enterprise Support | McAfee.com for Enterprise
Legal | Privacy | Copyright © 2021 Musarubra US LLC | https://community.mcafee.com/t5/Data-Loss-Prevention-DLP/DLP-Actual-action-vs-Expected-Action/m-p/634799 | CC-MAIN-2022-21 | refinedweb | 998 | 57.98 |
View Poll Results: If you read it, did you find DirectJNgine User's Guide adequate?
- Voters
- 54. You may not vote on this poll
Yes
No
The paramsAsHash: true option is not supported, because I felt it will force us to receive values of unknown types, and I was concerned we might end up writing Java code that receives Object parameters all around (unknow type=> need to use Object as parameter type=> we loose Java's type safety) .
Type safety is one of the reasons for using Java at the server side, else why not using a more dynamic server side language?
Therefore, I'm trying to do writing that kind of code difficult -but not impossible.
That said, can you please provide a pair of concrete use cases in which paramsAsHash:true is needed? Will be glad to re-check my assumptions.
And, can you write here your client side call to 'load'?
I feel it is possible to get a great degree of flexibility without having to use the paramsAsHash: true thing.
Regards,
Pedro
Hi Pedro,
As we already started using DirectJNgine in our project (an open source ERP), and as you propounded the July, 22 for the final 1.0 release, can you tell us please something about the release date of the final DirectJNgine 1.0.
Never mind the date, just to know for planing.
Thank you very much.Best Regards,
Ramzi Youssef
MEDIACEPT Technology
We are releasing DirectJNgine 1.0 final today!
Though we had planned to release it in July 22, we decided to add some features that were planned for 1.1 to 1.0, moving a bit the release date: client-side parameter checking and direct JSON handling.
We hope they will be worth the one week delay!
To get DirectJNgine, go to
New: added support for handling JSON directly
From the User's Guide:
We have made every effort to handle serialization from JSON to Java, so that you can write methods that receive good old Java data types. However, there can be cases when you might need to access the JSON data directly for maximum flexibility...
New: client-side parameter checking (debug only)
We provide many client-side parameter checks. This is optional, and intended to be used in debug mode only.
Take a look at the User's Guide chapter on parameter checking for further info.
Improved: the User's Guide has grown to more than 40 pages.
We have added several new chapters:
- 4. Configuring a new project to use DirectJNgine.
- 9. Servlet configuration parameters.
- 11. Handling JSON data directly.
- 12. Checking client-side calls.
The chapter dedicated to form handling has been rewritten and enhanced to use the form 'api' configuration parameter.
New: additional servlet initialization parameters
- actionsNamespace: This allows us to provide a namespace for Direct actions.
- minify: this allows us to disable api file minification.
Take a look at the User's Guide for an explanation
Improved: fixes and minor enhancements
- Addressed issue with internationalization.
- Improved build.xml
...
Code breaking changes
The 'namespace' servlet intialization parameter has been renamed to 'apiNamespace'. In DirectJNgine 1.0 we have two "namespaces" now, one for apis and another one for actions, and we felt 'namespace' was misleading.
For further info, take a look at the User's Guide documentation on both 'apiNamespace' and 'actionsNamespace'.
...
Note on JDK 1.4 support
We have decided not to support JDK 1.4.
Unfortunately, we use many 1.5 features, especially generics.
Besides, we rely on several libraries that are 1.5 specific, such as Gson. We are very happy with Gson's performance and flexibility, and we feel it would be an important loss for DirectJNgine users not to use it for JSON handling.
...
If you want to take a look at the main DirectJNgine features, I recommend you take some time to read prior postings related to beta and RC versions in this thread, or just take a look at the "Features" chapter in the User's Guide.
...
Regards,
Pedro agulló
Thank you for the final 1.0 version. Now we can continue to use DirectJNgine and looking ahead.
Thank you to this feature that resolves an OpenJDK bug in Ubuntu (from the user guide, page 37):
By the way, it is highly unlikely that minification fails: we use the YUI ompressor, a very well tested minifier. However, if the YUI Compressor raises some exception or reports some error, we make sure that the minified file will contain at least standard code, so that your application does not break because there is no “-min.js” file.
Best Regards,
Ramzi Youssef
MEDIACEPT Technology
Ext.Direct Pack
Ext.Direct Pack
Hi,
I have a question for the ExtJS team:
Allowing that the DirectJNgine is in the final and stable release, can you add please in the Ext.Direct Pack (on ExtJS download page) a java folder with DirectJNgine (as cfml, DotNet, php and ruby) ?
Thank you for your response.Best Regards,
Ramzi Youssef
MEDIACEPT Technology
Hi
I lost lot of time on my DirectStore this afternoon, but i dont known if it is a bug.
So i have my store which call my method. This is ok.
My remote method is :
Code:
@DirectMethod public Status getLastStatus(){ return StatusList.getLast(); }
Code:
protected class Status{ private String message; private Date time; private int avancement; ...... }
Code:
{"result":{"message":"test2","time":"3 août 2009 17:37:24","avancement":6}, "tid":2,"action":"ThreadStatus","method":"getLastStatus","type":"rpc"}
Code:
{"result":[{"message":"test2","time":"3 août 2009 17:37:24","avancement":6}], "tid":2,"action":"ThreadStatus","method":"getLastStatus","type":"rpc"}
By the way, this happens when a method returns only one object, and not a list of object.....) | http://www.sencha.com/forum/showthread.php?73027-Ext-Direct-Java-based-implementation/page6 | CC-MAIN-2015-06 | refinedweb | 952 | 64.91 |
LINQ to SQL
LINQ to SQL is a powerful tool which allows developers to access databases as objects in C#. With LINQ to SQL, you are able to use LINQ query operators and methods instead of learning SQL. LINQ to SQL has an API for connecting and manipulating a database. LINQ queries and calls to the API methods are then translated to SQL commands which are executed to provide changes or to retrieve queries from the database. LINQ to SQL is a more modern way of accessing database using C# and .NET. Please do note that LINQ to SQL is only applicable when you are using SQL Server as your database.
We will be using the Northwind database as our sample database. If you are using the free Visual C# Express then we need to access the actual database file which has a .mdf file extension. If you installed the Northwind database correctly, the file can be found in C:SQL Server 2000 Sample Databases. If the file extension is not visible, go to Control Panel, choose Folder Options and click the View tab, and uncheck “Hide extensions for known file types”, then click OK. We will be using the Northwind.mdf database file. Database files are created with .mdf extensions whenever you create a database in SQL Server. SQL Server Express 2008 has a default folder for storing database files and it is located at C:Program FilesMicrosoft SQL ServerMSSQL.1MSSQLData. So if you executed the scripts for creating the Northwind database as instructed in the first lessons, then you can also find a copy of Northwind.mdf here. We can proceed if once you have possession of the Northwind.mdf file.
Visual Studio has a great tool for generating LINQ to SQL classes with the use of Object Relational Designer. You can simply drag and drop tables there and Visual Studio will automatically create the necessary classes the corresponds to each table and rows of the specified table and database. You will then see the tables with properties corresponding to its columns or fields. Arrows will also be visible representing relationships between tables.
Figure 1 – Object Relational Designer showing tables, fields and relationships.
Classes will be created that represents the rows of each table.For example, we have an Employees table. Visual Studio will automatically singularize the table’s name and an object named Employee will be created that will represent each row or records of the particular table.
A corresponding class of type Table<TEntity> of the System.Data.Linq namespaces will be created for every table included in the LINQ to SQL Designer. The TEntity is replaced with the class of the row it contains. For example, the Employees table will have a corresponding class of Table<Employee> class. An object of that class will then be created containing a collection of objects for each of its rows or records. Table<TEntity> implements IQueryable<TEntity> interface of the System.Linq namespace. When a LINQ queries an object that implements this interface and obtains results from a database, the results are automatically stored to the corresponding LINQ to SQL classes.
For related tables which are connected to each other via foreign keys, for each foreign key a table has, Visual Studio creates a corresponding property with the type and name similar to each row of the table the foreign key points to. Additionally, for a table whose primary key(s) are used as a foreign key by other tables, for each of those foreign tables, a property is also created. These additional properties allow you to call the properties of the foreign table of a current table. For example, two tables named Employees and Companies table both have CompanyID fields. The CompanyID is the primary key of the Companies table, and the CompanyID field of the Employees table is a foreign key pointing to the CompanyID of the Companies table. When Visual Studio creates the corresponding row class for each table, it will also consider the foreign keys. The Employee class for the Employees table will have an additional Company property since one of its columns points to the Companies table. The Company class of the Companies table will have an additional Employee property because the Employees table points to the Categories table.
LINQ to SQL also creates a DataContext class which inherits from the System.Data.Linq.DataContext. This class will be responsible for connecting the program and the database. The objects created for each table that you include in the LINQ to SQL designer becomes a property of this class. Visual Studio will automatically create the DataContext in a format <Database>DataContext where <Database> is the name of the database. For example, using our Northwind database, a NorthwindDataContext will be created with properties corresponding to each table we have included. These properties contain collections of objects representing the rows of each table. For example, our NorthwindDataContext class will have an Employees property which corresponds to the Employees table. This property is a collection of Employee objects representing each row of the table.
The next lesson will show you an example of using LINQ to SQL and connecting your application to the Northwind database using this technology. | https://compitionpoint.com/linq-to-sql/ | CC-MAIN-2021-31 | refinedweb | 871 | 54.22 |
This library can be installed using the unity package manager system (Unity >= 2018.4) with git
- In your unity project root open ./Packages/manifest.json
- Add the following line to the dependencies section "com.mixpanel.unity": "",
- Open Unity and the package should download automatically
Alternatively you can go to the releases page and download the .unitypackage file and have unity install that.
To start tracking with the Mixpanel Unity library, you must first initialize it with your project token. You can find your token by clicking your name in the upper righthand corner of your Mixpanel project and selecting Settings from the dropdown.
Configuring Mixpanel
To initialize the library, first open the unity project settings menu for Mixpanel. (Edit -> Project Settings -> Mixpanel) Then, enter your project token into the Token and Debug Token input fields within the inspector.
NOTE
You have the option to provide different tokens for debug and production builds of your project. Keeping data sets separate is important to maintain the integrity of the metrics you’re tracking with Mixpanel. It’s very easy to prevent these data sets from commingling, but hard to disentangle, so taking time up front is well worth it. First, create two separate Mixpanel projects – a "Production" project and a "Debug" project (Mixpanel doesn’t limit the number of projects you can use). Then, you can enter your "Production" and "Debug" project tokens into the Token and Debug Token input fields respectively.
C Sharp
Once you've initialized the library with your project token, you can import Mixpanel into your code using the mixpanel namespace.
Sending your First Event
Once you've initialized the library, you can track an event using Mixpanel.Track() with the event name and properties.
var props = new Value(); props["Gender"] = "Female"; props["Plan"] = "Premium"; Mixpanel.Track("Plan Selected", props);
You can track the time it took for an action to occur, such as an image upload or a comment post, using Mixpanel.StartTimedEvent This will mark the "start" of your action, which you can then finish with a track call. The time duration is then recorded in the "Duration" property.
It's very common to have certain properties that you want to include with each event you send. Generally, these are things you know about the user rather than about a specific event—for example, the user's age, gender, or source.
To make things easier, you can register these properties as super properties. If you do, we will automatically include them with all tracked events. Super properties are saved to device storage, and will persist across invocations of your app.
To set super properties, call Mixpanel.Register.
// Send a "User Type: Paid" property will be sent // with all future track calls. Mixpanel.Register("User Type", "Paid");
Going forward, whenever you track an event, super properties will be included as properties. For instance, if you call
after making the above call to Mixpanel.Register, it is just like adding the properties directly:
var props = new Value(); props["signup_button"] = "test12"; props["User Type"] = "Paid"; Mixpanel.Track("signup", props);
Setting Super Properties Only Once
If you want to store a super property only once (often for things like ad campaign or source), you can use Mixpanel.RegisterOnce. This function behaves like Mixpanel.Register and has the same interface, but it doesn't override super properties you've already saved.
This means that it's safe to call Mixpanel.RegisterOnce with the same property on every app load, and it will only set it if the super property doesn't exist..');
NOTE
Calling Mixpanel.Identify with a new ID will change the distinctID stored on the device. Updates to user profiles are queued on the device until identify is called.
Combining Anonymous User Data. Mixpanel.Alias("13793");
The recommended usage pattern is to call both Mixpanel.Alias and Mixpanel.Identify (with the Mixpanel generated distinct ID, as shown in the example above) when the user signs up, and only Mixpanel.Identify (with the aliased user ID) on future log ins. This will keep your signup funnels working correctly.
NOTE
If you use Mixpanel.Alias, we recommend only calling it once during the lifetime of the user.'s User Analytics product., or push notifications.
NOTE
Before you send profile updates, you must call Mixpanel.Identify. This ensures that you only have registered users saved in the system.
Setting Profile Properties
You can set properties on a user profile with Mixpanel.people.Set.
// mixpanel identify: must be called before // user profile properties can be set Mixpanel.Identify("13793"); // Sets user 13793's "Plan" attribute to "Premium" Mixpanel.People.Set("Plan", "Premium");."
NOTE values cannot be longer than 255 characters. In practice they should be much shorter than that. Property names get cut off by our user interface at about 20 characters.
Click here to see a list of Mixpanel's reserved user profile properties.
Incrementing Numeric Properties
You can use Mixpanel.people.Increment to change the current value of numeric properties. This is useful when you want to keep a running tally of things, such as games played, messages sent, or points earned.
// Here we increment the user's point count by 500. Mixpanel.People.Increment("point count", 500);
Other Types of Profile Updates
There are a few other types of profile updates. To learn more, please see the full API reference.
Mixpanel makes it easy to analyze the revenue you earn from individual customers. By associating charges with user profiles, you can compare revenue across different customer segments and calculate things like lifetime value.
You can track a single transaction with Mixpanel.people.TrackCharge. This call will add transactions to the individual user profile, which will also be reflected in the Mixpanel Revenue report.
// Make sure identify has been called before making revenue // updates Mixpanel.Identify("13793"); // Tracks $100 in revenue for user 13793 Mixpanel.People.TrackCharge(100); // Refund this user 50 dollars Mixpanel.People.TrackCharge(-50); // Tracks $25 in revenue for user 13793 on the 2nd of // January var props = new Value(); props["time"] = "2012-01-02T00:00:00"; Mixpanel.People.TrackCharge(25, props);
The Mixpanel library includes support for sending iOS push notification device tokens to Mixpanel. Once you send a device token, you can use Mixpanel to send push notifications to your app.
NOTE
Android push notifications are not supported through this Mixpanel library or through the Unity Notification Services class.
You can send a device token to Mixpanel from the Unity NotificationServices.RegisterForNotifications class, using Mixpanel.people.PushDeviceToken.
using UnityEngine; using System.Collections; using NotificationServices = UnityEngine.iOS.NotificationServices; using NotificationType = UnityEngine.iOS.NotificationType; using mixpanel; public class NotificationRegistrationExample : MonoBehaviour { bool tokenSent; void Start() { tokenSent = false; NotificationServices.RegisterForNotifications( NotificationType.Alert | NotificationType.Badge | NotificationType.Sound); } void Update () { if (!tokenSent) { byte[] token = NotificationServices.deviceToken; if (token != null) { // Make sure identify has been called // before sending a device token. Mixpanel.Identify("13793"); // This sends the deviceToken to Mixpanel #if UNITY_IOS Mixpanel.People.PushDeviceToken = token; #endif tokenSent = true; } } } } | https://developer.mixpanel.com/docs/unity | CC-MAIN-2019-51 | refinedweb | 1,156 | 50.63 |
We!
Get it here
What’s new.
Documentation of the key DACFx API namespaces involved in extensibility scenarios can be found on MSDN here. Additionally, walkthrough samples for the creation of build and deployment contributors can be found in the SSDT documentation here.
Updated Data-Tier Application Framework
This release includes the June 2013 release of SQL Server Data-Tier Application Framework (DACFx), which contains several feature enhancements and bug fixes.
Related information:.
Supporting links
SQL Server 2014 CTP1 –
Please provide your feedback and ask us questions on our forum or via Microsoft Connect. We look forward to hearing from you.
Join the conversationAdd Comment
Hi Janet,
There's some interesting sounding stuff here. Is there any additional documentation available regarding the Deployment Plan Modifier? I'd like to know how and where to implement these things as I'm hoping they are a solution to many of the limitations of DAC deployments.
thanks
Jamie
Fantastic! We've been anxiously awaiting the Data Compare tool. This should be super helpful in keeping all the workflow within Visual Studio.
Very Cool! The team is really pushing out updates quick! SSDT really helps me out a lot and is a great product. Thanks so much.
I just downloaded the new June 2013 update and noticed a couple new Build Actions on .sql files: "Build Extension Configuration" and "Deployment Extension Configuration"- what do these do?
Previously, I was having problems with SQL71502 errors on a stored procedure that referenced a linked server via a merge statement. The SQL71502 error prevented the projected from building, and removing the .sql file from the build prevented the schema compare tool from seeing that the file was already in the project. This was a problem that I could get around but was pretty annoying. NOW, with the new update, it changed the SQL71502 error to a warning so my project builds and the schema compare works too. Thanks a million!
Any Data Generation announcements here or plans for that in the near future or releases?
Thanks.
Hassan
Visual studio, SSDt check for updates does not present the June 2013 release as an available update.
The DACFx direct download links go.microsoft.com/fwlink and go.microsoft.com/fwlink seems to still point to the Mar 2013 update.
@dirq – thanks for the positive feedback and we're happy about the value you're getting from SQL Server!
@Hassan – we've updated the blog post to be more clear about Data Generation, which is not included in the current or any future release. Bummer!
@Srinivasn Raju – we'll update our "check for updates" pipeline in the next few days to prompt existing users.
thanks – Erick
I'm happy this is finally here. I'll use the connect feature to request that the data compare add the "Use [database]" in the script it generates because I had a "DOH" moment when first trying to use the script and forgetting to change to the database after making the connection.
Thanks again as this was holding up the Visual Studio 2012 deployment.
Hi,.
Thanks,
Anjam
I am trying to install SSDT-BI for VS 2012 but it gets to the "Installation Configuration Rules" page and then fails the "Same architecture installation" test. When I click the link for more information, the dialog box says, "The CPU architecture of installing feature(s) is different than the instance specified. To continue, add features to this instance with the same architecture." The instance I am trying to install it to is SQL Server 2012 Developer Edition. Is there anything special I have to do?
I think I figured out the solution to my comment above — when the installer prompts you to select "shared installation" vs "database instance-specific installation" make sure you select the shared installation option. Then it is able to continue the installation.
What third party tooling is recommended for data generation plans? Data Generation is not included and is not planned to be released in the future. Third party tooling will have to be relied upon for that functionality.
(bump).
Hi Janet,
Very well done. SSDT is a fantastic tool bringing me huge value, and I am loving it since the first release! And now I cannot even think about my development experience without it…
Please keep up with this outstanding work an continue updating it: this update and hearing you will continue with this project makes me very happy.
I am also very interested in SSDT extensibility: could you point me to some docs about it?
Thank you very much
Alberto
I can't install this update. Does anyone have any ideas? There's an error on installation: "Element not found". Looking at the log file shows these lines:
[169C:1688][2013-06-28T11:08:29]: Error 0x80070490: Failed to verify expected payload certificate with any certificate in the actual certificate chain.
[169C:1688][2013-06-28T11:08:29]: Error 0x80070490: Failed to verify signature of payload: DACFX11X86
[169C:1688][2013-06-28T11:08:29]: Failed to verify payload: DACFX11X86 at path: C:ProgramDataPackage Cache.unverifiedDACFX11X86, error: 0x80070490. Deleting file.
[169C:1688][2013-06-28T11:08:29]: Error 0x80070490: Failed to cache payload: DACFX11X86
Guys, you tire me with your idea that on-line installers rules the world. You made that thing for Visual Studio Updates, and now you made it for SSDT updates. Sadly.
SSDT December 2012 was distributed as ISO. VS Update 2 & 3 are available as ISO. Why don't you release SSDT June 2013 as ISO?
And don't suggest me using sluggish "SSDTSetup.exe /layout <destination>". Because SSDTSetup.exe requires Windows Vista (when I run it on Windows XP I've got an error… something like "entry point to FlsFree not found in kernel32.dll"; FlsFree is available on Vista and above).
My working PC has Windows 7 SP1 but it DOESN'T have Internet access. The only PC with Internet access I have is Windows XP PC. Where I CAN'T run your "downloader"! So tell me: why don't you share ISO or something like this so one can download that WITHOUT requiring for Windows Vista! Why I can't _download_ something from OS other than you dictate? Downloading an update on "PC 1" doesn't mean, dear MSFT, that I intend to install it there!
Yes, SSDT is great tool! I agree with comments above. But to distribute it in the way MSFT does — is real bad idea. Sorry, but I'm fed up with you!
@Jamie @Anjam @Piggy Documentation and walkthroughs for Extensibility, Build and Deployment Contributors and all other new features will be available soon. Sorry for the delay!
@Brigadir ISO downloads coming early next week
Hi Janet,
Thanks for letting us know. I can not wait for the docs.
Regards,
Anjam
Hi Janet,
Thank you very much. Will check the SSDT pages to get the docs!
Have a great weekend
Alberto
If you install the VS 2012 update, it breaks 2010. So, if you install one, install both. Assuming you're running a side-by-side installation.
@Brigadir totally agree with you, the concept how MSFT is releasing software is like a kid that discovers each week a new toy. The problem however with .ISO is that their OS didn't support it and one was forced to install a spam software like Daemon Tools (even promoted by MSDN).
Never the less, I have installed this update and I can't find the data compare. In VS2012 it says after SQL Data Tools 11.1.21208.0 on the download page for SSDT June I see this version: 11.0.3369.0. Now that SSDT June must be the latest version I wonder how my version number can be newer than the one available for download.
@nojetlag I use Virtual CloneDrive to mount ISOs, it's not spammy at all. Try that.
@Jamie @Anjam @Piggy – Documentation of the new/updated DACFx APIs is available on MSDN here – msdn.microsoft.com/…/bb522480.aspx. The core extensibility functionality is contained in the Dac, Model, and Deployment namespaces. Walkthroughs for the creation of build and deployment contributors are also available on the SSDT documentation here – msdn.microsoft.com/…/dn268597(v=vs.103).aspx. Sorry for the delay in getting the documentation posted, thanks for your patience.
Thanks,
Adam
No test data generator yet? 🙁
@nojetlag
>The problem however with .ISO is that their OS didn't support it.
It ain't a problem. MSFT doesn't want to use ISO? Well, release your updates as ZIP archive! Instead of force me to use MSFT proprietary "download software" give me a DIRECT link to a ZIP archive that contains everything "SSDTSetup.exe /layout <destination>" downloads. So I will not do that by myself to download the update from PC with "unsupported OS".
nojetlag, I've downloaded the update with "SSDTSetup.exe /layout <destination>" on Windows 7 PC. After I've installed the update on target (another) PC (also Windows 7 SP 1 ENU) Visual Studio 11 shows version 11.1.30618.1 for SSDT. As the OS control panel does. But "on the download page for SSDT June" I see no version number 🙁
Data comparison is here (inside VS 11): open "SQL Server Object Explorer" window, expand "SQL Server" node, expand "Your-SQL-server-name" node, then — "Databases" node. Right click on a database name child node under "Databases" node — the third menu item from the top is "Data Comparison…" Is that what you were looked for?
@Janet Yeilding
>ISO downloads coming early next week
The second half of the "next week" is coming. Where's ISO you promised? Or "next week" is the week starting on 7/8/2013 (monday)?
Finally Installed the June SSDT update. Tried data compare – it looks like the comparison was spot on. Then I told it to update the target… Once it finally got to 'writing updates' Visual Studio puppoed up with the Visual Studio 2012 Stopped Working alert.
I guess I'll wait a few revisions. 🙁
My build server doesn't have internet access either. ISO please…
"@Brigadir ISO downloads coming early next week"
So "early next week" comes and goes and still no sign of an ISO download.
ISOs are available for download at these locations:
msdn.microsoft.com/…/jj650015
msdn.microsoft.com/…/jj650014
Janet,
Great work on the update and thanks for the ISO file. It makes enterprise deployments much easier. In the post you state "This release includes the June 2013 release of SQL Server Data-Tier Application Framework (DACFx), which contains several feature enhancements and bug fixes." Does this mean the updates for DACFx are included in the install or are there separate MSI's for these updates? It's my understanding that I needed to install SSDT and then all 6 of the MSI's from this download page.…/details.aspx However, it appears these were last updated in March.
Thanks! – Brad
@Brad SSDT will automatically chain in the June 2013 version of DACFx during installation, so you don't need to install any MSIs separately. The link you provided to the March release is not the most recent DACFx version – that's available here:…/details.aspx. You can read more about the features and fixes in June DACFx release here: blogs.msdn.com/…/sql-server-data-tier-application-framework-june-2013-available.aspx
@Janet Yeilding "ISOs are available for download"
Finally. Thanks a lot!
Looks really cool, and the tool is almost perfect. One thing I miss is the option to "IgnoreColumnOrder" when doing a schema compare!
Could you please give us back that option?
Thanks
Kenneth
Would like to see a strategy for managing sample data sets along side schema in a future release. Should we log this on the Visual Studio user voice site?
HI All,
Please note the following while installing SSDT BI:
. You can still use the existing version of SSDT BI to create and deploy projects for SQL Server 2014 CTP1.
· SSDT BI cannot be installed on the same machine as SQL Server 2014 CTP1.
· To deploy to a SQL Server 2014 CTP1 Server, install SSDT-BI on different machine.
Reference: blogs.msdn.com/…/how-do-i-get-sql-server-data-tools-business-intelligence-ssdt-bi-for-sql-server-2014-ctp1-bits.aspx
Hi,Nice demonstration about Announcing SQL Server Data Tools.Thanks.
<a href="theosoftindia.com/…/a>
Question, I just installed this update and my Visual Studio projects that were building with warning now just fail.
It's mostly due to other developers not fully qualifying ambiguous names. Is there any way to set those types of errors to warnings again?
Yanet, we are highly interested in the Visual Studio data tooling features you mentioned, especially for data deployment requirements. Any news on this?
What would be the best way to validate hundreds of SQL Server databases (schema, etc.) in prod environment? Would like to avoid installing VS.NET and running one by one. 😉
What is so cool about this? In previous versions of Visual Studio we practically had SQL Server Management Studio built-in. Now it gets replaced with this primitive tool in which you can no longer do things graphically. Now it is almost all command line. This is like going from Windows back to DOS.
I have just used data compare and synchronize for my team. It's actual fast and cool.
How do you set the command timeout for the sql data compare ? it constantly times out after 30 mins when updating to a windows azure database. Our internet connection is 40 mbs uload speed so quite fast.
Is there a way or are you planning on adding a way to apply rules to the data comparison? i.e. when comparing decimal values in two db's we only really care to the Nth decimal place (due to inconsitency in calculations) so would want to compare rounded values rather than the actual values in the db table.
You can try Devart's dbForge Data Compare for SQL Server –…/datacompare | https://blogs.msdn.microsoft.com/ssdt/2013/06/24/announcing-sql-server-data-tools-june-2013/ | CC-MAIN-2019-04 | refinedweb | 2,337 | 66.44 |
Choose the most appropriate alternative from the options given below to complete the following
sentence:
Despite several ––––––––– the mission succeeded in its attempt to resolve the conflict.
Choose the most appropriate alternative from the options given below to complete the following
sentence:
Suresh’s dog is the one ––––––––– was hurt in the stamped
y =2x - 0.1x2
dy/dx = 2-0.2x
d2y/dx2 <0
y max imises at 2- 0.2x= 0
x =10
y= 20 - 10 =10m
Given the sequence of terms, AD CG FK JP, the next term is _________ .
Q. 1 – Q. 5 carry one mark each.
The cost function for a product in a firm is given by 5q2, where q is the amount of production. The
firm can sell the product at a market price of
50 per unit. The number of units to be produced by
the firm such that the profit is maximized is
P = 50q - 5q2
dp/dq = 50-10q; d2p/dq2 <0
p is maximum at 50- 10q = 0 or q=5
Else check with options
Which one of the following options is the closest in meaning to the word given below?
Mitigate
Choose the grammatically INCORRECT sentence:
Q. 6 - Q. 10 carry two marks each.
W anted is not mentioned in the advertisement and (B) clearly eliminated
Which of the following assertions are CORRECT?
P: Adding 7 to each entry in a list adds 7 to the mean of the list
Q: Adding 7 to each entry in a list adds 7 to the standard deviation of the list
R: Doubling each entry in a list doubles the mean of the list
S: Doubling each entry in a list leaves the standard deviation of the list unchanged
P and R always hold true
Else consider a sample set {1, 2, 3, 4} and check accordingly
Q. 11 – Q. 35 carry one mark each.
Q.?” ) ; }
A ssuming P ≠ NP, which of the following is TRUE?
T he worst case running time to search for an element in a balanced binary search tree with n2n
elements is
The truth table
The decimal value 0.5 in IEEE single precision floating point representation has
A process executes the code
fork();
fork();
fork();
The total number of child processes created is
Consider the function f(x) = sin(x) in the interval x
[π/4, 7π/4]. The number and location(s) of the
local minima of this function are
Sin x has a maximum value of 1 at ,
π/2 and a minimum value of –1 at
3π/2 and at all angles conterminal with them.
The graph of f (x) = sin x is
In the interval
, it has one local minimum x= 3π/2
The protocol data unit (PDU) for the application layer in the Internet stack is
The PDU for Datalink layer, Network layer , Transport layer and Application layer are frame,
datagram, segment and message respectively.
Let A be the 2 × 2 matrix with elements a11 = a12 = a21 = +1 and a22 = −1. Then the eigenvalues of
the matrix A19 are
What is the complement of the language accepted by the NFA shown below?
Assume
= {a} and ε is the empty string.
Language accepted by NFA is a+, so complement of this language is {ε}
What is the correct translation of the following statement into mathematical logic?
“Some real numbers are rational”
Option A: There exists x which is either real or rational and can be both.
Option B: All real numbers are rational
Option C: There exists a real number which is rational.
Option D: There exists some number which is not rational or which is real.
Given the basic ER and relational models, which of the following is INCORRECT?
The term ‘entity’ belongs to ER model and the term ‘relational table’ belongs to relational model.
A and B both are true. ER model supports both multivalued and composite attributes See this for more details.
(C) is false and (D) is true. In Relation model, an entry in relational table can can have exactly one value or a NULL.
If we use a HAVING clause without a GROUP BY clause, the HAVING condition applies to
all rows that satisfy the search condition. In other words, all rows that satisfy the search
condition make up a single group. So, option P is true and Q is false.
S is also true as an example consider the following table and query.0
Select count (*)
From student
Group by Name
Output will be
The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with
n discs is
Let the three pegs be A,B and C, the goal is to move n pegs from A to C using peg B
The following sequence of steps are executed recursively
1.move n−1 discs from A to B. This leaves disc n alone on peg A --- T(n-1)
2.move disc n from A to C---------1
3.move n−1 discs from B to C so they sit on disc n----- T(n-1)
So, T(n) = 2T(n-1) +1
Let G be a simple undirected planar graph on 10 vertices with 15 edges. If G is a connected graph,
then the number of bounded faces in any embedding of G on the plane is equal to
We have the relation V-E+F=2, by this we will get the total number of faces,
F = 7. Out of 7 faces one is an unbounded face, so total 6 bounded faces.
Let W(n) and A(n) denote respectively, the worst case and average case running time of an
algorithm executed on an input of size n. Which of the following is ALWAYS TRUE?
The average case time can be lesser than or even equal to the worst case. So A(n) would be
upper bounded by W(n) and it will not be strict upper bound as it can even be same (e.g.
Bubble Sort and merge sort).
A(n) = O(W(n))
The amount of ROM needed to implement a 4 bit multiplier is
For a 4 bit multiplier there are 24 × 24 = 28 = 256 combinations.
Output will contain 8 bits.
So the amount of ROM needed is 28 ×8 bits = 2Kbits.
Register renaming is done in pipelined processors
Register renaming is done to eliminate WAR/WAW hazards.
Consider a random variable X that takes values +1 and −1 with probability 0.5 each. The values of
the cumulative distribution function F(x) at x = −1 and +1 are
The cumulative distribution function
F(x) = P(X
x)
F(-1) = P(X
-1) = X=-1) =0.5
F(+1)= P(X
+1) = P(X=-1)+P(X=-1) = 0.5 + 0.5 =1
Which of the following transport layer protocols is used to support electronic mail?
E-mail uses SMTP, application layer protocol which intern uses TCP transport layer protocol.
In the IPv4 addressing format, the number of networks allowed under Class C addresses is
For class C address, size of network field is 24 bits. But first 3 bits are fixed as 110; hence
total number of networks possible is 221
Which of the following problems are decidable?
1) Does a given program ever produce an output?
2) If L is a context-free language, then, is
also context-free?
3) If L is a regular language, then, is
also regular?
4) If L is a recursive language, then, is
also recursive?
CFL’s are not closed under complementation. Regular and recursive languages are closed
under complementation.
Given the language L = {ab, aa, baa}, which of the following strings are in L*?
1) abaabaaabaa
2) aaaabaaaa
3) baaaaabaaaab
4) baaaaabaa
L ={ab, aa, baa}
Let S1 = ab , S2 = aa and S3 =baa
abaabaaabaa can be written as S1S2S3S1S2
aaaabaaaa can be written as S1S1S3S1
baaaaabaa can be written as S3S2S1S2
Q. 36 to Q. 65 carry two marks each.
Q.
W hich of the following graphs is isomorphic to
The graph in option (A) has a 3 length cycle whereas the original graph does not have a 3 length cycle
The graph in option (C) has vertex with degree 3 whereas the original graph does not have a
vertex with degree 3
The graph in option (D) has a 4 length cycle whereas the original graph does not have a 4 length cycle
Let S be a non-serial schedule, without loss of generality assume that T1 has started earlier than T2. The first instruction of T1 is read(P) and the last instruction of T2 is write(P), so the precedence graph for S has an edge from T1 to T2, now since S is a non-serial schedule the first instruction of T2(read(Q)) should be executed before last instruction of T1(write(Q)) and since read and write are conflicting operations, the precedence graph for S also contains an
edge from T2 to T1, So we will have a cycle in the precedence graph which implies that any non serial schedule with T1 as the earliest transaction will not be conflict serializable.
In a similar way we can show that if T2 is the earliest transaction then also the schedule is not conflict serializable.
The bisection method is applied to compute a zero of the function f(x) = x4 – x3 – x2 – 4 in the
interval [1,9]. The method converges to a solution after ––––– iterations.?
What is the minimal form of the Karnaugh map shown below? Assume that X denotes a don’t care
term.
Consider the 3 processes, P1, P2 and P3 shown in the table.
The completion order of the 3 processes under the policies FCFS and RR2 (round robin scheduling
with CPU quantum of 2 time units) are
For FCFS Execution order will be order of Arrival time so it is P1,P2,P3
Next For RR with time quantum=2,the arrangement of Ready Queue will be as follows:
RQ: P1,P2,P1,P3,P2,P1,P3,P2
This RQ itself shows the order of execution on CPU(Using Gantt Chart) and here it gives the completion order as P1,P3,P2 in Round Robin algorithm.
1. Acquire lock (L) {
2. While (Fetch_And_Add(L, 1))
3. L = 1.
}
4. Release Lock (L) {
5. L = 0;
6. }
Let P and Q be two concurrent processes in the system currently executing as follows
P executes 1,2,3 then Q executes 1 and 2 then P executes 4,5,6 then L=0 now Q executes 3
by which L will be set to 1 and thereafter no process can set
L to zero, by which all the processes could starve.
Suppose a fair six-sided die is rolled once. If the value on the die is 1, 2, or 3, the die is rolled a
second time. What is the probability that the sum total of values that turn up is at least 6??
Since half of 4096 host addresses must be given to organization A, we can set 12th bit to 1 and include that bit into network part of organization A, so the valid allocation of addresses to A is 245.248.136.0/21
Now for organization B, 12th bit is set to ‘0’ but since we need only half of 2048 addresses,
13th bit can be set to ‘0’ and include that bit into network part of organization B so the valid allocation of addresses to B is 245.248.128.0/22
The counter example for the condition full : REAR = FRONT is
Initially when the Queue is empty REAR=FRONT=0 by which the above full condition is satisfied which is false
The counter example for the condition full : (FRONT+1)mod n =REAR is
Initially when the Queue is empty REAR=FRONT=0 and let n=3, so after inserting one element REAR=1 and FRONT=0, at this point the condition full above is satisfied, but still there is place for one more element in Queue, so this condition is also false
The counter example for the condition empty : (REAR+1)mod n = FRONT is
Initially when the Queue is empty REAR=FRONT=0 and let n=2, so after inserting one element REAR=1 and FRONT=0, at this point the condition empty above is satisfied, but the queue of capacity n-1 is full here
The counter example for the condition empty : (FRONT+1)mod n =REAR is
Initially when the Queue is empty REAR=FRONT=0 and let n=2, so after inserting one element REAR=1 and FRONT=0, at this point the condition empty above is satisfied, but the queue of capacity n-1 is full here
Consider
Access link is defined as link to activation record of closest lexically enclosing block in
program text, so the closest enclosing blocks respectively for A1 ,A2 and A21 are main , main and A2
How many onto (or surjective) functions are there from an n-element (n
2) set to a 2-element set?
Total number of functions is 2n, out of which there will be exactly two functions where all
elements map to exactly one element, so total number of onto functions is 2n-2
Let G be a complete undirected graph on 6 vertices. If vertices of G are labeled, then the number of
distinct cycles of length 4 in G is equal to
A list of n strings, each of length n, is sorted into lexicographic order using the merge-sort
algorithm. The worst case running time of this computation is
The height of the recursion tree using merge sort is logn and n2 comparisons are done at each
level, where at most n pairs of strings are compared at each level and n comparisons are
required to compare any two strings, So the worst case running time is O( n2 log.
Let d[v] represent the shortest path distance computed from ‘S’
Initially d[S]=0, d[A] = ∞, d[B] = ∞.− − − −,d[T] = ∞
And let P[v] represent the predecessor of v in the shortest path from ‘S’ to ‘v’ and let P[v]=-1
denote that currently predecessor of ‘v’ has not been computed
→ Let Q be the set of vertices for which shortest path distance has not been computed
→ Let W be the set of vertices for which shortest path distance has not been computed
→ So initially, Q = {S,A,B,C,D,E,F,G,T},W = f
We will use the following procedure
Repeat until Q is empty
{
1 u = choose a vertex from Q with minimum d[u] value
2. Q = Q − u
3. update all the adjacent vertices of u
4. W = W U{u}
}
d[S] = 0, d[A] = ∞, d[B] = ∞ ,………, d[T] = ∞
Step 1 : u = S
Step 2 : Q ={A}
Iteration 2:
Step 1: u= B
Step 2 :Q =,B}
Iteration 3:
Step 1: u= A
Step 2 :Q ={C,D,E,F,G,T}
step 3: final values after adjustment
d[S] = 0,d[A] = 4, d[B] = 3,d[C] = 5,d[D] = 7,d[E] = ∞ − −−,d[T] = ∞
P[A] = S, P[B] = S,P[C] = A,P[D] = S,P[E] = −1− −−,P[T] = −1
Step 4 : W={S,B,A}
Iteration 4:
Step 1: u= A
Step 2 :Q ={D,E,F,G,T}
step 3: final values after adjustment
d[S] = 0,d[A] = 4, d[B] = 3,d[C] = 5,d[D] = 7,d[E] = 6 − −−,d[T] = ∞
P[A] = S, P[B] = S,P[C] = A,P[D] = S,P[E] = −1− −−,P[T] = −1
Step 4 : W={S,B,A,C}
Iteration 5:
Step 1: u= E
Step 2 :Q ={D,F,G,T}
step 3: final values after adjustment
d[S] = 0,d[A] = 4, d[B] = 3,d[C] = 5,d[D] = 7,d[E] = 6,d[F] = ∞,d[G] = 8,d[T] = 10
P[A] = S, P[B] = S,P[C] = A,,P[D] = S,P[E] =C, P[F]= −1, P[G]=E, P[T] = E
Step 4 : W={S,B,A,C,E}
After iteration 5, we can observe that P[T]=E , P[E]=C , P[C]=A , P[A]=S,
So the shortest path from S to T is SACET
Each block size = 128 Bytes
Disk block address = 8 Bytes
Each disk can contain =
128
16
8
= addresses
Size due to 8 direct block addresses: 8 x 128
Size due to 1 indirect block address: 16 x 128
Size due to 1 doubly indirect block address: 16 x 16 x 128
Size due to 1 doubly indirect block address: 16 x 16 x 128
FIFO
1 1 1 4 4 4
2 2 2 1 1
3 3 3 2 → (6) faults
Optimal
1 1 1 1 1
2 2 4 4
3 3 2 → (5) faults
LRU
1 1 1 4 4 4 2 2 2
2 2 2 2 3 3 3 1
3 3 1 1 1 4 4 → (9) faults
Optimal < FIFO < LRU
Suppose R1(A, B) and R2(C, D) are two relation schemas. Let r1 and r2 be the corresponding
relation instances. B is a foreign key that refers to C in R2. If data in r1 and r2 satisfy referential
integrity constraints, which of the following is ALWAYS TRUE?
Consider a source computer (S) transmitting a file of size 106 bits to a destination computer (D)
over a network of two routers (R1 and R2) and three links (L1, L2, and L3). L1 connects?
Transmission delay for 1 packet from each of S, R1 and R2 will take 1ms
Propagation delay on each link L1, L2 and L3 for one packet is 1ms
Therefore the sum of transmission delay and propagation delay on each link for one packet is
2ms.
The first packet reaches the destination at 6thms
The second packet reaches the destination at 7thms
So inductively we can say that 1000th packet reaches the destination at 1005th ms.
Consider
; // Box 1
else { h1 = height (n → left);
if (n → right == NULL) return (1+h1);
else { h2 = height (n → right);
return
; // Box 2
}
}
}
The appropriate expressions for the two boxes B1 and B2 are
int height (treeptr n)
{ if (n = = nu11) return – 1;
→/* If there is no node, return -1 */
if (n→left ==NULL) ® / * If there is no left child for node 'n ' * /
if (n→right == NULL) return O;
→ / *If no left child & no right child for 'n ', return */
else return (1+height (n→right ) );
→/* If no left child, but there is a right child, then compute height
for right sub tree. Therefore total height is 1+ height (n→right ) */
else { → / * If there exist left child node for node ‘n’ */
h1 = height (n→left );
→ / * First Find the height of left sub tree for node ‘n’ */
If (n → right == NULL) return (1+ h1);
|CS-GATE-2012 PAPER|
India’s No.1 institute for GATE Training 1 Lakh+ Students trained till date 65+ Centers across India
11
→/ * If there exist left child and no right child and no right child
for a node ‘n’, then total height
= height from (n to n→left ) + left sub tree height
=1 + height (n →left ) = 1 + h1 */
else {h2 = height (n →right ) ;
→/* If there exist right child also, then compute height of right
sub tree for a node ‘n’ */
return ( ( )) 1 2 1+ max h , h ;
→ / * Total height for node ‘n’=
1 + Max (Left Subtree height, Right sub tree height)
= 1 + Max (h1, h2) */
}
} if:
Line 1 is replaced by auto int a = 1;
Line 2 is replaced by register int a = 2;
Consider the following relations A, B and C:
How many tuples does the result of the following relational algebra expression contain? Assume
that the schema of A∪B is the same as that of A.
Consider the following relations A, B and C:
How many tuples does the result of the following SQL query contain? FIRST and FOLLOW sets for the non-terminals A and B are appropriate entries for E1, E2, and E3 number of bits in the tag field of an address size of the cache tag directory is
Doc | 5 Pages
Doc | 1 Page
Doc | 1 Page
Doc | 1 Page
Test | 65 questions | 180 min
Test | 65 questions | 180 min
Test | 65 questions | 180 min
Test | 65 questions | 180 min
Test | 65 questions | 180 min | https://edurev.in/course/quiz/attempt/-1_Computer-Science-And-Information-Technology-CS-201/e562ce8e-be07-42b3-91e8-8b80c5225247 | CC-MAIN-2021-31 | refinedweb | 3,400 | 59.26 |
Notice that launch is a non-blocking call that creates and adds a coroutine to the queue. If you want to create a blocking call to create and add a coroutine then use the coroutineScope function. This creates a new scope that only unblocks when all of the coroutines that have been created within it have finished.
For example, if you change launch to coroutineScope:
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delayimport kotlinx.coroutines.runBlocking
fun main() {
println("main start") runBlocking {
println("Coroutine1 start")
coroutineScope {
co2() }
for (i in 1..20) {
print(i)
delay(1)
}
println(" Coroutine1 finishing")
}
println("main stopped")
}
suspend fun co2() {
println("Coroutine2 start")
for (i in 1..10) {
print(i)
delay(1)
}
println(" Coroutine2 finishing")
}
you will see:
main start
Coroutine1 start
Coroutine2 start
12345678910 Coroutine2 finishing
1234567891011121314151617181920 Coroutine1 finishing
main stopped
Notice that now Coroutine1 is suspended until Coroutine2 has finished even if Coroutine2 suspends itself repeatedly for 1ms. This is the idea of structured asynchronous code. Each coroutineScope block does not move on until all of the coroutines it contains have finished. You can also use the block to cancel, or otherwise modify, all of the coroutines it contains and if a contained coroutine fails then the entire block fails, more of this later. You can use coroutineScope blocks to organize and control your coroutines.
At this point you might be wondering what the difference is between coroutineScope and runBlocking? The answer is that you can use runBlocking from a non-suspending, i.e. a standard function, but you can only use coroutineScope from within another suspending function. In other words, you can use runBlocking to get coroutines started from the main function, but after that you should use coroutineScope to repurpose the thread.
Now we have two ways to run coroutines – launch which places a coroutine in the dispatcher’s queue and returns, and coroutineScope which does the same but then waits until the coroutine has completed. This means you can now schedule coroutines to run sequentially or concurrently.
In the case of sequentially:
runBlocking {
coroutineScope {
co1()
}
coroutineScope {
co2()
}
}
the outer coroutine places co1 and then co2 into the queue. The co1 function is run when the outer coroutine ends and runs to completion, even if it is suspended multiple times, before the co2 function is run. The runBlocking doesn’t return until both co1 and co2 have completed. If the two functions being called are marked as suspend you can drop the use of coroutineScope and simply call the functions:
runBlocking {
co1()
co2()
}
This works because both functions can contain suspension points and are therefore run asynchronously even if they occupy a single position in the dispatcher’s queue. That is, co2 will not run until co1 has finished even if it suspends. This form is slightly more efficient but only works with functions that have the suspend modifier and the functions are treated as a single job.
The alternative is concurrently:
runBlocking {
launch {
co1()
}
launch {
co2()
}
}
In this case co1 and co2 are added to the dispatcher’s queue as before and in a single threaded dispatcher co1 is started when the outer coroutine is finished. The difference is that now co2 is started is co1 suspends and co1 only restarts if co2 suspends or completes. When the runBlocking is finished you can say that co1 and co2 are complete but not the order in which they finished.
A function defined with the suspend qualifier can have suspension points, places where its execution can be suspended and resumed at a later date.
All suspendable functions have to be run in a CoroutineScope which only terminates when all of the coroutines it contains finish or fail.
You can create a CoroutineScope using runBlocking which uses Main, i.e. the UI thread, to run any of the coroutines defined within it. As the Main thread is used, runBlocking blocks the program that calls it.
You can use delay or yield to create suspension points within a coroutine.
Coroutines run to completion unless they are suspended or canceled.
The launch method adds coroutines to the dispatcher’s queue and continues. How the coroutine is run depends on the dispatcher.
The coroutineScope method adds a coroutine to the dispatcher’s queue and waits until it finishes.
The behavior of coroutines depends on the dispatcher used. Main dispatcher is a single-threaded dispatcher generally used to update the UI. Default dispatcher has at least two threads and is generally used for CPU intensive tasks and the IO dispatcher typically has 64 threads and is recommended for non-CPU intensive IO tasks.
Sharing resources between coroutines is tricky unless you restrict yourself to a single thread. If not you need to use atomic operations or locks.
If you want a coroutine to return a result use the async method and await the result on the defer object it returns.
Coroutines that have suspension points are generally canceled automatically. If not you need to test the isActive property and stop the coroutine manually.
Flows are asynchronous for loops in function form.
Channels are asynchronous communication “pipes” between coroutines.
This article is an extract from:
You can buy it from: Amazon
<ASIN:1871962706>
<ASIN:B096MZY7JM> | https://i-programmer.info/programming/other-languages/14626-the-programmers-guide-to-kotlin-coroutines.html?start=2 | CC-MAIN-2021-25 | refinedweb | 867 | 62.27 |
Java Profiling with WSAD 5.0
After profiling your modified code, you can observe that the number of "Live Instances" of the DummyClass is 0 and the number of "Collected" instances equals to the number "Total" instances of the DummyClass. (See Figure 8.)
Figure 8
In this example, you have observed that the use of this profiling tool helps you identify and correct memory leaks.
Hands-on Example: Identifying Method Response Times
In the second example, you will identify method response times using the WSAD Profiling Tool. In the class below, you are calling a number of methods that each sleep for a predefined number of seconds. In this example, you will repeat Steps 1–7 from the previous section. (The link to the full source code of this class, ProfileTestTwo.java, is located in the zip file at the end of this article.)
1. package com.profile.examples; 2. 3. public class ProfileTestTwo { 4. 5. public void methodOne() throws InterruptedException { 6. Thread.sleep(1000); 7. } 8. 9. public void methodTwo() throws InterruptedException { 10. Thread.sleep(2000); 11. } 12. 13. public void methodThree() throws InterruptedException { 14. Thread.sleep(3000); 15. } 16. 17. 18. public static void main(String[] args) throws InterruptedException{ 19. ProfileTestTwo testTwo = new ProfileTestTwo(); 20. for (int i = 0; i < 10 ; i++) { 21. testTwo.methodOne(); 22. testTwo.methodTwo(); 23. testTwo.methodThree(); 24. } 25. } 26. }
This time, you are interested in a different Profiling Perspective to observe the results. After the profiling session finishes its execution, right-click on the default monitor --> Open With --> Method Statistics.
Figure 9
You can observe that the methodThree() method takes the longest cumulative time because it spends the longest amount of time in sleeping mode.
Conclusion
In this article, you have learned how to use the WSAD Profiling Tool to identify memory leaks and method invocation times. The tool benefits developers because it is a means to identify issues that are not evident on the surface.
Download the Code
You can download the code that accompanies this article here.
About the Author
Aleksey Shevchenko has been working with object-oriented languages for over seven years. He has been implementing Enterprise IT solutions for Wall Street and the manufacturing and publishing industries.
Page 2 of 2
| http://www.developer.com/java/ent/article.php/10933_3598166_2/Java-Profiling-with-WSAD-50.htm | CC-MAIN-2015-35 | refinedweb | 372 | 57.06 |
Sometime back I was looking for a way to search Google using Java Program. I was surprised to see that Google had a web search API but it has been deprecated long back and now there is no standard way to achieve this.
Basically google search is an HTTP GET request where query parameter is part of the URL, and earlier we have seen that there are different options such as Java HttpUrlConnection or Apache HttpClient to perform this search. But the problem is more related to parsing the HTML response and get the useful information out of it. That’s why I chose to use jsoup that is an open source HTML parser and it’s capable to fetch HTML from given URL.
So below is a simple program to fetch google search results in a java program and then parse it to find out the search results.
Copypackage com.journaldev.jsoup; import java.io.IOException; import java.util.Scanner; import org.jsoup.Jsoup; import org.jsoup.nodes.Document; import org.jsoup.nodes.Element; import org.jsoup.select.Elements; public class GoogleSearchJava { public static final String GOOGLE_SEARCH_URL = ""; public static void main(String[] args) throws IOException { //Taking search term input from console Scanner scanner = new Scanner(System.in); System.out.println("Please enter the search term."); String searchTerm = scanner.nextLine(); System.out.println("Please enter the number of results. Example: 5 10 20"); int num = scanner.nextInt(); scanner.close(); String searchURL = GOOGLE_SEARCH_URL + "?q="+searchTerm+"&num="+num; //without proper User-Agent, we will get 403 error Document doc = Jsoup.connect(searchURL).userAgent("Mozilla/5.0").get(); //below will print HTML data, save it to a file and open in browser to compare //System.out.println(doc.html()); //If google search results HTML change the <h3 class="r" to <h3 class="r1" //we need to change below accordingly Elements results = doc.select("h3.r > a"); for (Element result : results) { String linkHref = result.attr("href"); String linkText = result.text(); System.out.println("Text::" + linkText + ", URL::" + linkHref.substring(6, linkHref.indexOf("&"))); } } }
Below is a sample output from above program, I saved the HTML data into file and opened in a browser to confirm the output and it’s what we wanted. Compare the output with below image.
CopyPlease enter the search term. journaldev Please enter the number of results. Example: 5 10 20 20 Text::JournalDev, URL::= Text::Java Interview Questions, URL::= Text::Java design patterns, URL::= Text::Tutorials, URL::= Text::Java servlet, URL::= Text::Spring Framework Tutorial ..., URL::= Text::Java Design Patterns PDF ..., URL::= Text::Pankaj Kumar (@JournalDev) | Twitter, URL::= Text::JournalDev | Facebook, URL::= Text::JournalDev - Chrome Web Store - Google, URL::= Text::Debian -- Details of package libsystemd-journal-dev in wheezy, URL::= Text::Debian -- Details of package libsystemd-journal-dev in wheezy ..., URL::= Text::Debian -- Details of package libsystemd-journal-dev in sid, URL::= Text::Debian -- Details of package libsystemd-journal-dev in jessie, URL::= Text::Ubuntu – Details of package libsystemd-journal-dev in trusty, URL::= Text::libsystemd-journal-dev : Utopic (14.10) : Ubuntu - Launchpad, URL::= Text::Debian -- Details of package libghc-libsystemd-journal-dev in jessie, URL::= Text::Advertise on JournalDev | BuySellAds, URL::= Text::JournalDev | LinkedIn, URL::= Text::How to install libsystemd-journal-dev package in Ubuntu Trusty, URL::= Text::[global] auth supported = cephx ms bind ipv6 = true [mon] mon data ..., URL::= Text::UbuntuUpdates - Package "libsystemd-journal-dev" (trusty 14.04), URL::= Text::[Journal]Dev'err - Cursus Honorum - Enjin, URL::=
That’s all for google search in a java program, use it cautiously because if there is unusual traffic from your computer, chances are Google will block you.
Zohidbek says
How can I treat a leap year program in java? Please, share your ideas!
Aidan says
How can i get this program to only return image URL’s? when i change the GOOGLE_SEARCH_URL to , nothing prints out at the end of the program. My knowledge of java is very basic so if anyone can help me out thank you very much!
Manish Mandal says
We are not gettng the full result html as we normally see in the browser. All the sponsored links are not appearing in the result.
anuja tatpuje says
I am getting this exception
Exception in thread “main” java.io.IOException: 400 error loading URL thoughts&num=2
Star Apple says
Hi,
Thanks for the tutorial. I have a followup question.
What if I want to search multiple keywords in one go in the same way?
Please help !
André Vilela says
It’s easier if you open directly to predefined browser
if (Desktop.isDesktopSupported())
try {
Desktop.getDesktop().browse(new URI(searchURL));
} catch (IOException e1) {
e1.printStackTrace();
} catch (URISyntaxException e1) {
e1.printStackTrace();
}
and you don’t need the number of results, just the searchTerm
String searchURL = GOOGLE_SEARCH_URL + “?q=”+searchTerm
Alex says
Thanks amigo!
sasmi samantaray says
sir,plz tell how can i coding to insert a video in project using jsp and java program…
De Saha says
I am unable to compile since the computer says “package org.jsoup does not exist”
what do I do?
BASANT KUMAR says
Hi,
How to get content for particular search in other words i need to implement URL , Title And body of content also,
in above example i can get only title and URL for particular search.
please reply as soon as possible.
Thanks
Rajinder says
To get body of content you can use following code:
Elements results = doc.select(“span.st”);
for (Element result : results) {
String linkText = result.text();
System.out.println(“Text::” + linkText );
}
Shama says
hello Rajinder Sir
Using above snippet i am only able to retrieve one line of page. how can i retrieve the text of full
page whose URL is being generated. Reply asap.
Thankuuu
Rajinder says
Hi Shama,
By using following snippet you will be able to retrieve text of document in html.
System.out.println(doc.body());
or System.out.println(doc.tostring());
Thanks
Santosh says
Thanks a lot Rajinder Kaur Mam.. your comment helped me to extract html content 🙂
Ps17 says
I am getting an Exception in thread “main” java.net.ConnectException: Connection refused: connect
here Document doc = Jsoup.connect(searchURL).userAgent(“Mozilla/5.0”).get();
Rajinder says
Check your installed Mozilla version or you can try the following statment :
Document doc = Jsoup.connect(searchURL).userAgent(“Mozilla”).get();
Vijay says
Thanks Pankaj
ozan says
How could I get snippets under links as well?
shama says
thankuu its really helpful for me.
Shashank Makkar says
Hi Pankaj,
Thank for sharing this. But I am in a dilemma whether to use their interface like “/search” or not as according to google it’s considered as illegal.
I have also checked their robot.txt file:
interface: /search is not allowed.
So if I use this interface for 10Millions times in my java program, it will definitely create network congestion for Google (particularly on this exposed interface) and the problem to me. Isn’t?
But before that please assist me in screen scraping activity I am doing.
I am trying to fetch data provided by Google up-front like for word-meaning using jSoup:
Thanks in Anticipation | https://www.journaldev.com/7207/google-search-from-java-program-example | CC-MAIN-2019-13 | refinedweb | 1,178 | 57.67 |
ISC Releases the First Look At BIND 10 172.)"
DJB might agree (Score:5, Insightful)
Re: (Score:3, Informative)
Right, much better to write code under some bizarre license, ignore it for years forcing people to distribute patches unto patches, then 6 years later finally realize you're not maintaining the code and never will and finally release it under a sane persons license.
Bizzarro world of DJB haters (Score:3, Interesting)
Enforcing your copyright over original content is a bizarre license scheme? Patching considered bad? Actually doing something you promised is wrong? Public Domain is a license?
Wow, you really have drunk the DJB haters kool-aid.
Re: (Score:2)
Enforcing your copyright over original content is a bizarre license scheme?
Releasing source code, but refusing to allow anyone to modify it and not maintaining it yourself is rather bizarre, yes.
Patching considered bad?
Yes. Forcing people that actually DO want to maintain your code to do so by collecting together a series of patches and apply them to your original code is rather poor software distribution and maintenance.
Actually doing something you promised is wrong?
I never said what he did was "wrong", b
Re: (Score:3, Informative)
(huh? Please describe)
He distributed source code, but didn't allow anyone to modify it. Thus why people distributed a series of patches to the software. People have some strange hero worship of Bernstein, but don't understand that an author who abandons his code but doesn't allow anyone else to modify it isn't deserving of much respect.
(Oh, and there are other free, open source alternatives to BIND, so saying both programs suck in different ways and better alternatives exist is perfectly valid)
Re: (Score:2)
Have you found a DNS program that works faster and is more stable and secure than the current version of tinydns yet? Just curious.
Dan's very possessive of his software, like most people who write 99% of their own code, and doesn't believe in modern Copyright (thus the unofficial open source status of his software), but he does write very good code, and its in use by a lot of people for that reason.
Re: (Score:2)
Could apply to any version of BIND.
That was my first thought, having given up on BIND years ago in favor of the vastly more efficient, user-friendly, and -- most importantly -- bug free djbdns.
After all this time, the best they can do is something they themselves admit is crap, and they plan to take years to make it less crappy? That's really stunning, and not in a good way. We are, after all, talking about a key/value store. Thank goodness they didn't try something that wasn't appallingly well-understood already.
Re: (Score:2)
What definition of "bug free" are you using there? Is it the one where DJB pretends bugs don't exist for years by handwaving them as user error? And how is a piece of software user-friendly or efficient when you have to install the author's NIH-syndrome init and xinetd replacements just to use it?
Re: (Score:2)
I'm sorry, which version of xinetd and init tracks both the daemon and its logger daemon as a unit and ensures they are always piped together?
Re: (Score:2)
Do you have a citation for that?
I know of exactly one DJBDNS bug:
djbdns<=1.05 lets AXFRed subdomains overwrite domains [gmane.org]
Afaik that bug was acknowledged (and paid for) rather quickly.
As a happy djbdns user I'd be curious to learn about other bugs that I've missed?
how many times are they going to rewrite it? (Score:2)
Re: (Score:2)
I thought bind 9 was a rewrite from scratch? They did such a crappy job, they have to do it again for 10?
Yes.
Next question?
djbdns users register here (Score:2)
Yes, yes, we realize djbdns is far more secure. And that DJB is ornery.
Instead of peppering the whole forum with "djbdns is great", just respond to this thread.
Frist!
Re: (Score:2)
I was thinking "::crickets chirping::", but
... your version is a bit more, uh, colorful.
What's the point of a rewrite... (Score:2)
...if you're doing it to end up with new code that is "inefficient, difficult to work with, and riddled with bugs"?
Was the original code too efficient, well-commented and well-tested and they couldn't live with that?
Re: (Score:3, Funny)
Why, backwards-compatibility with BIND 8 and 9, of course!
A Monument to "Software Engineering" (Score:2) deve
Re:A Monument to "Software Engineering" (Score:5, Insightful)
BIND is thirty years old and a core piece of Internet infrastructure.
Actually, BIND 9 -- "the most popular DNS implementation on the Internet," according to the submitter -- is merely 10 years old, and was itself a major rewrite of BIND 8. BIND 8 was only declared "end of life" in 2007.
That a completely new design and re-write of such a fundamentally important piece of software is "inefficient, difficult to work with, and riddled with bugs" highlights the continuing immaturity of the computer software industry.
Really. So the fact that a software developer plans to take "the next couple of years" (again, re: the submitter) to complete a software project is symptomatic of the total failure of an entire industry. Interesting perspective. Thanks for that.
Re: (Score:2)
Re: (Score:2).
Re: (Score:2)
Wouldn't this be an ideal target for test driven development
Depends on the difficulty of running meaningful tests. Moreover, testing an application architecture is rather more difficult than testing individual units that plug into such an architecture. (One of the goals of an architecture ought to be that it allows the testing of modules plugged into it without doing a full run of the whole mess, i.e., that it enables TDD. Getting to that stage isn't trivial; if you think it is, that's probably because you've never tried writing one for real, and have just been leve
Does not look great, honestly. (Score:2)
So instead of 1 daemon I'll now get 3-4 running daemons interacting in strange ways? Thanks, that's exactly what I need.
How about scriptability and/or custom resolvers? Nope, none of this.
Oh well, probably I should switch to DJBDns. It also uses a ton of daemons, but at least it's architectured properly.
That's "designed" (Score:2)
"Architecture" is a noun. "Design" is a verb (or a noun). There's no "architectured".
Re: (Score:2)
We've just witnessed the birth of a new buzz word.
Re: (Score:2)
The OED begs to differ. It has an entry for "architecture" as a verb, and quotes some major English writers as sources.
Re: (Score:2)
"Architecture" is a noun. "Design" is a verb (or a noun). There's no "architectured".
I thought any noun could be verbed.
What's so hard about this? (Score:2)
Most of the trouble with BIND stems from the fact that it's a database app with its own database implementation. BIND10 uses SQLite, which already works. That ought to simplify the thing enormously.
Building in a web server for BIND administration is probably the source of much of the complexity.
Re: (Score:2)
Re: (Score:2)
Why should everything use the same database? A file system is a type of database. SQL is another. Each has it's own purpose. SQLite is contained in a file anyways. A separate database server wouldn't have to be setup for this.
Generic back-end (Score:2)
The design for BIND 10 allows for generic back-ends. We implemented SQLite as the first one, simply because it was the easiest. One of our early goals for the second year of development is to support additional database back-ends (we call them "data sources"), including MySQL, PostgreSQL, and an in-memory 'database' (for performance-critical environments).
In the end we'll also support more exotic back-ends, like BDB, LDAP, directories, and possibly even the tinydns data format.
[ disclaimer - I am the BIND 1
Re: (Score:2)
First of all I agree, building a webserver for something as critically important as a DNS resolver is completely asshat if that is what they are doing.
But I disagree with you. Any dns resolver should be as complete an island as possible, depending on as little as possible, the fewer other subsystems it has to rely on the less points of failure there are.
This should be a very straight forward hash table, loaded from into ram, all entries mapped to either upper or lower case and then the queries hashed and t
Re: (Score:2)
BIND isn't a DNS resolver.
Re: (Score:2)
Ummm...this "database" isn't relational, there's no inner joins or anything like that (at least there shouldn't be), it's a one-to-one lookup (text string to IP address).
It's not the sort of thing which takes ten revisions just to get to a state where it's "inefficient, difficult to work with, and riddled with bugs".
Re: (Score:2)
DNS is not naturally a data structure suitable for relational databases. Any SQL is a bad choice because SQL is a bad choice. Something like Berkeley DB might have been better, or perhaps some of these [wikipedia.org].
Re: (Score:2)
They could've learned from how fast one of their detractors' systems work -- tinydns uses a BDB-like database system for storage as well, and is extremely fast. I think there are even more problems with how BIND handles memory management and historically doesn't understand that resolving and serving are completely different concepts.
Years? (Score:2)
These problems will all be fixed over the next couple of years
I admit complete ignorance in this area, so please educate me if this sounds stupid -- but surely writing a DNS server can't be that hard?
Re: (Score:2)
Are you kidding? It is software written by committee which always sucks. What other examples, try http, css, xhtml, xml, etc. etc. the list is endless.
Additionally the entire DNS system is one pile of legacy crap with a on of kludges to support this or that interest group.
Just be glad there are alternatives.
And you are correct, it should just be a database that responds to a very simple query, here is the domain name, here is the record type, return the IP address.
But it is far more then that. Depending
Re: (Score:2).)
Re: (Score:2)
Re: (Score:2)
Ok, if you want it to simply carry out lookups and return answers then fair enough.
If how ever you want to do more a quick set of things to consider (this is purely off the top of my head)
0. Security
1. Validation of the various record types
2. Caching of lookups
3. Proper use of the dns heirarchy
4. Security
5. Should be easy to manage
6. Zone transfers
7. Speed... slow dns will be no use to man nor beast
8. Security
9. Compliant to the relevant RFC's
10. Dynamic DNS support
Ok, I've put security in a few times but i
What is being thrown out? (Score:2)
Which major features in bind9 are going to be thrown out (and stay out even beyond beta) for bind10?
Yet again (Score:2)
Seriously? The idea is to go for yet another rewrite? And it sounds like it's going to be a half-assed database backing (SQLite? Is this right?)? Why not just move to an abstracted storage backend, and let the admin pick what works for him (or write his own backend plugin)? You know, like PowerDNS has been doing for awhile now. Seriously, guys, let's just stop using BIND and move to a better nameserver; it really seems like ISC is going to be rewriting BIND until the heat death of the universe.
Re: (Score:2) [slashdot.org]
riddled with bugs (Score:2)
'This code is not intended for general use, and is known to be inefficient, difficult to work with, and riddled with bugs.'
If this is indeed a true statement this code is doomed and should be thrown away right now.
If they don't do it right from the start they will spend the rest of forever turd-polishing.
But what about the bloat? (Score:2)
There's no mention of the bloat of BIND9. Will it be carried into BIND10? Are they reimplementing all the bloat from the ground up?
I'll stick with NSD [nlnetlabs.nl] and Unbound [unbound.net].
Re: (Score:2)
Well they're probably not going to cull features and probably going to design more efficiently, but it raises the question - what's better about this rewrite than, say, unbound, with several years' head-start in the rewrite race?
Bind? (Score:2)
Is there still a lot of Bind users out there?
NSD and Unbound are way better, but they aren't the only worthy alternatives.
Future direction? (Score:2)
DNS for IPv6 will have to know a whole lot more about which address to dish out 1st than current versions of BIND and I'm not sure how long it will take to get a good handle on that problem.
I'm old school so I like dedicated hardware for my DNS servers. I run bsd jails that don't have anything but bind running. I used to run solaris servers that had init running named running off a read only scsi disk that was shared with another server. Init ran another program that would mount the file system read only,
Re: (Score:2)
The reason why anyone would need to do all that was both BIND4 and BIND8 were pieces of crap. BIND9 was a bit better but still...
Anyway, if it's a different team doing BIND10, maybe they might produce something better.
Re:Great. Just what the DNS infrastructure needs (Score:5, Insightful)
Yes. As opposed to hacking any new functionality that's needed into all that existing cruft and introducing subtle, hard-to-understand bugs and security vulnerabilities. Which is the trade-off, after all.
(We don't have to stop all development on anything new in the future ever just because we have one mature codebase. It's not like we're all deploying the stuff tomorrow.)
Re: (Score:2)
In my opinion, if you're going to start over, you start a new project. You start small, and you build a solid base of code. You don't get something that the authors admit is "riddled with bugs"
BIND 10 committee metings (Score:3, Informative)
There is no "BIND 10 committee", but we do have weekly conference calls. Minutes from these are published on our Trac site: [isc.org]
[ disclaimer: I am the BIND 10 project manager ]
Re: (Score:2)
That still doesn't answer the question, "Why the heck wasn't BIND fixed a long time ago? You've had TWENTY FIVE YEARS!!!!!"
Re: (Score:2, Flamebait)
Seriously. "Riddled with bugs"? The implication is that nobody at ISC knows how to write good software. Not really surprising. Bind 4 was a mess. Bind 8 was a mess. Bind 9 was a mess.
"Insanity: doing the same thing over and over again and expecting different results." (Einstein)
They need to start over using sane software design methodology. That probably means hiring competent software engineers.
Re: (Score:2)
Re:Great. Just what the DNS infrastructure needs (Score:5, Insightful)
Tests are great for finding bug/problems you have already thought about. They are great for making sure that you don't make the same mistake again. However they don't reliably cover things you have not yet thought about. It is also really hard to write tests that cover complicated network interaction... and that is percicely what Bind must do.
Re: (Score:2, Insightful)
Re: (Score:2)
hiring people isn't a solution to anything.
That's like asking someone to figure out how to prevent a situation that has never occurred.
you can plan and plan and plan, but you're not going to have a fallback for everything that can possibly happen.
Re: (Score:2)
It responds with an IP address given a name.
How exactly is that "complicated network interaction"?
Yes, yes.. i know, we have Dynamic updates, DNSSec, etc.. now.. but come on, how hard is it to get the basics solid, then move on to the rest?
Re: (Score:2)
That's arguably why DJB wrote tinydns -- do the simple things well and correctly.
The caching resolver portion however is what allows for cache poisoning attacks and some other interesting Internet security holes in the last decade.
Re:Great. Just what the DNS infrastructure needs (Score:5, Informative)
We wrote lots of tests. (How else would we know it has bugs in it?) This is a somewhat fair criticism of BIND 9, but read the link before you assume we didn't learn any lessons from the past. The unit tests are included in the tarball and coverage results are viewable online [isc.org].
Re: (Score:2)
Dude, you have fucking got to be joking!
155
// should we refactor this code using, e.g, the state pattern? Probably // not at this point, as this is based on proved code (derived from BIND9) // and it's less likely that we'll have more variations in the domain name // syntax. If this ever happens next time, we should consider refactor // the code, rather than adding more states and cases below.
156
157
158
159
160 while (ndata.size() 161 unsigned char c = *s++;
162
163 switch (state)
Re: (Score:2)
Using "s" to refer to a string and "c" to refer to successive characters in it is a common C idiom, and will be immediately understood by any competent C programmer.
Re: (Score:2)
I'm assuming s and c are part of the idiom in this code. And it's good practice to declare variables in the smallest possible scope, and init them at the same time. It sounds like you think it's inefficient, but any decent compiler will optimize away 'c'; it's only there for readabi
Re: (Score:2)
Re: (Score:2)
Yes using i is a common idiom in C when using a throw away integer for loop control, its intent is clear,
In this code ( please go read the rest of it ) the variable c referes to s all over the place and these is nothing really explaining it. While being terse does have its merits as the example you showed indicates ( the scope is limited to a simple 5 line function, that kind of terseness does not belong spread over 50 lines of code.
As an initializer you really have no idea what you are initializing with u
Re: (Score:2)
I'm going to say that s and c are a string and a character, respectively, as s is being treated like a pointer to an array of characters. That being the case, these names are exactly as idiomatic as i.
People are really complaining too much about having a buggy BIND 10 implementation. This is alpha software, with a long life cycle. This software will be expected to last years, so taking a few to make sure all the bugs are ironed out properly is not a big deal. As far as I can tell the development team is app
Re: (Score:2)
Looking at the posted code, it's pretty obvious that s is the input string being parsed and c is the next character being read. I would expect the rest of this function to contain a switch statement providing cases for the next character.
The point of longer variable names is to make the code easier to read. If someone with C experience can look at the code and know what it's doing, then this goal is achieved already.
If I were writing this code, then I'd probably use a parser generator like LEMON rather
Re: (Score:2)
If everyone subscribed to that logic, we would not have Postfix, Firefox, lighttpd, or any other number of important open source Internet software projects.
Re: (Score:2)
both Firefox and lighttpd started out as very small subsets of larger tools, focusing on small code and a lower number of features. From the sound of BIND 10, it sounds like they're shooting for the universe.
Also, Postfix wasn't a rewrite of existing code.
Re: (Score:2)
Re: (Score:2)
If you can't write a new program, practically free of buggy code, you certainly don't have the wherewithall to fix bugs in existing code...
Sendmail certainly came through it's rewrite vastly better than it was before. Other DNS programs, like MaraDNS, have come on the scene, and remain exploit-free for several years now.
Re: (Score:2)
Sure - new codebase, new bugs. A given. What isn't given is why the original developers thought this was a good idea? None of the answers to that question that I can think of are complimentary to what is now core infrastructure to the Internet. Was it not modularly written? Was it horribly insecure, and so badly so that it wasn't considered worth extending?
Bind is now in its tenth revision. You'd think by now that some sort of good, workable framework or design pattern would have evolved by now?
But clearly,
Why BIND 10 is a rewrite (Score?
I view the BIND 10 project in some ways as the DNS version of the Mozilla project - it is an ambitious rewrite, and will take a while to reach maturity. Luckily BIND 9 is still an excellent piece of software, so we have the luxury of enough time to get there.
BIND 9 is 10 years old, and was designed and implemented when the computing and Internet worlds were different than they are today. The architecture of BIND 9 - a monolithic, multithreaded program - does not lend itself well to today's DNS needs. So a new architecture is needed.
Originally we had planned on reusing a lot of the BIND 9 code. After all, like Joel says, it has been field-tested and is known to be high-quality in handling real-world DNS needs. However, the BIND 9 code has very, very high coupling. In order to make a small change or use an excerpt of code, you need to use the BIND 9 memory management system, and the BIND 9 task model, and the BIND 9 socket library, and so on. One of the reasons that BIND 9 needs to be rewritten is to make it possible to use the parts of the software you need to solve your problems without having to understand the entire system.
My theory is that the architectural problems would have been resolved over the decade of active use for BIND 9, as users submitted their patches and the developers periodically refactored the code. Unfortunately the BIND 9 project does not have an active community, either as developers or users. There are lots of people using BIND 9 (surveys put BIND 9 at about 80% of DNS servers on the Internet), but they have no group identity as BIND 9 users, and the direction and development of the software comes almost entirely from within ISC. This means it is an open source project that has resources limited in ways similar to proprietary software. If there was a BIND 9 community, then I think the software would have evolved with the times and a rewrite would not have been necessary.
For BIND 10, we want it to be an actual open source project, not just open source software. We have tried hard to be open and transparent about how BIND 10 is developed, and are trying to make it easy to participate in BIND 10. Hopefully this will be the last time a major rewrite is necessary, and the code base can evolve in any direction it needs to in the future, by maintaining a good connection with the people who actually use it.
[ disclaimer - I am the BIND 10 project manager ]
Re: (Score?
Yes, but where is Netscape today? Rwriting your code from scratch and fading into oblivion is hardly good business. Eventually the code came good but it was too late to save the company.
Re: (Score:2)
My theory is that the architectural problems would have been resolved over the decade of active use for BIND 9, as users submitted their patches and the developers periodically refactored the code.
I doubt that. Having seen open source communities in action, it is very rare that architectural problems get fixed by communities. This is because architecture-by-committee doesn't work. For sanity, you need one person to hold the core architecture in their head and describe it to everyone else. Once things get complicated enough, it is just about impossible for anyone to be that person and it is easier to throw it all away and start over. That's a shame, but how it goes.
Communities tend to build on top of
Re: (Score:2)
You mean like Windows ME? ^^
Re: (Score:2)
Actually I'm pretty sure BIND 9 was advertised as a near-complete rewrite too.
That said, I'm not touching either version ever again after using [cr.yp.to]
Re:Excellent (Score:4, Insightful)
nope, Microsoft has the audacity to claim their bloated buggy crap is suitable for general use.
Re: (Score:3, Funny)
You appear to be confused. DNS stands for Domain Name System, not Does Nothing Satisfactorily.
Re: (Score:2)
worst piece of widely-used network software ever made
uhh, sendmail?
Re: (Score:2)
Re: (Score:2)
I'm having trouble finding recent numbers, but Sendmail was at 42% and falling in 2001, and possibly at 27% in 2008. BIND had around 70% in 2004. So, yeah, BIND is used way more than Sendmail.
Re: (Score:3, Informative)
Why would they even release it if their ground-up rewrite is so pathetic?
'Cause it's open source software, emphasis on "open". It won't be done for another couple of years, but you can look at the work in progress. You can even help write it if you want.
Re: (Score:2)
Basically, someone once wrote a convincing text which says: Release Early, Release Often [catb.org].
It's a release in the sense that we wanted to make it widely available for people to see what ideas we are playing with, and to get feedback and participation.
[ disclaimer - I am the BIND 10 project manager ]
Re:Difficult to work with? (Score:5, Informative)
But what do you mean when you say "difficult to work with"? A code that is difficult to understand/maintain/evolve?
I sure hope not, as those are all specific design goals for the project (and they're among the failings of BIND 9 that made us want to redesign it in the first place). I meant "difficult to use" -- the user interface basically doesn't exist yet.
Re: (Score:2)
What is wrong with the BIND user interface?
You edit a few simple test based config files, is that really so hard?
Re: (Score:2)
The existing BIND 9 mechanism are not hard for your small domains that change rarely, but they don't work if you have tens or hundreds of thousands of domains that you manage, which change on a frequent basis. While this may not be interesting for you, there are many organizations for who this is a daily reality, and BIND 9 doesn't work well for them.
There are also organizations that have existing provisioning systems for large deployments, and would like their DNS to be better integrated... something today
Re: (Score:2)
Only problematic if you are doing it with AXFR. Nobody in their right mind uses AXFR, right?
So you're planning to design a piece of internet backbone s
Re: (Score:2)
I meant "difficult to use" -- the user interface basically doesn't exist yet.
You mean it doesn’t offer you a retarded point-and-click interface?
That’s not a bug. It’s a feature. So people like you don’t touch it.
BIND has a pleasing interface based on text files. Just like any other professional server software.
It doesn't look very understandable to me (Score:5, Interesting)
Well, I took a look at the code, and it's a typical "modern" C++ design. There's a gazillion classes in an "everything-is-an-object" hierarchy, using the latest and greatest "patterns" in superfluously complex ways. Doesn't anybody care about simplicity in design any more? Granted, BIND9 code was a mess, but this IMO is not much of an improvement. Ugly C++ is just as bad as ugly C. For example, why, for the love of God, would you replace a simple enum with a class with a member variable set to a constant value, and with each instance of the class created by a named constructor with a hardcoded constant in it? In src/lib/dns/message.h there are four of these. And what's with all the wrappers? I suppose it's their definition of "extensibility" -- a framework where everything is accessed through wrapped pimpls, so that anybody could change the implementation without changing binary compatibility with... oh, wait, it's an executable, so WTF? When you change something, you have to rebuild it anyway. So all you really get is ugly wrappers over ugly wrappers over actual code. Why do you need these wrappers anyway? What's wrong with boost's base64_encoder, for instance, that you need to wrap it with an encodeBase64 function, which instantiates a 20 line local BinaryNormalizer class in an anonymous namespace, the purpose of which, as far as I can see, is to pad the binary input with zeroes in case some evil application decides to read past the end of the vector. Oh, wait, this is only called from encodeBase64, and the read-past-the-end thing never happens. So WTF?
That's just four files I looked at, and already it's WTF piled on WTF. Maybe I ought to submit it to thedailywtf.com and see if it's accepted...
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
> If you could send critiques like that to the developer list instead of posting them to slashdot,
> it'd have a better chance of getting attention from the other developers
The problem is that I simply don't see what sort of "attention" I would want in such a situation. Yes, I could write up a mile-long list of complaints about the code, but it would not do any good because they would all add up to: "your code sucks; throw it all out and start over". It's not just one little thing or two little things,
Re:How (Score:5, Funny)
Is that pronounced? Does it rhyme with sinned or blind ?
Wined and dined.
Re: (Score:2)
Does it rhyme with sinned or blind ?
Wined and dined.
You winned!
+1 insightful (Score:2)
If they didn't get it right after nine versions then it's probably time to move on.
"...is known to be inefficient, difficult to work with, and riddled with bugs"
Make that "definitely".
And another +1 insightful (Score:2)
I mean for chrissake, how hard can it be to take a domain name and return an IP, and vice versa? It's a database with a coupla queries. Sheesh. And why churn out code that is full of security vulnerabilities? A security vulnerability is a shitty piece of code. Plain and simple.
Re:The unit tests are a bad joke - age and sex (Score:5, Informative)
One of the ideas of BIND 10 is to allow modules to be added to an already running system. Also, we want administrator tools to be able to ask the modules themselves what functionality is available. This allows relatively simple administrative tools to work with changing systems.
In order to do this, we need to have a mechanism for modules to report their capabilities. So, for example "I have a command called 'notify' which can be used to send a notify to my secondary servers, and it takes the parameter 'domain' which specifies the domain to send it from, and an optional parameter 'secondaries' which you can use to limit to a set of secondary servers".
The test code here exercises this generic capability.
[ disclaimer - I am the BIND 10 project manager ]
Re: (Score:2)
Does this mean you are attempting to create a smaller core and then make everything else a module? Something similar to the architecture of Apache HTTPD?
Re: (Score:2)
lmao, and the cause of most of the Internet's DNS issues in the last 10 years. Most of which were predicted and warned about by the very same DJB. This is software -- doing it right is valuable. Doing it wrong when you're shown how to do it right is stupid. | http://tech.slashdot.org/story/10/03/20/0123240/isc-releases-the-first-look-at-bind-10 | CC-MAIN-2014-15 | refinedweb | 5,561 | 72.26 |
Can I using Qt.createComponent() in WorkerScript function?
- Pong_Cpr.E.
I'm new in Qt quick app. (QML)
I wanna create dynamic image table(like in Samegame example) but it using thread to make real-time create by using workerScript.
How to using WorkerScript by Qt function can be passed between the new thread and the parent thread? or can import Qt func in javascript?
example
@//main.qml
import QtQuick 1.0
Rectangle {
width: 800; height: 600
WorkerScript { id: myWorker source: "script.js" } Item { id: container anchors.fill: parent } Component.onCompleted: myWorker.sendMessage({ 'container':container })
}@
@//script.js
WorkerScript.onMessage = function(message) {
// error occur in this below line with TypeError: Result of expression 'Qt.createComponent' [undefined] is not a function.
var component = Qt.createComponent("Imgbox.qml");
if (component.status == Component.Ready) {
var img = component.createObject(message.container);
}
}@
@//Imgbox.qml
import QtQuick 1.0
Image {
id: img
source: "./img.jpg"; clip: true
}@
Thanks.
No, the JavaScript code in a WorkerScript is run in a separate thread, and QML objects should only be created in the main thread, in the same thread as the QML engine.
Are you sure?
I need to do that, and I have watched some code with QML code in Javascript. But you need to configure .qmlproject file. Really I don't know how to do it. But I'm figuring out.
So, Is there someone who got this?
Thanks,
Fernando.
There is no QQmlEngine in the WorkerScript thread, only a JavaScript engine. Attempting to create a component in a thread which has no QQmlEngine cannot succeed. You can create QObjects in a WorkerScript thread, but not QML items (well, you technically could create a QQuickItem, but it wouldn't be usable).
You can interact with certain QML items from within functions in a WorkerScript (e.g., ListModel items are a good example) but some magic happens behind the scenes to allow that. | https://forum.qt.io/topic/11037/can-i-using-qt-createcomponent-in-workerscript-function-63 | CC-MAIN-2017-34 | refinedweb | 314 | 62.24 |
Okay, what I'm trying to do here is set up a some-what simple battle system for a text based game... I'm having some trouble getting it to work correctly though. I want the Player's attack and the Monster's defense to be random each time the program loops (which is also not working), but it keeps giving the same numbers.
I don't have much experience, so any help you can offer would be appreciated, thanks.
#include <iostream> #include <string> #include <cstdlib> #include <ctime> using namespace std; // Variables int charLevel, Floor; int Hp, Atk, Def; int monLevel, monHp, monAtk, monDef; string action, charName, monName; int damageDealt; // Prototypes void battlePhase(); int main() { // Defining the character cout << "What is your name? "; cin >> charName; cout << "\nWhat level are you? "; cin >> charLevel; cout << "\nWhat floor are you on? "; cin >> Floor; cout << "\nWhat monster do you wish to fight? "; cin >> monName; // Player Stats // charLevel defined earlier srand((unsigned)time(0)); Hp = (15 + ( charLevel * 5 )); Atk = (((rand() % 6) + 0) + (charLevel - 1)); Def = (((rand() % 4) + 0) + (charLevel - 1)); // Monster Stats monLevel = (1/2) * Floor + (charLevel - 2); if ( monLevel < 0 ) { monLevel == 1; } monHp = (10 + (monLevel * 2)); monAtk = (((rand() % 4) + 1) + (monLevel - 1)); monDef = (((rand() % 3) + 1) + (monLevel - 1)); while (Hp > 0 || monHp > 0) { battlePhase(); } return 0; } void battlePhase() { // Actual Battle cout << endl << "\nWhat do you want to do? "; cin >> action; if ( action == "Attack" || action == "attack" ) { cout << endl << charName << " attacked " << monName << " for " << Atk << endl; cout << monName << " defended " << charName << " for " << monDef << endl; damageDealt = Atk - monDef; if ( damageDealt < 0 ) { damageDealt = 0; } if ( damageDealt == 0 ) { cout << "\nNo damage!"; } if ( damageDealt > 0 ) { cout << monName << " took " << damageDealt << " damage!"; monHp = monHp - damageDealt; } } else { cout << "\nThat isn't an action! Try typing 'attack'."; battlePhase(); } } | https://www.daniweb.com/programming/software-development/threads/235472/problems-with-simple-c-battle-system-need-help | CC-MAIN-2017-09 | refinedweb | 282 | 67.28 |
Usually imports subroutines and variables from module into the current package. import is not a built-in, but an ordinary class method that may be inherited from
UNIVERSAL.
At compile time, requires the module and calls its unimport method on list. See use on the next page.
Designates the block as a package with a namespace. Without block, applies to the remainder of the current block or file. Sets package variable
$VERSION to version, if specified.
Requires Perl to be at least this version. version can be numeric like
5.005 or
5.008001, or a v-string like
v5.8.1.
If expr is numeric, behaves like require version. Otherwise expr must be the name of a file that is included from the Perl library. Does not include more than once, and yields a fatal error if the file does not evaluate to true. If expr is a bare word, assumes extension
.pm for the name of the file.
Usually cancels the effects of a previous import or use. Like import, unimport is not a built-in, but an ordinary class method.
See the section Pragmatic Modules below.
By convention, pragma names start with a lowercase letter.
At compile time, requires the module, optionally verifies the version, and calls its import method on list.
If list is
(), doesn’t call import.
Normally used to import a list of variables and subroutines from ...
No credit card required | https://www.oreilly.com/library/view/perl-pocket-reference/9781449311186/ch01s13.html | CC-MAIN-2019-30 | refinedweb | 237 | 70.29 |
pthread_create - thread creation
[THR]
#include <pthread.h>.
[XSI]
The alternate stack shall not be inherited.The alternate stack shall not be inherited.
The floating-point environment shall be inherited from the creating thread.
If pthread_create() fails, no new thread is created and the contents of the location referenced by thread are undefined.
[TCT]
If _POSIX_THREAD_CPUTIME is defined, the new thread shall have a CPU-time clock accessible, and the initial value of this clock shall be set to zero.If _POSIX_THREAD_CPUTIME is defined, the new thread shall have a CPU-time clock accessible, and the initial value of this clock shall be set to zero. permission to set the required scheduling parameters or scheduling policy.
The pthread_create() function may fail if:
- [EINVAL]
- The attributes specified by attr are invalid.
The pthread_create() function shall not return an error code of [EINTR]..
None.
fork(), pthread_exit(), pthread_join(), the Base Definitions volume of IEEE Std 1003.1-2001, <pthread.h>
First released in Issue 5. Included for alignment with the POSIX Threads Extension.
The pthread_create() function is marked as part of the Threads option.
The following new requirements on POSIX implementations derive from alignment with the Single UNIX Specification:
-
The [EPERM] mandatory error condition is added.
The thread CPU-time clock semantics are added for alignment with IEEE Std 1003.1d-1999.
The restrict keyword is added to the pthread_create() prototype for alignment with the ISO/IEC 9899:1999 standard.
The DESCRIPTION is updated to make it explicit that the floating-point environment is inherited from the creating thread.
IEEE Std 1003.1-2001/Cor 1-2002, item XSH/TC1/D6/44 is applied, adding text that the alternate stack is not inherited.
IEEE Std 1003.1-2001/Cor 2-2004, item XSH/TC2/D6/93 is applied, updating the ERRORS section to remove the mandatory [EINVAL] error (``The value specified by attr is invalid"), and adding the optional [EINVAL] error (``The attributes specified by attr are invalid").
IEEE Std 1003.1-2001/Cor 2-2004, item XSH/TC2/D6/94 is applied, adding the APPLICATION USAGE section. | http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_create.html | CC-MAIN-2015-35 | refinedweb | 344 | 58.48 |
The latest code on Github includes a new feature - Runtime Source Dependencies - along with a bugfix for Linux builds which resolves an issue with Runtime Includes.
A runtime source dependency is a source file which you need to have compiled when a given header is included by a runtime modifiable source file. Whilst having the source in a library and using the Runtime Link Library feature solves this problem to an extent, sometimes you don't want to create a whole library and thus the ability to compile in dependencies is a really useful feature. Using this feature simply requires the following lines in a header:
#include "RuntimeSourceDependency.h"
RUNTIME_COMPILER_SOURCEDEPENENCY;
If the header is called SomeFeature.h then the source file SomeFeature.cpp is compiled when any runtime modifiable code is changed which includes this header. Using the same filename as the header is required at the moment, in part due to issues with getting full paths from builds on Linux with GCC.
This brings us to the bug fix on Linux - although we'd implemented a system to get the full path for runtime modified source files from GCC, this hadn't been implemented for runtime includes and making the above changes caught this bug, so it's now been fixed. GCC only embeds the path passed in for __FILE__, which can be a relative path from the compile location, so we have to embed the compile path using the pre-processor define COMPILE_PATH="$(PWD)/"; Note that this isn't required if your build system uses full paths, which the cmake builds do, as does the internal GCC runtime compiler.
Note that I'm experiencing an odd issue on my cmake builds on Linux with makefiles - the Eclipse generated files work well however the makefile ones compile but the SimpleTest program fails to create a glfw window. | http://runtimecompiledcplusplus.blogspot.ca/2013/04/ | CC-MAIN-2018-05 | refinedweb | 308 | 52.53 |
I have two absolute filesystem paths (A and B), and I want to generate a third filesystem path that represents "A relative from B".
Use case:
boost::filesystem
complete
relative ~ relative => absolute
absolute ~ absolute => relative
Boost doesn't support this; it's an open issue — #1976 (Inverse function for complete) — that nevertheless doesn't seem to be getting much traction.
Here's a vaguely naive workaround that seems to do the trick (not sure whether it can be improved):
#include <boost/filesystem/path.hpp> #include <boost/filesystem/operations.hpp> #include <boost/filesystem/fstream.hpp> #include <stdexcept> /** * * * ." * * ALGORITHM: * iterate path and base * compare all elements so far of path and base * whilst they are the same, no write to output * when they change, or one runs out: * write to output, ../ times the number of remaining elements in base * write to output, the remaining elements in path */ boost::filesystem::path naive_uncomplete(boost::filesystem::path const p, boost::filesystem::path const base) { using boost::filesystem::path; using boost::filesystem::dot; using boost::filesystem::slash; if (p == base) return "./"; /*!! this breaks stuff if path is a filename rather than a directory, which it most likely is... but then base shouldn't be a filename so... */ boost::filesystem::path from_path, from_base, output; boost::filesystem::path::iterator path_it = p.begin(), path_end = p.end(); boost::filesystem::path::iterator base_it = base.begin(), base_end = base.end(); // check for emptiness if ((path_it == path_end) || (base_it == base_end)) throw std::runtime_error("path or base was empty; couldn't generate relative path"); #ifdef WIN32 // drive letters are different; don't generate a relative path if (*path_it != *base_it) return p; // now advance past drive letters; relative paths should only go up // to the root of the drive and not past it ++path_it, ++base_it; #endif // Cache system-dependent dot, double-dot and slash strings const std::string _dot = std::string(1, dot<path>::value); const std::string _dots = std::string(2, dot<path>::value); const std::string _sep = std::string(1, slash<path>::value); // iterate over path and base while (true) { // compare all elements so far of path and base to find greatest common root; // when elements of path and base differ, or run out: if ((path_it == path_end) || (base_it == base_end) || (*path_it != *base_it)) { // write to output, ../ times the number of remaining elements in base; // this is how far we've had to come down the tree from base to get to the common root for (; base_it != base_end; ++base_it) { if (*base_it == _dot) continue; else if (*base_it == _sep) continue; output /= "../"; } // write to output, the remaining elements in path; // this is the path relative from the common root boost::filesystem::path::iterator path_it_start = path_it; for (; path_it != path_end; ++path_it) { if (path_it != path_it_start) output /= "/"; if (*path_it == _dot) continue; if (*path_it == _sep) continue; output /= *path_it; } break; } // add directory level to both paths and continue iteration from_path /= path(*path_it); from_base /= path(*base_it); ++path_it, ++base_it; } return output; } | https://codedump.io/share/VfYefQ79P5Qv/1/get-relative-path-from-two-absolute-paths | CC-MAIN-2018-09 | refinedweb | 472 | 52.8 |
It's back! After a two month-ish extended holiday hiatus, I'm back with a new Wednesday wrap-up of all the Tarot Tuesday happenings!
I'd been thinking recently about starting these up again, then yesterday a new member of our Discord channel (waves hello to @viking-ventures) brought up the subject, so I figured the time must be right. We both agreed that it's a good idea to get things back on track before the crowds return (and they always come back when the market goes back up, which is now loooooong overdue to happen... 😊 )
For those new to this, each week this will be the place to look for news about the community, along with a curation type post of all the links that were resteemed for Tarot Tuesday.
Also, if you have any questions about the Steemit Tarot community, please feel free to shout out over at our Discord channel, or drop it down below in the comment section.
NEWS
While this isn't specifically Steemit Tarot news, it does relate to our community, because the news in question pertains to a group that has been extremely supportive of me, and by extension, this account. Yep, I'm talking about my @steemitbloggers (aka #powerhousecreatives) family. I think this video that @zord189 created earlier today demonstrates how amazing this community is (and be sure to keep an eye open for the word, "unique" because three guesses who's face it's plastered over...LOL!)...
That being said, I've already done some begging & pleading in a post or two, so I'll just say this little bit again - if you want to do @steemittarot a wicked solid by casting a vote for @steemitbloggers to win a 10K SP delegation, here's how you do it -
- Click the below link.
- Login to steemconnect
- Select 'steemitbloggers'
- And you're done!!
Also, if you're interested in joining our family, here's how to go about it -
So who wants join the POWER HOUSE CREATIVES?
Now on to the cartomancy coolness! These are the card-related posts resteemed by the account yesterday.
TAROT TUESDAY RECAP
Woodlands Livestream #30 Playing Woodlands! 1/27/2019 2PM Eastern
by @woodlands
Oracle Card Board Game Show...
Long Format Game Play & 7 Card Chakra Reading.
If you would like a 4 Card Reading
Donate $20.00 via the Paypal link in banner.
Sunday 1/27/2019 2PM
Tarot Tuesday reading for the week of January 29 2019
by @traciyork
Hello!
The Policy of Truth: Tarot and Clubbing in DC
by @roomerkind
Cagney's was the best place if you wanted to dance. It was a wonderful place, smack dab in the center of everything right on Dupont Circle. The owner of Cagney's called everyone Sport, and sported a very British mustache (he was British), and he ran a fun establishment. His name was Jack. Cagney's had curtained booths in the back where I did Tarot readings because Jack was cool about letting me do readings and charging for them, and so it was the perfect place for Tarot. This eventually led to me doing Tarot at The 15 Minute Club so named because everyone gets their 15 Minutes of Fame.
The devil card. XV
by @kerbcrawlerghost
This card has a great influence in all of us, its about addictions, lust, temptation, darkness, domination, bondage, false truth, fears... but at the same time its about physical pleasure, luxury and creativity...the path of self satisfaction at all costs. Very dangerous but fun... would you live your life deliciously?
Weekly Cards are Up. Pick one and have some fun! by Sunscape
by @sunscape
Sunscapes Weekly CardsPick a card from the following to receive this weeks timely advice that your spirit/soul guides would like to share with you.
- Choose the card that feels right to you, number 1, 2 or number 3.
- Choose before you scroll down to see what the cards have to say.
- Enjoy this magical and fun way to see what spirit has to say to you.
- This is purely for entertainment purposes only.
- If you are drawn to a second card as well, then that would represent an underlying energetic movement working in your field as well.
💖Tarot Tuesday 🌟Top of the Morning Tarot 🌟 😍🌎🌙🏆💰
by @kimmysomelove42
Today's picks are from my Native American Tarot Deck by Magda and J. A. Gonzalez.
You can pick card 1, 2, 3 or any combo of them..
Tarot Tuesday - Lenormand Introduction
by @viking-ventures
I've been reading cards for a number of years now. For me, it's a way of connecting to the ether around all of us and finding the answers that lurk just out of reach.
Tarot didn't really speak to me - although I still keep a deck, I don't work with it often because it requires more of an intuitive level that doesn't work well with me. I like structure and concreteness quite a bit with cards. That's why one of my favorite decks is actually my Mahjong Oracle. I will share that one sometime, but not yet.
Daily Tarot card : 27
by @evayer
Three of Wands - A great opportunity is coming your way and you know what you need to do because everything around your plans is going smoothly... (continued on post)
Congratulations @steemittarot!
Thank you, @steemitboard!
Downvoting a post can decrease pending rewards and make it less visible. Common reasons:
Submit
Thanks for the shout out, @steemittarot! 😉 💜
Downvoting a post can decrease pending rewards and make it less visible. Common reasons:
Submit | https://steemit.com/steemittarot/@steemittarot/january-30th-2019-steemit-tarot-weekly-wednesday-wrap-up | CC-MAIN-2019-22 | refinedweb | 931 | 71.55 |
Let's see a simple example where a smoothing parameter for a Bayesian classifier is select using the capabilities of the Sklearn library.
To begin we load one of the test datasets provided by sklearn (the same used here) and we hold 33% of the samples for the final evaluation:
from sklearn.datasets import load_digits data = load_digits() from sklearn.cross_validation import train_test_split X,X_test,y,y_test = train_test_split(data.data,data.target, test_size=.33, random_state=1899)Now, we import the classifier we want to use (a Bernoullian Naive Bayes in this case), specify a set of values for the parameter we want to choose and run a grid search:
from sklearn.naive_bayes import BernoulliNB # test the model for alpha = 0.1, 0.2, ..., 1.0 parameters = [{'alpha':np.linspace(0.1,1,10)}] from sklearn.grid_search import GridSearchCV clf = GridSearchCV(BernoulliNB(), parameters, cv=10, scoring='f1') clf.fit(X,y) # running the grid searchThe grid search has evaluated the classifier for each value specified for the parameter alpha using the CV. We can visualize the results as follows:
res = zip(*[(f1m, f1s.std(), p['alpha']) for p, f1m, f1s in clf.grid_scores_]) subplot(2,1,1) plot(res[2],res[0],'-o') subplot(2,1,2) plot(res[2],res[1],'-o') show()
The plots above show the average score (top) and the standard deviation of the score (bottom) for each values of alpha used. Looking at the graphs it seems plausible that a small alpha could be a good choice.
We can also see thet using the alpha value that gave us the best results on the the test set we selected at the beginning gives us results that are similar to the ones obtained during the CV stage:
from sklearn.metrics import f1_score print 'Best alpha in CV = %0.01f' % clf.best_params_['alpha'] final = f1_score(y_test,clf.best_estimator_.predict(X_test)) print 'F1-score on the final testset: %0.5f' % final
Best alpha in CV = 0.1 F1-score on the final testset: 0.85861
I am reading your post from the beginning, it was so interesting to read & I feel thanks to you for posting such a good blog, keep updates regularly.
Regards,
Python Training in Chennai|Python Taining | http://glowingpython.blogspot.it/2014/04/parameters-selection-with-cross.html | CC-MAIN-2017-30 | refinedweb | 370 | 57.37 |
GCJ: The GNU Compiler for Java
By Weiqi Gao, OCI Principal Software Engineer
January 2003
Introduction..
Getting GCJ
GNU/Linux
GCC is bundled with most Linux distributions. To install GCJ on a Red Hat Linux 8.0 system, simply run
rpm -i gcc-java-3.2-7.i386.rpm
rpm -i libgcj-3.2-7.i386.rpm
rpm -i libgcj-devel-3.2-7.i386.rpm
Windows
Windows users need to visit the MinGW Download page and get the following two files:
and install both into a common directory (
C:\MinGW will work fine.)
Other Platforms
Users of other platforms may get binary distributions of GCC from their favorite download centers.
A First Look at GCJ
For the rest of the article, I will use Red Hat Linux 8.0 as my platform. The differences between platforms are mostly limited to filename extensions.
GCJ has two parts: the compiler and the runtime support library. The compiler includes these commands:
gcj
the GCJ compiler
gcjh
generates header files from Java class files
jcf-dump
prints information about Java class files
jv-scan
prints information about Java source files
The runtime support includes:
libgcj.so.3
GCJ runtime suppport library
libgcj-3.2.jar
Java class files of core GCJ classes, automatically searched when compiling Java sources
and commands:
gij
an interpreter for Java bytecode
grepjar
a grep utility that works on jar files
jar
Java archive tool
jv-convert
convert file from one encoding to another
rmic
generate stubs for Remote Method Invocation
rmiregistry
remote object registry
Compiling and Running Java Programs!
- // A.java
- public class A {
- public void foo() {
- System.out.println("A.foo()");
- }
- }
- // B.java
- public class B {
- public static void main(String[] args) {
- new A().foo();
- }
- }
Shared Libraries
The
-shared switch tells gcj to link into a shared library. Assume the following sources:()
Debugging with GDB,
cont controls flow of execution,*>*)'
How Do I ...?
GCJ is sufficiently different from other Java implementations that we need to pay attention to how things are done with it.
Searching For Classes
- the CLASSPATH
Note that if a class is loaded from the CLASSPATH, it is interpreted.
Setting Properties on Invocation
Compiled Native Interface.
Invocation API++
Uses of GCJ.
Summary.
References
- [1] The GCJ home page
- [2] GCJ documentation page
- [3] The GCJ FAQ
- [4] The GCJ status page
- [5] The GCC home page
- [6] The GCC binaries link page
- [7] The Minimalist GNU for Windows (MinGW)
- [8] The MinGW download page
- [9] The Red Hat RHUG
- [10] The Eclipse project
- [11] Running Eclipse with GCJ
- [12] A Linux Journal article on GCJ by Per Bothner, the originator and architect of GCJ
Software Engineering Tech Trends is a monthly publication featuring emerging trends in software engineering. | https://objectcomputing.com/resources/publications/sett/january-2003-gcj-the-gnu-compiler-for-java | CC-MAIN-2020-10 | refinedweb | 454 | 53.81 |
An application can insert various modules into a stream to process and manipulate data that pass between a user process and the driver. In the example, the character conversion module receives a command and a corresponding string of characters from the user. All data passing through the module is inspected for instances of characters in this string. Whatever operation the command requires is performed on all characters that match the string.
#include <string.h> #include <fcntl.h> #include <stropts.h> #define BUFLEN 1024 /* * These definitions would typically be * found in a header file for the module */ #define XCASE 1 /* change alphabetic case of char */ #define DELETE 2 /* delete char */ #define DUPLICATE 3 /* duplicate char */ main() { char buf[BUFLEN]; int fd, count; struct strioctl strioctl;
The first step is to establish a stream to the communications driver and insert the character conversion module. This is accomplished by first opening (fd = open) then calling ioctl(2) to push the chconv module, as shown in the sequence of system calls in Example 2–2.
if ((fd = open("/dev/term/a", O_RDWR)) < 0) { perror("open failed"); exit(1); } if (ioctl(fd, I_PUSH, "chconv") < 0) { perror("ioctl I_PUSH failed"); exit(2); }
The I_PUSH ioctl(2) call directs the stream head to insert the character conversion module between the driver and the stream head. The example illustrates an important difference between STREAMS drivers and modules. Drivers are accessed through a node or nodes in the file system (in this case /dev/term/a) and are opened just like other devices. Modules, on the other hand, are not devices. Identify modules through a separate naming convention, and insert them into a stream using I_PUSH or autopush. Figure 2–1 shows creation of the stream.
Modules are stacked onto a stream and removed from a stream in last-in, first-out (LIFO) order. Therefore, if a second module is pushed onto this stream, it is inserted between the stream head and the character conversion module. | http://docs.oracle.com/cd/E19253-01/816-4855/appcomp2-7/index.html | CC-MAIN-2014-15 | refinedweb | 327 | 51.99 |
Keywords: TIPS, embedded option, inflation, deflation, term structure
Abstract:
The market for U. S. Treasury Inflation Protected Securities (TIPS) has experienced significant growth since its inception in 1997. As of May 2010, the face amount of outstanding TIPS was about $563 billion, which was roughly 8% of the size of the nominal U. S. Treasury market. The TIPS market has averaged about $47 billion in new issuances each year and has about $10.6 billion of average daily turnover.1 The main advantage of TIPS over nominal Treasuries is that an investor who holds TIPS is hedged against inflation risk.2 Although there are costs to issuing TIPS (Roush, 2008), there appears to be widespread agreement that the benefits of TIPS outweigh the costs. Campbell, Chan, and Viceira (2003), Kothari and Shanken (2004), Roll (2004), Mamun and Visaltanachoti (2006), Dudley, Roush, and Ezer (2009), Barnes, Bodie, Triest, and Wang (2010), Huang and Zhong (2011), and Bekaert and Wang (2010) all conclude that TIPS offer significant diversification and hedging benefits to risk averse investors.
The main contribution of our paper is to point out an informational benefit of TIPS that has been ignored in the literature. Specifically, we uncover the informational content of the embedded deflation option in TIPS. We develop a model to value the embedded option explicitly and we show that the time variation in the embedded option's value is correlated with periods of deflationary expectations. We also show that the embedded option return is economically important and statistically significant for explaining future inflation, even in the presence of common inflation variables such as the yield spread, the return on gold, the return on crude oil, and lagged inflation. We argue that our results should be useful to anyone who is interested in assessing inflationary expectations.
At the maturity date of a TIPS, the TIPS owner receives the greater of the original principal or the inflation adjusted principal. This contractual feature is an embedded put option since a TIPS investor can force the U.S. Treasury to redeem the TIPS at par if the cumulative inflation over the life of the TIPS is negative (i.e., deflation). The first TIPS auction in 1997 was for a 10-year note. Prior to the auction, Roll (1996) dismissed the importance of the embedded option since the United States had not experienced a decade of deflation for more than 100 years. Our paper directly examines the embedded deflation option in TIPS. Using a sample of 10-year TIPS from 1997 to 2010, we estimate that the value of the embedded option does not exceed $0.0615 per $100 principal amount. If we amortize $0.0615 over the 10-year life of a TIPS, the impact on the TIPS yield is very small, which appears to justify Roll's (1996) comment. However, when we add 5-year TIPS to our sample, we find that the estimated embedded option value is much larger, up to $1.4447 per $100 principal amount. If we amortize $1.4447 over the 5-year life of a TIPS, the impact on the yield is about 29 basis points. Furthermore, we find significant time variation in the embedded option values for both 5-year and 10-year TIPS. We show that this time variation is useful for explaining future inflation, even in the presence of widely used inflation variables such as the return on gold, lagged inflation, the return on crude oil, and the yield spread between nominal Treasuries and TIPS. We call this the informational content of the embedded option in TIPS.
To value the embedded option in TIPS, we use a continuous-time term structure model that has two factors, the nominal interest rate and the inflation rate. Since our two factors are jointly Gaussian, we obtain a closed-form solution for the price of a TIPS. Using our closed-form solution, we decompose the price of each TIPS into two parts, a part that corresponds to the embedded option value and a part that corresponds to the inflation-adjusted coupons and the inflation-adjusted principal. This makes our approach different from what is found in Sun (1992), Bakshi and Chen (1996), Jarrow and Yildirim (2003), Buraschi and Jiltsov (2005), Lioui and Poncet (2005), Chen, Liu, and Cheng (2010), Ang, Bekaert, and Wei (2008), and Haubrich, Pennacchi, and Ritchken (2012). These papers show how to value real bonds, but they ignore the embedded deflation option that is found in TIPS. To the best of our knowledge, we are the first to price the embedded option in TIPS and to use its time variation to explain future inflation. Christensen, Lopez, and Rudebusch (2012) estimate the value of the embedded option in TIPS, but unlike our paper they do not use the time variation in the embedded option value to explain future inflation. In addition, Kitsul and Wright (2012) study options-implied inflation probabilities, but they use CPI caps and floors instead of TIPS to fit their model.
When we fit our model to the data, we find that prior to 2002 the embedded option values are close to zero. From 2002 through 2004, the option values have considerable time variation. The overall trend during this time period is increasing option values followed by decreasing option values, with a peak around November 2003. From 2005 through the first half of 2008, there is some variation in option values, but mostly the values are close to zero. Finally, during the second half of 2008 and all of 2009, there is a surge in option values, which outstrips the previous peak value from 2003. We argue that the time variation in option values is capturing the deflation scare period of 2003-2004 and the deflationary expectations that were associated with the financial crisis in 2008-2009. Our results are consistent with those in Campbell, Shiller, and Viceira (2009), Wright (2009), and Christensen, Lopez, and Rudebusch (2010). However, our approach is different since we explicitly value the embedded option in TIPS and we quantify its time variation.
Although our estimated option values for 10-year TIPS are small economically, the option returns are very large. When we stack our option returns into a vector and perform a Wald test, we strongly reject the null hypothesis that the returns are jointly equal to zero (
-value is less than 0.0001). When we perform a similar analysis for 5-year TIPS, we not only reject the null hypothesis that the option returns are jointly equal to zero, but we also reject the null
hypothesis that the option values are jointly equal to zero (both
-values are less than 0.0001). This is consistent with our earlier statement that the embedded option in 5-year TIPS is worth
more than its counterpart in 10-year TIPS. We find similar results when we exclude the period of the financial crisis. Thus our results are not being driven solely by the events of 2008-2009.
To quantify the informational content of the embedded option in TIPS, we construct several explanatory variables that we use in a regression analysis. We use our estimated option values from 5-year and 10-year TIPS to construct two value-weighted indices, one for the embedded option price level and one for the embedded option return. We show that the coefficient on the embedded option return index is statistically significant for explaining the one-month ahead inflation rate (Table 6). The embedded option return index remains significant even when we include control variables such as lagged inflation, the return on gold, the VIX index, and the yield spread. By itself, the embedded option return index explains up to 25% of the variation in the one-month ahead inflation rate (Table 6). When we include our control variables, this number increases to slightly more than 35%. Using our regression point estimate for 10-year TIPS, we find that a 100% embedded option return (which is less than one standard deviation) is consistent with a 0.52% decrease in the one-month ahead annualized inflation rate. Thus our results are economically significant as well as statistically significant. For completeness, we also analyze the significance of our indices for explaining the one-year ahead inflation rate and the out-of-sample inflation rate. For almost all of these regressions, one or both of our embedded option indices is significant while more common variables, such as the return on gold and the yield spread, are insignificant. This is true both in-sample (Table 6) and out-of-sample (Table 12).
We verify our results by performing several robustness checks. First, we argue that liquidity is not a likely explanation for our results (see section 4.6.1). To investigate this, we eliminate the off-the-run securities from our sample (see section 4.6.2) and we re-construct our embedded option indices using only the on-the-run securities, which are the most liquid TIPS. We show that all of our previous regression results continue to hold with on-the-run TIPS (Table 7). Thus our results are not being driven by possible
illiquidity that surrounds off-the-run TIPS (see Fleming and Krishnan, 2012). Second, we alter the weighting scheme that we use to construct the embedded option indices. Instead of using value weights, we construct the indices with weights that favor shorter-term options, longer-term options,
options that are nearer-the-money, and options that are further out-of-the-money. Upon doing this for both 5-year TIPS (Table 8) and 10-year TIPS (Table 9), we find that our results are robust to different weighting schemes. Third, we construct a new explanatory variable (
, option return fraction) that captures the fraction of embedded options in each month that has a positive return. This variable is less sensitive to model specification since any other pricing model
that produces the same sign for the embedded option returns will produce the same explanatory variable. We find that
is statistically significant for the full sample of TIPS and for the
on-the-run TIPS, for both the one-month ahead and the one-year ahead inflation rate (Table 10). Thus even if we ignore the magnitude of the option returns and focus solely on the sign of those returns, we find that the embedded option in TIPS contains useful information for explaining the future
inflation rate. Lastly, we examine the ability of our embedded option indices to explain the inflation rate in the presence of other control variables (Table 11), and we use a rolling window empirical technique to examine the out-of-sample performance of our variables (Table 12). After conducting
all of these robustness checks, we find that our main conclusion is not altered - the embedded option in TIPS contains relevant information for explaining the future inflation rate, out to a horizon of at least one year.
Explaining future inflation has received a considerable amount of attention in the literature. Many explanatory variables for future inflation have been proposed, such as the interest rate level and lagged inflation (Fama and Gibbons, 1984), the unemployment rate (Stock and Watson, 1999), the money supply (Stock and Watson, 1999; Stockton and Glassman, 1987), inflation surveys (Mehra, 2002; Ang, Bekaert, and Wei, 2007; Chernov and Mueller, 2012; Chun, 2011), the price of gold (Bekaert and Wang, 2010), and the spread between nominal Treasury yields and TIPS yields (Stock and Watson, 1999; Shen and Corning, 2001; Roll, 2004; Christensen, Lopez, and Rudebusch, 2010; Gürkaynak, Sack, and Wright, 2010; D'Amico, Kim, and Wei, 2010; Pflueger and Viceira, 2011). Our paper is different since we focus on the embedded option in TIPS rather than on traditional variables such as the return on gold or the yield spread. However, we include some of these traditional variables as control variables in our regressions. This allows us to analyze the marginal contribution of the variables.
The remainder of our paper is organized as follows. Section 2 introduces our model and derives a closed form solution for TIPS and for nominal Treasury securities. Section 3 describes the data. Section 4 presents our empirical methodology, our model estimation results, and our regression results. We focus on in-sample results, out-of-sample results, and robustness checks. Section 5 gives our concluding remarks. The technical details of our pricing model can be found in the appendix.
We use a continuous-time model in which bond prices are driven by two state variables, the nominal interest rate
and the inflation rate
. The evolution of
and
is described by the Gaussian system of stochastic processes
where
is a risk neutral probability measure,
and
are independent Brownian motions under
, and
,
,
,
,
,
,
,
,
and
are parameters. Ang and Piazzesi (2003) show that the inflation rate impacts the mean of the short term nominal interest rate. We use their result as motivation for including the
parameters
and
in equations ( 1)-(2). This makes each of the processes in (1)-(2) more complex than the Vasicek (1977) process, but it allows for a richer set of dynamics between
and
.
In our empirical estimation below, we use both TIPS and nominal Treasury Notes (T-Notes). Section 2.1 describes our pricing model for TIPS, while section 2.2 describes our pricing model for nominal T-Notes. By including nominal T-Notes in our analysis, we are able to increase the overall size of our sample. As a side benefit, we also avoid overfitting the TIPS market, which may help to control for the issues of TIPS mispricing and illiquidity that are raised by Fleckenstein, Longstaff, and Lustig (2010) and Fleming and Krishnan (2012). We discuss liquidity in more detail later in sections 4.6.1-4.6.2.
Both of our pricing models are derived under the
probability measure, which eliminates the need to be specific about the functional form of the risk premia. For example, the inflation
risk premium may be time varying, as shown in Evans (1998) and Grishchenko and Huang (2012), for the UK and U.S. Treasury markets, respectively. Furthermore, if the risk premia happen to be affine functions of
and
, then (1)-(2) are consistent with Barr and Campbell (1997), who show that the expected real
interest rate in the UK is highly variable at short horizons, but it is comparatively stable at long horizons. Our model can support many functional forms for the risk premia since we can always describe the evolution of
and
under the true probability measure and then use a prudent change of measure to arrive at (1)-(2). Thus the risk premia are subsumed by
.
The advantage of specifying the model under
is that the number of parameters is reduced, which makes our model parsimonious. Since the volatility matrix in (1)-(2) is lower triangular, as in Chun (2011), our model has only 9 parameters. In contrast, Sun (1992, p. 603) uses a model with 13 parameters, Lioui and Poncet (2005, pp. 1269-1270) use 17 parameters, and Christensen, Lopez, and Rudebusch
(2010, Table 7) use 28 to 40 parameters. Given the limited data for TIPS, it is important that we keep the number of parameters as small as possible. To avoid overfitting our model to the TIPS market, we use matching nominal T-Notes in our sample, as mentioned earlier. We also perform several
robustness checks, including the construction of an alternative explanatory variable (
, option return fraction) that is less sensitive to model specification. We describe these robustness
checks in more detail later.
Consider a TIPS that is issued at time
and matures at time
. We want to
determine the price
of the TIPS at time
, where
. The principal amount of the TIPS is
and the coupon rate is
. Suppose there are
coupons yet to be paid, where the coupon payments occur at
. If we let
, we can write the TIPS price as
where
denotes expectation at time
under
. The right-hand side of (3) has three terms. The first term is the value of the inflation-adjusted coupon payments, the second term is the value of the inflation-adjusted
principal, and the third term is the value of the embedded option. The inflation adjustment in (3) is captured by the exponential term
for
. In our empirical specification, we use the U.S. Treasury's CPI index ratio to capture the known part of the inflation adjustment.3 The unknown inflation adjustment depends on the stochastic process in (2).
Using (1)-(2), the random variables
and
for
have a joint Gaussian distribution. Thus we can evaluate the expectation in ( 3) to get a closed-form solution for the TIPS price. Our solution
depends on the moments
,
,
,
, and
for
, which are also available in closed-form. We give details in Appendix A.
Consider a nominal T-Note that is issued at time
and matures at time
. We want
to determine the T-Note's price
at time
, where
. The principal amount is
, the coupon rate is
, and there are
coupon payments yet to be paid, at times
. As before, we let
and thus we can write the T-Note's price as
The price in (5) contains two terms. The first term is the value of the nominal coupon payments, while the second term is the value of the principal amount. Since we are pricing a nominal T-Note, there is no explicit inflation adjustment in (5). However, since
in (1) may not be zero, the price
depends not only on
and the parameters in (1), but also on
and the parameters in (2). This sets our model apart from Vasicek (1977).
Like equation (3), our closed-form solution for equation (5) depends on the moments
,
,
,
, and
for
. We give details in Appendix B.
To estimate our model, we construct a monthly time series for the nominal interest rate and for the inflation rate. We obtain our data from the Federal Reserve Economic Database (FRED) at the Federal Reserve Bank of St. Louis. We use the 3-month Treasury Bill rate
as a proxy for the nominal interest rate. We start with daily observations of the 3-month Treasury Bill rate and we extract the month-end observations to get a monthly time series. Other short-term Treasury Bill rates give similar results. To construct a monthly time series for the inflation rate,
we use the non-seasonally adjusted Consumer Price Index for All Urban Consumers (CPI-U), which is released monthly by the U.S. Bureau of Labor Statistics. This is the same index that is used for inflation adjustments to TIPS. We let
denote the value of the CPI-U that corresponds to month
. We define the annualized inflation rate for month
as
, where 12 is the annualization factor. Thus the inflation rate is the annualized log change in the price level, which is consistent
with (4).
We use Datastream to obtain daily price data for all of the 5-year and 10-year TIPS that have been auctioned by the U.S. Treasury through May 2010. We use this daily data to construct the gross market price for each available TIPS on the last day of each month. We use 10-year TIPS since it gives us the longest possible sample period, from January 1997 (the first ever TIPS auction) through May 2010. However, we include 5-year TIPS since the embedded option values for these TIPS are larger due to the lower cumulative inflation. Each TIPS in Datastream is identified by its International Securities Identification Number (ISIN). To verify the ISIN, we match it with the corresponding CUSIP in Treasury Direct. We use abbreviations to simplify the exposition. For example, the ISIN for the 10-year TIPS that was auctioned in January 1997 is US9128272M3. Since US9128 is common to all of the TIPS, we drop these characters and use the abbreviation 272M3. For each TIPS, we obtain from Datastream the clean price, the settlement date, the coupon rate, the issue date, and the maturity date. At the end of each month, we identify the previous and the next coupon dates, and we count the number of coupons remaining. We construct the gross market price of a TIPS as
In (6), the accrued interest is calculated using the coupon rate, the settlement date, the previous coupon date, and the next coupon date, while the index ratio is the CPI-U inflation adjustment term that is reported on Treasury Direct.
In addition to our sample of 5-year and 10-year TIPS, our estimation uses data on 5-year and 10-year nominal T-Notes. There are 21 ten-year TIPS and 7 five-year TIPS in our sample. For each TIPS, we search for a nominal T-Note with approximately the same issue and maturity dates. We are able to match all but one of our TIPS (the exception is January 1999, for which we cannot identify a matching 10-year nominal T-Note). Thus our sample includes 21 ten-year TIPS and 7 five-year TIPS, plus 20 ten-year matching nominal T-Notes and 7 five-year matching nominal T-Notes. For the matching nominal T-Notes, we obtain our data from Datastream.
We include nominal T-Notes in our sample for several reasons. First, nominal Treasury securities are an important input to any term structure model that is used to assess inflationary expectations. For example, see Campbell and Viceira (2001), Brennan and Xia (2002), Ang and Piazzesi (2003), Sangvinatsos and Wachter (2005), and Kim (2009), to name just a few. Second, by including nominal T-Notes in our estimation, we effectively double our sample size in each month, which helps to estimate the model parameters more precisely. Lastly, since the TIPS market is only about 8% of the size of the nominal Treasury market, we avoid overfitting the TIPS market by including nominal Treasury securities. This helps to control for the trading differences between TIPS and nominal Treasuries (Fleming and Krishnan, 2012) and it helps to address, but does not completely resolve, the issue of relative overpricing in the TIPS market (Fleckenstein, Longstaff, and Lustig, 2010). By including nominal Treasuries in our sample, it is less likely that our fitted parameters are capturing TIPS market imperfections that are present in the data.
To summarize, our data set includes monthly interest rates, monthly inflation rates, and monthly gross prices for TIPS and matching nominal T-Notes. Table 1 shows the TIPS and the nominal T-Notes that are included in our sample. There are 1,405 monthly observations for 10-year TIPS (Panel A), 1,268 monthly observations for 10-year nominal T-Notes (Panel B), 256 monthly observations for 5-year TIPS (Panel C), and 250 monthly observations for 5-year nominal T-Notes (Panel D).
Our empirical approach involves several steps. First, we estimate the parameters in (1)-(2) by minimizing the sum of the squared pricing errors for the full sample of 5-year and 10-year TIPS and matching nominal T-Notes (see Table 1). For completeness, we solve similar minimization problems using only 10-year TIPS and matching T-Notes (Panels A and B of Table 1) and using only 5-year TIPS and matching T-Notes (Panels C and D of Table 1). We report results for all three estimations. Second, we use our estimated parameters and our formula for the TIPS embedded option (see equations (42)-(44) in Appendix A) to calculate a set of times series of embedded option values for each TIPS in our sample. We use these time series to construct value-weighted embedded option price indices and value-weighted embedded option return indices. Our option indices, along with various controls, are then used as explanatory variables for in-sample and out-of-sample inflation regressions. In almost all of our regressions, the embedded option return index is statistically significant for explaining the one-month ahead and the one-year ahead inflation rate. We also consider several robustness checks, such as alternative weighting schemes, alternative variable specifications, and additional control variables.
We estimate the parameters in (1)-(2) by minimizing the sum of the squared errors between our model prices and the true market prices. A similar technique is used in Bakshi, Cao, and Chen (1997) and Huang and Wu (2004). Specifically, we solve the problem
where
is the total number of months in the sample,
is the number of TIPS
in the sample for month
,
is the number of nominal T-Notes in the sample
for month
,
is the gross market price of the
th TIPS for month
,
is the gross market price of the
th nominal T-Note for month
,
is the model price of the
th TIPS for month
, and
is the model price of the
th nominal T-Note for month
.
The model prices
and
are given by (3)
and (5), respectively, and the parameter vector is
.
To solve (7), we use Newton's method in the nonlinear least squares (NLIN) routine in SAS. Since (7) is sensitive to the choice of initial conditions, we double check our results by re-solving the problem using the Marquardt method, which is known to be less sensitive to the choice of initial values. In particular, we use a two-step procedure, first using the Marquardt method and then polishing the estimated parameter values using Newton's method. This robustness check provides the same result as using Newton's method alone. For our reported estimates, we verify a global minimum for (7) by checking that the first-order derivatives are zero and all eigenvalues of the Hessian are positive, which implies a positive definite Hessian.
Table 2 summarizes our estimation results. When we estimate our model using all of the TIPS and matching T-Notes from Table 1, we find that the mean absolute pricing error (
) is
$2.717 per $100 face amount. Using only the 10-year TIPS and matching T-Notes, the
increases slightly to $2.953 per $100 face amount. Our mean pricing errors are higher than what is
reported in Chen, Liu, and Cheng (2010), but our sample period is longer than theirs and our model is fit to a wider variation in economic conditions. Our mean absolute yield error (
) is
slightly more than 50 basis points, and there is little variation across the three estimations in Table 2. Our
is comparable in magnitude to the RMSE of 74 basis points reported by Chen,
Liu, and Cheng (2010, p. 715). More broadly, our pricing errors are similar to other models in the literature. If we amortize our
of $2.717 over a ten year period using semi-annual
compounding, we get about 28 basis points per annum. This is similar to the average pricing errors reported in Dai and Singleton (2000, Table IV) for the swaps market using their
model. Our errors appear to be reasonable given that we are using a parsimonious model that is fit simultaneously to two markets, TIPS and nominal T-Notes.
We also estimated our model using only 5-year TIPS and 5-year matching nominal T-Notes. As shown in Table 1, the number of 5-year TIPS during our sample period is one-third the number of 10-year TIPS. Furthermore, we see in Table 2 that the number of monthly observations for 5-year TIPS and
matching nominal T-Notes is about one-fifth the number of monthly observations for 10-year TIPS and matching nominal T-Notes. There is also a gap in the data using 5-year TIPS since the 5-year TIPS that was issued in July 1997 matured in July 2002, and the next auction of 5-year TIPS occurred in
October 2004. However, in spite of these issues, we went ahead and estimated our model using the available monthly 5-year TIPS data from July 1997 - May 2010. As shown in Table 2, the
from this estimation is $1.416 per $100 face amount. Although this is lower than the
from the other two estimations, it should be interpreted with caution since there are only seven
5-year TIPS in our sample.
To check the economics of our estimations, we compute the long-run means of
and
under
, which we denote by
and
, respectively. In Appendix C we show how to derive the formulas for
and
. As Table 2 shows, our estimates of
and
are economically reasonable and are statistically different than zero. For example, using all of the TIPS and matching T-Notes from Table 1, we estimate the long-run mean interest
rate is 5.37% and the long-run mean inflation rate is 2.32%. This implies a long-run mean real rate of 3.05%.
The far right column of Table 2 shows the range of values for the embedded deflation option in TIPS. For all three estimations, the minimum estimated option value is close to zero. For the estimation that uses 10-year TIPS and matching nominal T-Notes, the maximum option value across all TIPS-month observations is $0.0615 per $100 face amount. If we amortize $0.0615 using semi-annual compounding over the 10-year life of a TIPS, we get about 0.6 basis points. Thus on average, ignoring the embedded option on any given trading day has very little impact on the yield of a 10-year TIPS. This may help to explain why most of the existing TIPS literature does not focus on the embedded option.
For the estimation using 5-year TIPS and matching nominal T-Notes, the maximum option value across all TIPS-month observations is $1.3134 per $100 face amount. This is much higher than the $0.0615 per $100 principal amount that we found for 10-year TIPS, but it makes sense because most of the 5-year TIPS were outstanding during the deflationary period in the second half of 2008. In addition, the probability of experiencing cumulative deflation over a 5-year period is higher than the probability of experiencing cumulative deflation over a 10-year period. At the margin, this may be contributing to a higher embedded option value in 5-year TIPS relative to 10-year TIPS. If we amortize $1.3134 over the life of a 5-year TIPS, we find that the embedded option value accounts for up to 27 basis points of the TIPS yield. This is comparable to what is reported in Christensen, Lopez, and Rudebusch (2012), who find that the average value of the TIPS embedded option during 2009 is about 41 basis points.
We find that the estimated value of the embedded deflation option exhibits substantial time variation. Panel A of Figure 1 shows the time series of estimated option values for all 21 ten-year TIPS in our sample. We find a large spike in option values at the end of 2008 and the beginning of 2009. This corresponds to the period of the financial crisis, which was marked by deflationary expectations and negative changes in the CPI index for the second half of 2008. We also find a smaller spike in option values during the 2003-2004 period, which was also marked by deflationary pressure (Ip, 2004). The variation during 2003-2004 is difficult to see in Panel A, but it is more evident in Panel C, which is a zoomed version of Panel A. During most other time periods, the embedded option values are closer to zero. This is intuitive since if cumulative inflation is high, the embedded option will be further out-of-the-money and thus its value should be low.
We find similar results when we estimate our model using the combined sample of 5-year and 10-year TIPS and matching nominal T-Notes. Panel A of Figure 2 shows the estimated option values for all 7 five-year TIPS in our sample, while Panel B of Figure 2 shows the estimated option values for all 21 ten-year TIPS.4 We again find a large spike in option values during the financial crisis (both Panels A and B) and we also find a second spike during the 2003-2004 period (Panel B). Thus including 5-year TIPS does not alter the time variation in the option values.
Our results in Figures 1 and 2 are consistent with the existing literature. Wright (2009), Christensen (2009), and Christensen, Lopez, and Rudebusch (2011) use TIPS to infer the probability of deflation. During the later part of 2008, Wright (2009, Figure 2) shows that the probability of deflation was greater than one-half, which is confirmed by the results in Christensen (2009, Figure 3). Christensen, Lopez, and Rudebusch (2011, Figure 1) provide an estimate of the one-year ahead deflation probability from 1997-2010. Their Figure 1 is strikingly similar to our Figure 1, even though the two figures illustrate different quantities. In particular, their Figure 1 shows the probability that the price level will decrease, while our Figure 1 shows the value of the embedded option in TIPS. We return to this point later in section 4.6.1.
We use our estimated option values to calculate a time series of option returns for each TIPS in our sample. Although the estimated option values are sometimes small (see Figures 1 and 2), the option returns are economically larger. For example, in Panel A of Figure 1, when the embedded option
value increases from $0.01 to $0.06 during the 2008-2009 period, the return is 500%. To test the joint statistical significance of the estimated option values and the option returns, we perform several Wald tests, which are shown in Table 3. Panel A (Panel B) of Table 3 shows the joint test results
for the option values (returns). In Panel A, for the sample of 10-year TIPS, we cannot reject the null hypothesis that the option values are jointly equal to zero. However, for the 5-year TIPS and for the combined sample of 5-year and 10-year TIPS, we strongly reject the null hypothesis that the
option values are jointly zero (the
-values are less than 0.0001). Evidently, these results are being driven by the larger estimated embedded option values that are contained in 5-year TIPS.
In Panel B of Table 3, we strongly reject the null hypothesis that the option returns are jointly equal to zero (all of the
-values are less than 0.0001). This is true for 5-year TIPS, for
10-year TIPS, and for the combined sample of 5-year and 10-year TIPS.
To avoid numerical issues with calculating our option return test statistics in Panel B, we eliminate estimated option values that produce abnormally high returns. These abnormal returns originate in months where the beginning and ending option values have different orders of magnitude, yet both
values are small economically. For example, if an option value moves from
to
, the monthly return is very large, but both of the option values are approximately zero. To control for this effect, we discard option values that are smaller than
. We tried other cutoff values, such as
and
,
but it does not impact our tests in Table 3, nor does it impact our regression results that are shown below in Sections 4.6-4.8. We use a cutoff of
since it maintains a relatively large sample size while avoiding numerical issues with calculating the option return test statistics. Removing the smallest option values from our sample has the effect of trimming outlier returns. Thus our option return tests in Panel B of
Table 3 are not driven by outliers.
We use our estimated option values and option returns to construct explanatory variables for our regression analysis. For the
th TIPS in month
, let
denote the estimated value of the embedded option. Thus the option
return in month
for the
th TIPS is
. For each of our three samples, we construct a value-weighted index for the embedded option price level and a value-weighted index for the embedded option return.
The weight
for the
th TIPS in month
is
, where
is the number of TIPS in
the sample for month
. Note that we use the lagged value
when constructing
the weight
for month
. Thus the value-weighted embedded option price index in
month
is
Panels B and D of Figure 1 show (8) when the model is estimated using 10-year TIPS and matching nominal T-Notes. Likewise, Panel C of Figure 2 shows (8) for 5-year and 10-year TIPS when the model is estimated using all of the bonds in Table
1. We also construct a value-weighted embedded option return index, which for month
is given by
For robustness, we also checked an alternative definition of the option return index, namely
. Under this alternative definition we found no material impact on our empirical results.
We examine the informational content of our variables
and
for explaining the future inflation rate. Suppose
is the value of the CPI-U that corresponds to month
. We define the inflation rate from month
to month
as
where
is an annualization factor. Substituting
in (10) gives the one-month ahead inflation rate, while substituting
in (10) gives the one-year ahead inflation rate. We use ( 10) as the
dependent variable in our regression analysis. In addition to
and
in (8)-(9), our explanatory variables include: (i) the yield spread
, which is the difference between the average yields of the
nominal T-Notes and the TIPS in our sample; (ii) the one-month lagged inflation rate,
; (iii) the return on gold,
, which we calculate using gold prices from the London Bullion Market Association; (iv) the return on VIX,
, which is the return on the S&P 500 implied volatility index; and (v) the value-weighted return on the TIPS in our sample,
.
We include
as an explanatory variable since it is a common measure of inflation expectations. Hunter and Simon (2005) have also shown that the yield spread is correlated with TIPS
returns. We include
since the fluctuation in the price of gold has long been associated with inflationary expectations. Bekaert and Wang (2010) show that the inflation beta for gold in North
America is about 1.45. We include
since its time variation captures the uncertainty associated with macroeconomic activity, as described in Bloom (2009) and David and Veronesi (2011). Lastly, we include
as a control variable to see if the TIPS total return has incremental explanatory power beyond that of the embedded option and our other variables. This allows us to compare
the informational content of the embedded option, which is the focus of our study, to that of the TIPS itself, which is examined by Chu, Pittman, and Chen (2007), D'Amico, Kim, and Wei (2010), and Chu, Pittman, and Yu (2011).
Table 4 shows summary statistics for our explanatory variables. For our sample of 5-year TIPS and matching nominal T-Notes, the mean of the embedded option return index is about 0.474, which is a 47.4% monthly average return. The standard deviation of the 5-year embedded option return index is about 1.90, or 190%. For our sample of 10-year TIPS and matching nominal T-Notes, the mean and standard deviation of the option return index are about 135% and 451%, respectively. The fact that the standard deviations are big coincides with our earlier statement that there is substantial time variation in the option returns. This is also apparent by examining the minimum and maximum values for the option return indices, as shown in the last two columns of Table 4.
Table 5 shows the sample correlation matrix for our explanatory variables. Panel A (Panel B) shows the matrix for 5-year (10-year) TIPS, while Panel C shows the matrix for the combined sample of 5-year and 10-year TIPS. The number in parentheses below each correlation is the
-value for a test of the null hypothesis that the correlation coefficient is equal to zero. If we examine the column for the option return index, we see that the return index in all three panels has a
negative sample correlation with the yield spread, the return on gold, and lagged inflation. This is intuitive since the option return index is more likely to be high (low) during periods of deflationary (inflationary) expectations. We also see that the correlation between the option return index
and the TIPS total return is negative. During periods of deflationary expectations, we would expect investors to shun TIPS in favor of nominal bonds. Thus on average, the TIPS total return is low when the embedded option index return is high. Upon examining the
-values, we cannot reject the null hypothesis that the sample correlation between the yield spread and the option return index is zero. A similar statement holds for the VIX return. For the return on gold, lagged inflation, and
the TIPS total return, the
-values are small and we reject the null that the correlations are zero. However, even for these variables, the magnitude of the coefficients is relatively small.
The numbers vary across Panels A, B, and C, but the gold return and the TIPS total return each have a correlation coefficient with the option return of about -0.25, while lagged inflation has
a correlation coefficient with the option return of about -0.5. Thus it appears that our option return index may be useful for explaining future inflation, even in the presence of these
traditional explanatory variables. We investigate this statement next.
which is shown in Table 6. Panel A uses
(one-month ahead) while Panel B uses
(one-year ahead). In Panel A, our variable
is statistically significant at the 5% level for the sample of 5-year TIPS and is statistically significant at the
1% level for the other two samples.5 This is true even when we include common variables that are known to capture future inflation, such as lagged inflation,
the yield spread, and the return on gold. In Panel B,
is statistically significant at the 10% level (5% level) for the sample of 5-year (10-year) TIPS, and is statistically
significant at the 1% level for the combined sample of 5-year and 10-year TIPS. Since
is insignificant in both panels, the return index
appears to be a more important explanatory variable than the price level index
.
In Panel A of Table 6, note that the VIX return and lagged inflation are statistically significant for all three samples. However, these variables are no longer significant in Panel B. With the exception of a 10% significance for the yield spread in the 5-year sample, the only significant
variable in Panel B is
. While traditional variables are significant for explaining the one-month ahead inflation rate (Panel A), they mostly fail to be significant for the one-year
ahead inflation rate (Panel B). In contrast,
is important over both horizons. Since
is significant for the one-year horizon, our results are not driven by short-term timing differences between measuring inflation and reporting inflation (i.e., CPI-U announcements).6
If we examine the adjusted-
values in Panel A, using the combined sample of 5-year and 10-year TIPS, we find that
alone explains
of the variation in the one-month ahead inflation rate. Once we add all of our control variables, the
adjusted-
increases to 35.6% (see the last column in Panel A). In Panel B,
alone explains
of the variation in the one-year ahead inflation rate, and this increases to 5.2% when we include the full set of control variables. Furthermore,
for all of our regressions in Table 6, the sign of the coefficient on
is negative. This is consistent with our economic intuition. Since the embedded TIPS option is a deflation
option, a higher option return this month (as captured by
) should be associated with a lower future inflation rate.
We find that our results are not only statistically significant, but also economically significant. For example, for the sample of 5-year TIPS in Panel A of Table 6, the coefficient on
is -0.0056 when the control variables are included. Thus a 100% embedded option return, which is less than one standard deviation, predicts a decrease of 56 basis
points in the one-month ahead annualized rate of inflation. If we compare this result to the other variables in the same regression, we find that
is at least as important
economically as the yield spread (coefficient of 0.31 for the 5-year sample) or the lagged inflation (coefficient of 0.28 for the 5-year sample). A one percentage point increase in the yield spread (lagged inflation rate) predicts a 31 basis point (28 basis point) increase in the one-month ahead
annualized rate of inflation.
For the sample of 10-year TIPS in Panel A of Table 6, the coefficient on
is -0.0031 when the control variables are included. This is lower than the coefficient of -0.0056 for 5-year TIPS. However, using Table 4, we see that
for 5-year TIPS has a lower mean and standard deviation than
for 10-year TIPS. If we multiply
the regression coefficient for
times the expected option index return, we get 27 basis points (42 basis points) for the sample of 5-year (10-year) TIPS. Likewise, if we multiply the
regression coefficient for
times the standard deviation of the option index return, we get 107 basis points (140 basis points) for 5-year (10-year) TIPS. The economic significance
tends to be slightly higher when we estimate our model using 10-year TIPS.
In Panel B of Table 6, the coefficients on
are lower than their counterparts in Table A. For example, using the 5-year (10-year) sample of TIPS, a 100% embedded option return
predicts a decrease of 14 basis points (6.6 basis points) in the one-year ahead inflation rate when the control variables are included. If we multiply the regression coefficient for
times the standard deviation of the option index return in Table 4, we get 27 basis points (30 basis points) for the sample of 5-year (10-year) TIPS. In both cases, the economic significance is lower than what we find in Panel A.
In summary, it appears that
contains relevant information for future inflation out to a horizon of at least 12 months. The VIX return and lagged inflation are important at the
one-month horizon, but none of the control variables, with the exception of the yield spread for 5-year TIPS, are significant at the one-year horizon. In Table 6, our variable
is
the only variable that is consistently significant. Given the evidence from Table 6, we conclude that the embedded option in TIPS contains useful information about future inflation.
Panel C of Table 5 shows that the sample correlation between the option price index and the yield spread is -0.495 (
-value is less than 0.0001). We interpret this as evidence that our variable
is capturing deflationary
expectations - as inflation falls, the yield spread should decrease and the option value should increase. This interpretation coincides with the main results in Christensen, Lopez, and Rudebusch (2011). Their Figure 1, which shows the estimated probability of deflation, is strikingly similar to our
Figure 1, which shows our embedded option values. Both figures have peaks during the 2003-2004 and 2008-2009 periods, which are known periods of deflationary expectations.
We also compare our results to those in Wright (2009). Figure 1 in Wright (2009) shows the yields on two TIPS that have similar maturity dates but different issue dates. The two TIPS are the 1.875% 10-year TIPS with ISIN ending in 28BD1 and the 0.625% 5-year TIPS with ISIN ending in 28HW3. In
spite of the higher real coupon rate on the 10-year TIPS, Wright's Figure 1 shows that the 10-year TIPS yield is higher than the 5-year TIPS yield during the last few months of 2008 and the first half of 2009. Wright (2009, pp. 128-129) argues that the yield difference between these two TIPS is
mostly due to differences in the deflation option value and not due to liquidity. In other words, the embedded deflation option in the 5-year TIPS is worth more than the embedded deflation option in the 10-year TIPS, which coincides with our summary statistics in Table 4. We verify Wright's (2009)
conclusions by using our TIPS option pricing model. The results are shown in our Figure 3. Panel A of Figure 3 reproduces Wright's Figure 1, while Panel B of Figure 3 shows the yield difference, which is the 10-year TIPS yield minus the 5-year TIPS yield. Panel C of Figure 3 plots our estimated
option values for these two TIPS, while Panel D of Figure 3 shows the option value difference, which is the 5-year TIPS option value minus the 10-year TIPS option value. If we compare Panels B and D, we find that the option value difference closely tracks the yield difference. The biggest
difference in yields and option values occurs in the Fall of 2008, which was a deflationary period. When we regress the yield difference in Panel B onto the option value difference in Panel D, we get an adjusted-
of 75.5%. Thus our results are consistent with Wright's (2009) conjecture that the yield difference between on-the-run and off-the-run TIPS is mostly due to different embedded option values.
To investigate whether liquidity is a contributing factor in our results, we reconstruct the option indices in (8)-(9) using only on-the-run TIPS for each sample. Typically, the on-the-run TIPS is more liquid than any of the off-the-run TIPS. For example, Table 3 and Chart 1 in Fleming and Krishnan (2012) show that trading volume is substantially higher for on-the-run TIPS as compared to off-the-run TIPS. In addition, Fleming and Krishnan (2012, p. 7) report that about 85% of the time, the off-the-run 10-year TIPS has only a one-sided price quote (a bid or an ask, but not both) or no price quote at all. In other words, the quote incidence for off-the-run TIPS is much lower than that of the on-the-run TIPS. Since off-the-run TIPS are not as liquid, we eliminate these bonds from each sample when we reconstruct the indices in ( 8)-(9).
Our regression results using only on-the-run TIPS are shown in Table 7. In Panel A of Table 7, the economic and statistical significance of
is very close to that of Panel A in
Table 6. We continue to find that lagged inflation and the VIX return are significant, but the statistical significance of the VIX return in Panel A of Table 7 for the sample of 10-year TIPS is reduced slightly relative to its counterpart in Table 6. In Panel B of Table 7, the statistical
significance of
is reduced slightly relative to what is shown in Panel B of Table 6. However, our variable
is the only significant variable in Panel B of Table 7. Traditional variables such as the lagged inflation and the VIX return are significant for explaining the one-month ahead inflation (Panel A of Table 7), but they again fail to be significant for the
one-year ahead inflation (Panel B of Table 7). In contrast, as we showed earlier,
is important over both horizons.
The results in Table 7 suggest that illiquidity is not a main driver of our results. Even after discarding the most illiquid TIPS in each sample (i.e., the off-the-run TIPS), we still find that the embedded option index return
is a useful variable for explaining the one-month ahead and the one-year ahead inflation rate.
Our prior results suggest that the embedded option in TIPS contains useful information about the future rate of inflation. We now investigate whether our results are robust to changes in our modeling assumptions and our empirical approach. Specifically, we examine alternative weighting schemes for calculating the indices in (8)-(9), we consider an alternative option-based explanatory variable that is less sensitive to our model specification in (1 )-(2), and we consider an additional control variable that helps to capture future inflation. Lastly, in section 4.8 below, we investigate out-of-sample inflation forecasting using our embedded option explanatory variables.
In (8)-(9), we used value weights to construct the variables
and
. In this section, we reconstruct the variables
and
by using a variety of alternative weighting schemes. We then use these reconstructed variables in a regression analysis to see if our earlier results are sensitive to the choice of weights.
We first consider weighting schemes that are based on maturity. Following Section 4.4, let
denote the number of TIPS in our sample in month
. Suppose the
th TIPS in month
has a remaining time to maturity
, which is measured in years. We use
to construct a set of maturity weights, where the weight assigned to the
th TIPS in month
is
Upon substituting (12) into the right-hand side of (8)-(9), we get a new pair of explanatory variables,
and
. The variable
is a maturity-weighted option price
index while the variable
is a maturity-weighted option return index. Given the weighting scheme in (12), longer term options are assigned larger weights. We also construct a pair
of explanatory variables that favors shorter term options. To do this, the weight assigned to the
th TIPS in month
is
where
is the original maturity of the
th TIPS. Upon substituting (13) into the right-hand side of (8)-( 9), we get a new pair of explanatory variables,
and
. The variable
(
) is an option price (option return) index that favors shorter term options.
Next, we consider weighting schemes that are based on moneyness. Using equation (42) in Appendix A, the embedded option's strike price divided by the inflation-adjusted face value for the
th TIPS in month
is
where the exponential term in (14) is the inflation adjustment factor. As discussed in Section 2.1, we substitute the U.S. Treasury's CPI-U index ratio for the inflation adjustment factor. Thus
in (14) describes the moneyness of the embedded option. The inflation rate in our sample is usually positive, so almost all of the embedded options are out-of-the-money. However, we can use
to construct explanatory variables that depend on the level of option moneyness. For example, to favor nearer-to-the-money (NTM) options, the weight assigned to the
th TIPS in month
is
Alternatively, to favor deeper out-of-the-money (OTM) options, the weight assigned to the
th TIPS in month
is
where the number 1 represents an at-the-money option. Upon substituting (15) into the right-hand side of (8)-( 9), we get a new pair of explanatory variables,
and
. These are the moneyness-weighted option price and option return indices that favor NTM options. Similarly, upon substituting ( 16) into the right-hand
side of (8)-(9 ), we get
and
. These are the moneyness-weighted option price and option return indices that favor deeper OTM options.
Table 8 shows the regression results when we use our alternative weighting schemes for the sample of 5-year TIPS. Panel A (Panel B) shows the results when the dependent variable is the one-month (one-year) ahead inflation rate. Table 9 is similar but shows the results for the sample of 10-year
TIPS. Columns 1, 3, 5, and 7 of each table are univariate regressions that use
,
,
, and
, respectively, as the explanatory variable. In both panels of Tables 8 and 9, the coefficients on these variables have the correct sign and are statistically significant at
either the 1% level or the 5% level. In columns 2, 4, 6, and 8 of each table, we add several additional explanatory variables. In Panel A of Table 8, we see that lagged inflation, the VIX return, and the TIPS total return are statistically significant, which mirrors our results in Panel A of Table
6 for 5-year TIPS. In Panel B of Table 8, the yield spread is statistically significant, which mirrors Panel B of Table 6 for 5-year TIPS. Likewise, the VIX return and lagged inflation are significant in Panel A of Table 9, but none of the control variables are significant in Panel B of Table 9.
This mimics our results in Panels A and B of Table 6 for 10-year TIPS.
Chu, Pittman, and Chen (2007) show that the market price of TIPS contains useful information about inflation expectations. Our results in Tables 6-9 provide limited support for their conclusion. Specifically, in Panel A of Table 6, using the sample of 5-year TIPS, we find that the TIPS total
return
is significant for explaining the one-month ahead inflation rate, even in the presence of
,
, and the other control variables. A similar statement holds for all of the regressions in Panel A of Table 8. However, we find that
is not significant in Panel B of Tables 6 and 8, nor is it significant in Panels A or B in Table 7, which uses only on-the-run TIPS. Furthermore,
is not significant in any of our other regressions, such as those using 10-year TIPS or the combined sample of 5-year and 10-year TIPS. Thus it appears that the informational
content of TIPS is coming mostly from the embedded option return and not from the TIPS total return.
Overall, Tables 8-9 indicate that our earlier results are robust to different weighting schemes. The only exception to this statement occurs in column 8 of Panel A in Tables 8-9, where we use the option return index that favors out-of-the-money options, i.e.,
. We find that this variable is not significant for explaining the one-month ahead inflation rate in the presence of our control variables. Note that
favors out-of-the-money options, which are the least sensitive options to movements in inflation. Thus it is perhaps not too surprising that
is insignificant. Out of all of our alternative weighting schemes, this is the one that we would have guessed to be least informative. However, this is not to say that
does not contain useful information about future inflation. In panel B of both Tables 8 and 9, we find that
is significant for explaining the one-year ahead inflation rate. Thus even though our control variables drive out of the significance of
at the one-month horizon, it remains an important variable at the one-year horizon.
In the previous sections, we used (8)-(9) to construct
and
, where the individual embedded option values were obtained from our TIPS pricing model that uses ( 1)-(2). In this section, we explore an alternative
explanatory variable that is less sensitive to model specification. We use the embedded option returns in each month to compute a new variable,
, which we define as the fraction of
options in month
with a positive return. To calculate
, we divide the
number of embedded options with a positive return in month
by the total number of embedded options in month
. Using
instead of
allows us to
investigate the robustness of our modeling assumptions. Any other model that produces positive (negative) embedded option returns when our model produces positive (negative) embedded option returns will give the same time series for
and thus the same regression results.
Table 10 shows our regressions results when
is used in place of
. The first two columns of Table 10 use the combined sample of 5-year and 10-year TIPS, while the last two columns use the subsample that includes only on-the-run TIPS. In both Panels A and B of Table 10, we see that
is statistically significant, although the level of significance is reduced in some cases relative to Tables 6 and 7. In Panel A of Table 10, we see that lagged inflation and the VIX return are significant variables for explaining the
one-month ahead inflation rate, which is also true in Panel A of Tables 6 and 7. Likewise, in Panel B of Table 10, we see that none of the control variables are significant for explaining the one-year ahead inflation rate, which mirrors our results in Panel B of Tables 6 and 7.
The regressions in Table 10 show that our modeling assumptions in (1)-(2) are not critical to our results. If we were to alter (1)-(2) in such a way that the sign of each
option return did not change, we would get the same variable
and thus the same results in Table 10. Tables 6 and 7 show that the embedded option return index is informationally
relevant for explaining the one-month ahead and the one-year ahead inflation rate. When we ignore the magnitude of the option returns and focus only on the sign of those returns, we get an explanatory variable (namely,
) that is also informationally relevant. However, if we compare the adjusted-
values in Table 10 to those in Tables 6 and 7, we see that the values
in Table 10 are smaller. But this is exactly what we would expect to find given that
captures only the sign of the option returns and not the magnitude. Overall, Table 10 shows that
our results are robust to model specification.
In this section we examine the ability of
to explain the future rate of inflation in the presence of an additional control variable, the
return on crude oil
. The price of crude oil is impacted by many factors, such as pricing policies in the OPEC cartel, supply disruptions due to weather or political instability, and
speculative demand. The relationship between inflation and the price of crude oil is not necessarily stable over time, a point of view that is supported by Bekaert and Wang (2010) and Hamilton (2009). Because of this, we treat crude oil separately so as to better gauge the marginal impact of
including the crude oil return as a control variable in our regressions.
Our results with crude oil are shown in Table 11, where we analyze both the one-month ahead inflation rate (Panel A) and the one-year ahead inflation rate (Panel B) using the 5-year sample of TIPS, the 10-year sample of TIPS, and the 5-year and 10-year combined sample of TIPS. In both panels, we
see that the crude oil return is statistically significant for all three samples. To see the marginal impact of
, we compare Table 11 to Table 6. For the 5-year sample of TIPS, the addition of
drives out the significance of
in both Panels A
and B. It also reduces the significance of the VIX return and lagged inflation, as compared to Panel A in Table 6. For the 10-year sample of TIPS and for the combined sample of 5-year and 10-year TIPS, the addition of
reduces, but does not drive out, the significance of
. This is true in both Panels A and B of Table 11. In the last two columns of Panel B, only the oil return and the embedded option return are statistically significant for explaining the one-year ahead inflation rate.
Overall, our results in Table 11 are mixed since
is not significant in the presence of
for 5-year TIPS, but it is significant in the presence of
for the other two samples. In spite of this, the results in Table 11 are consistent with our earlier results in Tables 6 and 7. In those two tables,
is less significant when it is constructed with only 5-year TIPS, as compared to 10-year TIPS or the combined sample of 5-year and 10-year TIPS. We attribute this to the smaller sample size of
5-year TIPS relative to 10-year TIPS, as shown in Table 1. Since
is significant in the last two columns of Table 11, the embedded option in TIPS contains useful information for
explaining the future inflation rate, even in the presence of
.
In Section 4.6, we showed that
is significant for explaining the one-month ahead and the one-year ahead
inflation rate. Since our estimation results in Table 2 use data for the entire sample period 1997-2010, our embedded option index variables in (8)-(9) rely on parameter estimates that have a forward looking bias. Thus our results in Section
4.6 should not be interpreted as inflation forecasts - they are simply in-sample results. We now address this issue by using a rolling window approach. We use all of the securities in Table 1 and we re-estimate our model using rolling subsamples. Using the parameter
estimates for each subsample, we calculate the embedded option values and the embedded option returns. We then use the option values and the option returns to explain the future inflation rate, which is a true out-of-sample analysis.
More specifically, our full sample period is January 1997 through May 2010, which is 161 months. We use a 48-month rolling window, which allows us to construct 114 subsamples. The first subsample spans January 1997 through December 2000, the second subsample spans February 1997 through January
2001, and so forth. For each subsample, we seek a solution to the optimization problem in (7). We then use the embedded option values from the last month and from the next to the last month of each subsample to calculate
and
according to (8)-(9). In the subsample that spans January 1997 -
December 2000, we use the embedded option values from November-December 2000 to calculate
and
for December 2000; in the subsample that spans February 1997 - January 2001, we use the embedded option values from December 2000 and January 2001 to calculate
and
for January 2001; and so forth. This gives us a new time series for
and a new time series for
that do not suffer from forward looking bias.
Table 12 shows the regression results for our out-of-sample approach. Panel A shows our regressions for the one-month ahead out-of-sample inflation rate, while Panel B shows our regressions for the one-year ahead out-of-sample inflation rate. In Panel A of Table 12,
is statistically significant at the 1% level, even in the presence of the control variables. As we saw in the last column of Panel A in Table 6, the VIX return and lagged inflation are also
significant, but unlike Table 6 the yield spread is insignificant. D'Amico, Kim, and Wei (2010) show that the yield spread is a useful measure of inflation expectations, but only after controlling for liquidity in the TIPS market. We do not directly control for TIPS liquidity, but our out-of-sample
analysis focuses on the latter portion of our sample period, where TIPS liquidity is less of a concern relative to the initial years of TIPS trading. In Panel B of Table 12, in the second column where we include the control variables, we find that the only significant variables are
(significant at the 10% level) and
(significant at the 1% level). Although
is more significant statistically than
, it is less significant
economically. We can see this from the regression coefficients in Panel B and from the summary statistics in Table 4, where the mean and standard deviation of
are small relative to the
values for
. Lastly, upon examining the adjusted-
values, we see that
,
, and the control variables in Panel A (Panel B) explain 35.3% (
11.7%) of the variation in the one-month (one-year) ahead out-of-sample inflation rate. For Panel A (Panel B), these numbers are about the same as (better than) the corresponding values in Table 6.
We also use
as an explanatory variable in Table 12. Recall from Section 4.7.2 that
is robust to model specification since any other pricing model that produces the same signs for the embedded option returns will produce the same variable
. Our results with
are shown in the last two columns in Table 12. In Panel A, we find that
alone is significant at the 1% level, but the significance is driven out by the control variables. Thus it appears that the magnitude of the option returns, and not just the sign of those returns, is important for explaining the one-month ahead
out-of-sample inflation rate. In Panel B, we find that
alone is significant at the 5% level, and
remains significant at the 10% level when the control variables are included. This suggests the sign of the option return contains useful information for forecasting the one-year ahead out-of-sample inflation rate. This is similar to our earlier in-sample
results in Panel B of Table 10.
If we compare the out-of-sample results in Table 12 to the corresponding in-sample results in Tables 6 and 10, we see that the out-of-sample results are slightly weaker than the in-sample results. There are at least two contributing reasons. First, our rolling subsample is only 48 months long, which is much shorter than our full sample of 161 months. Thus our parameter estimates and our embedded option estimates are noisier in the subsamples, which makes for noisier embedded option explanatory variables. Second, the short length of our window decreases not only the time length of each subsample, but it can also decrease the number of securities that is included in each subsample. For example, in our early subsamples, the number of TIPS and matching nominal Treasuries is reduced since some of these securities have not yet been auctioned. The smaller number of securities implies that there are fewer observations within the subsample for estimating our model parameters, which again will lead to noisier parameter estimates. In spite of these issues, our results in Table 12 suggest that even out-of-sample, the embedded option in TIPS contains information that is useful for explaining future inflation.
Our paper uncovers the informational content of the embedded deflation option in TIPS. We value the option explicitly and we show that the embedded option return contains relevant information for explaining the one-month ahead and the one-year ahead inflation rate, even in the presence of standard inflation variables. In almost all of our regressions, including our robustness checks, the embedded option return index is statistically and economically important. We argue that the embedded option return should not be ignored. In fact, our results suggest that the time variation in the embedded option return is a valuable tool for anyone who is interested in assessing inflationary expectations.
Our paper contains several new findings. First, we conclude that the embedded option return index is a significant variable for explaining the one-month ahead and the one-year ahead inflation rate, both in-sample and out-of-sample. Using 5-year (10-year) TIPS, our results suggest that a
embedded option return, which is less than one standard deviation, is consistent with a 110 basis point (52 basis point) decrease in the one-month ahead annualized rate of inflation.
For most of our regressions, the traditional inflation variables such as the yield spread and the return on gold are insignificant in the presence of our embedded option return index. However, the lagged inflation rate and the return on the VIX index continue to be important variables. Presumably,
these variables capture additional uncertainty beyond what is contained in the embedded option return. Second, our main conclusions are not altered when we discard off-the-run TIPS, when we use alternative weighting schemes, when we add an additional control variable (the crude oil return), or when
we use our variable
, which is less sensitive to model specification. Third, we present evidence to show that our results continue to hold out-of-sample (Table 12). Lastly, we analyze
5-year TIPS, 10-year TIPS, and the combined sample of 5-year and 10-year TIPS. Although our results are somewhat weaker for 5-year TIPS, perhaps due to the smaller sample size, we find that the evidence from 5-year TIPS is not enough to alter our main conclusions. In summary, our paper shows that
the embedded deflation option in TIPS is informationally relevant for explaining future inflation, both in-sample and out-of-sample, out to a horizon of twelve months.
There are several areas for future research. First, our TIPS pricing model is a traditional asset pricing model in the sense that we do not directly model liquidity. In fact, this is one of the reasons that we discard the off-the-run TIPS and we explore how our regressions perform using only
on-the-run TIPS (see Table 7). A more complicated approach would be to derive a TIPS pricing model that accommodates liquidity directly. This type of pricing model could be estimated using both on-the-run and off-the-run TIPS, with the understanding that liquidity is captured by the model itself.
Second, although we conduct robustness checks using our variable
, which is significant in Tables 10 and 12, we do not claim that our model in ( 1)-(2) is the best way to price a TIPS. Our motivation for using (1)-(2 ) is twofold - the model is parsimonious and we can solve the model in closed-form. Thus one avenue for future
research is to explore other pricing models and perhaps run a horse race between them to find the best pricing model. In the context of our paper, the best pricing model would be the one that provides the most information for forecasting future inflation. Lastly, we have shown that
and
are informationally relevant variables for explaining the inflation rate.
However, we do not examine higher-order moments of these variables, nor do we examine how the inflation probability density evolves over time. This latter topic is complicated since we estimate our model under the risk-adjusted probabilities. We leave these areas as ideas for future research.
Appendix
We stack the nominal interest rate
and the inflation rate
into a vector
, where
denotes the transpose. Thus we can rewrite
(1)-(2 ) as
where
,
, and
and
are the matrices
Since
is not a diagonal matrix, (17) is a coupled system of equations. Changes in
depend on both
and
, while changes in
depend on both
and
. Instead of working with
directly, we work with a decoupled system that is related to (17). Define
as
where
and
are
The constants
and
are the eigenvalues of
, while the columns of
are the associated eigenvectors. It is easily verified that
, where
is the diagonal matrix
We now define a new set of variables
, where
. Also define
and
, where
and where
Using Itô's lemma, the process for
is
which is an uncoupled system since
is diagonal. We solve (3) using the variables
and
. We then recover the TIPS price in terms of
and
by noting that
, i.e.,
To get the moments for
and
, we solve (18)
to get
for
. Taking expectations of (20)-(21) gives
To get the variance of
, note that
A similar calculation gives
To get the covariance between
and
, note that
Given (18),
and
are bivariate normal with
conditional moments (22)-(23), (24)-(25), and (26). To evaluate the TIPS price, we need to know the joint distribution of
and
for
. Using (19), note that
Thus to get the joint distribution of
and
, it is sufficient to characterize the joint distribution of
and
. Since
and
are jointly normal,
and
are also jointly normal. This follows since the sum of normally distributed random variables is also normally distributed. Thus we only need to characterize the first
two moments of
and
.
Suppose
and recall that
. We focus on the case of time
, but our results apply for any
in the upper limit of integration. Using (20)-(21), we have
and
To get the variance of
note that
The last line of (31) includes two terms. The first term is
We need to calculate
which is
Substituting (33) into the right-hand side of (32), we get
which is easy to evaluate. The second term in the last line of ( 31) is
The right hand side of the above expression has three terms, but only the first term on the right hand side has non-zero correlation with
. Thus
which can be evaluated using (24). Combining (34) and (35) gives the result
A similar calculation gives
To get the covariance between
and
, note that
Like equation (31), there are two terms in (36) that must be evaluated. The first term is
Since
we have,
and thus the right-hand side of (37) is easy to evaluate. The second term in (36) is
The right hand side of the above expression has three terms, but only the first term on the right hand side has non-zero correlation with
. Thus (38)
is
which can be evaluated using (26). Combining (37) and (39) gives the result
We now return to (3) to evaluate the TIPS price. The first term in (3) is
Note that
where
is
In (40), we have used the property that for any normally distributed random variable
,
. The second term in (3) is
where
is given in (41). The third term in (3 ) is
where
is the indicator function for the event in curly brackets. Equation (42) involves two expectations, where each expectation is of the
form
where
and
are bivariate normal random variables and
is a constant. The joint distribution of
and
is characterized by
,
,
,
, and
. A direct calculation reveals that (43) is equal to
where
is the standard normal cumulative distribution function. To analyze the first expectation in (42 ), we use (44)
and we let
To analyze the second expectation in (42), we use ( 44) and we let
where
and
are given by (46) and (47),
respectively. Thus (42) depends on
,
,
,
, and
, which are given above. This completes the derivation of the TIPS price in (3).
We now derive the price of a nominal Treasury Note. Using equation (19), the first term in (5) can be written as
Note that
where
is
Like equation (41), (48) uses the property that for any normally distributed random variable
,
. Similarly, the second term in (5) is
where the function
is obtained by substituting
for
in (48). This completes the derivation of the nominal Treasury Note price in (5).
In this section we show how to derive the long run means and the speeds of mean reversion for
and
. We can rewrite (17) as
, where we define
and
. Upon substituting we get
, which is a more traditional form. The long run means are
Our empirical estimates for (49)-(50) are shown in Table 2. | http://www.federalreserve.gov/pubs/feds/2013/201324/index.html | CC-MAIN-2014-41 | refinedweb | 13,172 | 59.64 |
Opened 7 years ago
Last modified 9 months ago
#18392 new New feature
Make MySQL backend default to utf8mb4 encoding (46)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
I don't know enough about MySQL and the ORM to answer your questions, I hope someone else does.
comment:3 Changed:
"InnoDB has a maximum index length of 767 bytes, so for utf8 or utf8mb4 columns, you can index a maximum of 255 or 191 characters, respectively. If you currently have utf8 columns with indexes longer than 191 characters, you will need to index a smaller number of characters. In an InnoDB table, these column and index definitions are legal: col1 VARCHAR(500) CHARACTER SET utf8, INDEX (col1(255)) To use utf8mb4 instead, the index must be smaller: col1 VARCHAR(500) CHARACTER SET utf8mb4, INDEX (col1(191))"
From:
comment:4 Changed 7 years ago by?
comment:9 Changed 7 years ago by
As a workaround, you can make python understand 'utf8mb4' as an alias for 'utf8':
import codecs codecs.register(lambda name: codecs.lookup('utf8') if name == 'utf8mb4' else None)
comment:11 Changed 7 years ago by
comment:12 Changed 7 years ago by
The fix mentioned above has been merged to master () and released ()
comment:14 Changed 6 years ago by
comment:15 Changed 6 years ago by
comment:16 Changed 6 years ago by
comment:18 Changed 6 years ago by
comment:19 follow-up: 20 Changed 5 years ago by
comment:23 Changed 4 years ago by
One solution would be to reduce the INDEX size to 191 for mysql, like the example above:
col1 VARCHAR(500) CHARACTER SET utf8mb4, INDEX (col1(191))"
comment:24 follow-up: 25 Changed 4 4 years 4 years.)
comment:27 Changed 4 years ago by
Will that setting work nicely with migrations though? I think we need to know the index names for some operations like
AlterField. It seems problematic if we have a way that users can vary the index names without updating existing names.
comment:28 Changed 4 years ago by
I was thinking it wouldn't actually change the name of the index, but I haven't actually looked at the code. :)
comment:29 Changed 4 years 4 years ago by
I'm not thinking of limiting the _name_ of the index. The issue is "the maximum number of characters that can be indexed".
comment:31 Changed 4 years ago by
Thanks Collin, in that case your proposal makes more sense to me. It could be nice to get a consensus from more MySQL users though.
comment:32 Changed 3 years ago by
comment:33 Changed 3 years ago by
comment:34 Changed 3 years ago by
comment:35 Changed 3 years 3 years 3 years ago by
Yes, I based my proposal off of what WordPress did. WordPress limited the length of the index without limiting the length of the field itself. Django currently doesn't have that option.
comment:38 Changed 3 years ago by
comment:39 Changed 3 years ago by
comment:40 Changed 3 years?
comment:41 Changed 23 months ago by
I suggest to begin with a very minimal patch like this PR, which will at least allow users to begin converting some database columns to
utf8mb4 through custom migrations, and use these columns in their code (where indexing doesn't come in their way).
Working on index issues can come later, and will be needed to run the Django test suite with
utf8mb4.
comment:42 Changed 23 months ago by
Oh, now I realize that
utf8mb4 can also be set in DATABASES OPTIONS. Still, using it in Django by default is a strong signal.
comment:43 Changed 22 months ago by
I provided a more comprehensive patch.
comment:44 Changed 22 months ago by
There's an outstanding issue to fix on the pull request and Claude said, "I'm not sure if I'll have time to continue working on this, so if anyone wants to take this patch further, feel free!"
comment:45 Changed 13 months ago by
As a workaround, I came up with this monkey patch that limits the index size. We put this in our migrations/init.py:
from django.db.models.fields import CharField def _create_index_sql(self, model, fields, suffix="", sql=None): """ Return the SQL statement to create the index for one or several fields. `sql` can be specified if the syntax differs from the standard (GIS indexes, ...). """ tablespace_sql = self._get_index_tablespace_sql(model, fields) idx_columns = [] for field in fields: c = field.column if isinstance(field, CharField): if field.max_length > 255: idx_columns.append(self.quote_name(c) + '(255)') else: idx_columns.append(self.quote_name(c)) else: idx_columns.append(self.quote_name(c)) columns = [field.column for field in fields] sql_create_index = sql or self.sql_create_index return sql_create_index % { "table": self.quote_name(model._meta.db_table), "name": self.quote_name(self._create_index_name(model, columns, suffix=suffix)), "using": "", "columns": ", ".join(column for column in idx_columns), "extra": tablespace_sql, } from django.db.backends.mysql.schema import DatabaseSchemaEditor DatabaseSchemaEditor._create_index_sql = _create_index_sql
comment:46 Changed 9 months ago by
Can anyone clarify the process to migrate the default Django generated MySQL schema to a utf8mb4 friendly one?. | https://code.djangoproject.com/ticket/18392?cversion=1&cnum_hist=26 | CC-MAIN-2019-26 | refinedweb | 862 | 61.77 |
The conditional operator, written
?:.
The general form looks like this:
expression1 ? expression2 : expression3
If expression1 is true, then the value of the whole conditional expression is the value of expression2.
Otherwise, the value of the whole expression is the value of expression3.
The following code uses the conditional operator to determine the larger of two values.
#include <iostream> using namespace std; /*from ww w . ja va 2 s . c o m*/ int main() { int a, b; cout << "Enter two integers: "; cin >> a >> b; cout << "The larger of " << a << " and " << b; int c = a > b ? a : b; // c = a if a > b, else c = b cout << " is " << c << endl; return 0; }
The code above generates the following result. | http://www.java2s.com/Tutorials/C/Cpp_Tutorial/0180__Cpp_Conditional_Operator.htm | CC-MAIN-2017-09 | refinedweb | 117 | 66.74 |
In this tutorial, we'll make a shooter movieclip (the red snorkel thing above) that can be dragged horizontally, a dart movieclip that is 'shot' when the mouse is released on the shooter, and several instances of a bubble movieclip that float by and pop when hit by the dart. Because the x and y positions of the shooter, dart, and bubbles will all need to be controlled with actionscript, we'll make each one with an upper left registration point. This ensures that values reported in the Properties panel (which returns the distance between the movieclip container's registration point and the upper left corner of the movieclip, regardless of its registration point) match the _x and _y values of the movieclip (which is always the distance from the movieclip container's registration point to the movieclip's own registration point).
The popping bubble does not lend itself to being programmed in actionscript, so we'll set that up as a movieclip of several frames duration, where the first frame holds the bubble in its normal state and has a stop action in it, and frame 2 is labelled "explode" and starts the tweened pop, in which the bubble is shape tweened to expand and fade away (with a stop action in the final frame).
The stage is set up with 5 copies of the bubble (instance named bubble1, bubble2,... bubble5) staggered off the left side of the stage, a layer with the shooter in it (instance name shooter) and a layer with the dart (instance name dart) below that. Both the shooter and the dart are under a mask which is not at all essential to the game, but I added to make it look like the shooter was moving along a wavy path partly underwater.
We'll use the standard startDrag/stopDrag commands to make the shooter draggable, but we also need to make the dart 'draggable', that is, follow the shooter. We can't drag two things at once, so instead we'll use the onMouseMove event handler within the shooter's onPress handler so that we can control what happens when the mouse is both pressed down on the shooter and being moved. Here is the code to drag shooter and have dart follow underneath:
var STAGEWIDTH:Number = 500; shooter.onPress = function() { this.startDrag(false, 0, this._y, STAGEWIDTH-this._width, this._y); this.onMouseMove = function() { dart._x = this._x + 4; } } shooter.onRelease = shooter.onReleaseOutside = function() { this.stopDrag(); delete this.onMouseMove; }
Notice that we've constrained the shooter to a horizontal drag by setting T and B to the same value (this._y, which is its current y location on stage), as described on the drag and hit page. We also used a variable STAGEWIDTH (capitalized to show that it's actually a constant which will not change throughout the movie) to contain the value of the movie's width and used that as the R value for the constraint.
The dart is 'dragged' along with the shooter by setting its x position 4 pixels to the right of the shooter's (as it is when the movie starts) whenever the shooter is dragged (ie, whenever the mouse moves while pressed over shooter). Notice that we delete the onMouseMove handler when the mouse is released since it would take unnecessary processing to keep checking it then.
In this game, the dart shoots out whenever the mouse is released after dragging (or just pressing and not dragging) the shooter. So the code to do the shooting needs to be in the onRelease/onReleaseOutside handler. This is what that handler looks like, revised to include shooting:
var DARTSPEED:Number = 20; var DARTSTART:Number = dart._y; // save dart start position shooter.onRelease = shooter.onReleaseOutside = function() { this.stopDrag(); delete this.onMouseMove; dart.onEnterFrame = function() { // if the dart is still onstage, move it upwards a defined number of pixels per frame if (this._y > 0-dart._height) { this._y -= DARTSPEED; // if it has gone off the top of the stage, reset it to its original position // and stop the enterFrame to wait for the next release of shooter } else { this._y = DARTSTART; delete this.onEnterFrame; } } }
Again, we've set up some constants at the start of the movie to store information we'll need later (and may want to have easy access to, to change as we develop the game): the number of pixels to move the dart each frame, and the dart's starting y position, so we can reset it after it goes offstage. The code to shoot the dart is inside an onEnterFrame loop because it needs to happen continuously (until the dart goes offstage, ie, when this._y > 0-dart._height). While it's onstage, it's continuously bumped upwards DARTSPEED number of pixels every frame. When it goes off, it's reset to its start position and the enterFrame is killed (to keep the dart from shooting all over again).
To make one of the bubbles, eg, bubble1, move across the stage and then put it back at in a position that's just to the left of the stage when it goes off the right side, we would use this code:
bubble1.onEnterFrame = function() { if (this._x < STAGEWIDTH) { this._x += BUBBLESPEED; } else { this._x = 0-this._width; } }
To assign the same handler function to all five bubbles, we can do
bubble1.onEnterFrame = bubble2.onEnterFrame = bubble3.onEnterFrame = bubble4.onEnterFrame = bubble5.onEnterFrame = function() { if (this._x < STAGEWIDTH) { this._x += BUBBLESPEED; } else { this._x = 0-this._width; } }
Now we have code to drag the shooter, fire the dart, and make all five bubbles move across the stage and reset themselves at a position just off the left side of the stage when they go off the right. We still need to react to a collision between the bubbles and the dart, and to make the bubbles appear a little more randomly each time. To do the latter, we can apply the Math.random() function to change the _x, _y, _xscale and _yscale of the bubbles randomly whenever they are reset. Math.random() returns a random decimal amount between 0 and 1, which means that Math.random()*50, for example, will return a random (decimal) number between 0 and 50. More about Math.random and other methods of the Math class can be found on the Math class page. For now, it is enough for us to know that Math.random()*n will return a random number between 0 and n, and, by extension (come on, turn on your math brains, you can do this!) -n + Math.random()*2*n will return a random number between -n and +n. This will allow us to either shrink or grow the balloon by up to 10%, for example, by adding -10 + Math.random()*20 to the current scale values. This is the code to randomly set the balloon's x, y and size:
bubble1.onEnterFrame = ... bubble5.onEnterFrame = function() { if (this._x < STAGEWIDTH) { this._x += BUBBLESPEED; } else { // reposition on left, between 0 and 50 pixels to the left of the stage this._x = 0-this._width - Math.random()*50; // randomly move up or down, up to 20 pixels this._y += -20 + Math.random()*40; // randomly change size +/- up to 10% this._xscale = this._yscale = -10 + this._xscale + Math.random()*20; } }
(Note that in order to keep the random values from going wildly out of range, a few if/then checks should be added to make sure the bubbles stay on stage, eg, and that they don't eventually become enormous or invisibly small. I didn't want to add that extra level of complication to this example though, so I picked some not-too-large variations on the original values.)
We have three choices for attaching a handler to check for balloon-dart collisions: to the main timeline, to the dart, or to the balloon instances. If we choose either of the first two, we'd need to set up a loop to check the dart against each of the five balloon instances. To avoid this unnecessary complication, we'll go with the third option, attaching the handler to the balloon instances. That way, each balloon can check to see whether it has collided with the dart and respond accordingly, without any code loops being needed. We already have code in the onEnterFrame handler for each balloon, so we'll just add the collision detection code to that:
// define the bubbles' behavior bubble1.onEnterFrame = ... bubble5.onEnterFrame = function() { // while bubble is still onstage: if (this._x < STAGEWIDTH) { this._x += BUBBLESPEED; // easier to check here against dart rather than vice versa // MUST check if in frame 1 or command may be executed repeatedly if (this.hitTest(dart) && this._currentframe==1) { this.gotoAndPlay("explode"); } // when bubble goes offstage: } else { // same code as above to reset, plus // set back to normal (unexploded) view this.gotoAndStop(1); } }
In that code this.hitTest(dart) is used to check whether a collision has occurred, and if so (and if the balloon is currently in frame 1, meaning not already exploded), make it explode. _currentframe is a read-only property of the MovieClip class which shows what frame the playhead of that movieclip is currently in. In the code above, we also issue a command to send the balloon back to frame 1 to reset it graphically for its next trip across the stage.
All of the code above can then be combined and put in its own .as file (for easier editing in the Flash IDE) and then included in the fla with this command:
#include "dartshooter.as"
You can download a zip of the fla and as file to examine here.
Discussed on this page:
downloadable shoot bubbles example game, onenterframe, collision detection, random placement (Math.random), shape tween, drag multiple objects
Files:
dartshooter.fla
dartshooter.as
(free download)
A list of all files currently available at the site may be viewed here. | http://www.flash-creations.com/notes/actionscript_dartshooter.php | crawl-001 | refinedweb | 1,656 | 69.92 |
Today, we’ll talk about Unity Coding Standards. We’ll cover things to do, things to avoid, and general tips to keep your projects clean, maintainable, and standardized.
Things to avoid
I want to prefix this by saying that the thoughts in this post are guidelines and not meant to be a criticism of anyone. These are personal preferences and things I’ve picked up from experience across a variety of different projects.
If you find that you commonly do and use some of these things, don’t be offended, just try to be conscious of the issues that can arise. With that little disclaimer, here are some of the key things I always fight to avoid and recommend you do your best to limit.
Public Fields
I won’t go deep into this as I think I’ve already covered it here. Just know that public fields are generally a bad idea. They often tend to be a precursor to code that’s difficult to read and maintain.
If you need to access something publicly, make it a property with a public getter. If you really need to set it from another class, make the setter public too, otherwise use a property that looks like this:
public string MyStringThatNeedsPublicReading { get; private set; }
Large Classes
I’ve seen far too many Unity projects with class sizes that are out of control. Now I want to clarify that this is not something only specific to unity, I’ve seen classes over 40k lines long in some AAA game projects. I’ve seen .cs & .js files in web apps over 20k lines long.
That of course does not make them right or acceptable.
Large classes are hard to maintain, hard to read, and a nightmare to improve or extend. They also always violate one of the most important principals in Object Oriented Programming. The principal of Single Responsibility.
As a general rule I try to keep an average class under 100 lines long. Some need to be a bit longer, there are always exceptions to the rules. Once they start approaching 300 lines though, it’s generally time to refactor. That may at first seem a bit crazy, but it’s a whole lot easier to clean up your classes when they’re 300 lines long than when they reach 1000 or more. So if you hit this point, start thinking about what your class is doing.
Is it handling character movement? Is it also handling audio? Is it dealing with collisions or physics?
Can you split these things into smaller components? If so, you should do it right away, while it’s easy.
Large Methods
Large classes are bad. Large methods are the kiss of death.
A simple rule of thumb: if your method can’t fit on your screen, it’s too long. An ideal method length for me is 6-10 lines. In that size it’s generally doing one thing. If the method grows far beyond that, it’s probably doing too much.
Some times, as in the example below, that one thing is executing other methods that complete the one bigger thing. Make use of the Extract Method refactoring, if your method grows too long, extract the parts that are doing different things into separate methods.
Example
Take this Fire() method for example. Without following any standards, it could easily have grown to this:
Original
protected virtual void Fire() { if (_animation != null && _animation.GetClip("Fire") != null) _animation.Play("Fire"); var muzzlePoint = NextMuzzlePoint(); if (_muzzleFlashes.Length > 0) { var muzzleFlash = _muzzleFlashes[UnityEngine.Random.Range(0, _muzzleFlashes.Length)]; if (_muzzleFlashOverridePoint != null) muzzlePoint = _muzzleFlashOverridePoint; GameObject spawnedFlash = Instantiate(muzzleFlash, muzzlePoint.position, muzzlePoint.rotation) as GameObject; } if (_fireAudioSource != null) _fireAudioSource.Play(); StartCoroutine(EjectShell(0f)); if (OnFired != null) OnFired(); if (OnReady != null) OnReady(); var clip = _animation.GetClip("Ready"); if (clip != null) { _animation.Play("Ready"); _isReady = false; StartCoroutine(BecomeReadyAfterSeconds(clip.length)); } _currentAmmoInClip--; if (OnAmmoChanged != null) OnAmmoChanged(_currentAmmoInClip, _currentAmmoNotInClip); RaycastHit hitInfo; Ray ray = new Ray(muzzlePoint.position, muzzlePoint.forward); Debug.DrawRay(muzzlePoint.position, muzzlePoint.forward); if (TryHitCharacterHeads(ray)) return; if (TryHitCharacterBodies(ray)) return; if (OnMiss != null) OnMiss(); if (_bulletPrefab != null) { if (_muzzleFlashOverridePoint != null) muzzlePoint = _muzzleFlashOverridePoint; Instantiate(_bulletPrefab, muzzlePoint.position, muzzlePoint.rotation); } }
This method is handling firing of weapons for an actual game. If you read over it, you’ll see it’s doing a large # of things to make weapon firing work. You’ll also notice that it’s not the easiest thing to follow along. As far as long methods go, this one is far from the worst, but I didn’t want to go overboard with the example.
Even so, it can be vastly improved with a few simple refactorings. By pulling out the key components into separate methods, and naming those methods well, we can make the Fire() functionality a whole lot easier to read and maintain.
Refactored
protected virtual void Fire() { PlayAnimation(); var muzzlePoint = NextMuzzlePoint(); SpawnMuzzleFlash(muzzlePoint); PlayFireAudioClip(); StartCoroutine(EjectShell(0f)); if (OnFired != null) OnFired(); HandleWeaponReady(); RemoveAmmo(); if (TryHitCharacters(muzzlePoint)) return; if (OnMiss != null) OnMiss(); LaunchBulletAndTrail(); }
With the refactored example, a new programmer just looking at the code should be able to quickly determine what’s going on. Each part calls a method named for what it does, and each of those methods is under 5 lines long, so it’s easy to tell how they work. Given the choice between the 2 examples, I’d recommend #2 every time, and I hope you’d agree.
Casing
The last thing I want to cover in this post is casing. I’ve noticed in many projects I come across, casing is a mess. Occasionally, project I see have some kind of standard they’ve picked and stuck to. Much of the time though, it’s all over the place with no consistency.
The most important part here is to be consistent. If you go with some non-standard casing selection, at least be consistent with your non-standard choice.
What I’m going to recommend here though is a typical set of C# standards that you’ll see across most professional projects in gaming, business, and web development.
Classes
Casing: Pascal Case
public class MyClass : MonoBehaviour { }
Methods
Casing: Pascal Case (No Underscores unless it’s a Unit Test)
private void HandleWeaponReady()
Private Fields
Casing: camelCase – with optional underscore prefix
// Either private int maxAmmo; // OR my prefered private int _maxAmmo;
This is one of the few areas where I feel some flexibility. There are differing camps on the exact naming convention to be used here.
Personally, I prefer the underscore since it provides an obvious distinction between class level fields and variables defined in the scope of a method.
Either is completely acceptable though. But when you pick one for a project, stick with it.
Public Fields
It’s a trick, there shouldn’t be any! 😉
Public Properties
Casing: Pascal Case
public int ReaminingAmmoInClip { get; private set; }
These should also be Automatic Properties whenever possible. There’s no need for a backing field like some other languages use.
Again you should also mark the setter as private unless there’s a really good reason to set them outside the class.
Wrap Up
Again, this is just a short list of a few things that I think are really important and beneficial for your projects. If you find this info useful, drop in a comment and I’ll work to expand out the list. If you have your own recommendations and guidelines, add those as well so everyone can learn and grow.
Thanks, and happy coding! | https://unity3d.college/2016/05/16/unity-coding-standards/ | CC-MAIN-2020-34 | refinedweb | 1,251 | 65.83 |
Notes on Managed Debugging, ICorDebug, and random .NET stuff
We need some customer feedback to determine if we fix a regression that was added in VS2008.
Any language can target the CLR by compiling the language to IL, and then you immediately leverage the .NET platform, including access to the libraries and debugging tools.
Do you write a compiler that takes an XML source file in and then compiles it to IL, produces a managed PDB, and then expect to be able to debug the XML source file using the source-line information you put in the PDB? For example, if MSBuild compiled to IL (instead of being interpreted), it would fall under this category.
Compilation techniques could mean:
What's regressed?
In VS2005, you can set breakpoints on source lines in the XML file (that map to the ranges specified in the PDB you emitted alongside the IL) and hit them. You can also do set-next-statement and do stepping.. The data-breakpoint is designed to cooperate with the XML libraries, but not the managed PDBs. Thus the code breakpoint is not hit and you won't stop in the xml file. Does this impact you?
Example
Here's a very simple way to see the impact of this using case #2 above:
using System;
using System.Collections.Generic;
using System.Text;
namespace xml_debug
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hi!");
#line 4 "xmlfile1.xml"
Console.WriteLine("ABC!");
Console.WriteLine("DEF!");
Console.WriteLine("GHI!");
#line default
Console.WriteLine("Hi!");
}
}
}
File "xmlFile1.xml":
<?xml version="1.0" encoding="utf-8" ?>
<doc>
<test>
abc <-- line 4
def
ghi
</test>
<test>
other
</test>
</doc>
The #line directive in the C# file would cause the next lines (up to #line default) to be associated with lines in the XML file, thus having the PDB associate the xml file with the IL. You can try this out in both VS2005 and VS2008 as a default C# console application to get a feel for the differences and extent of the issue.
Did I understand it right? This is only a problem with XML files? Other files will work with #line? If it is so, this is not a problem for me...
Right, just files that have XML contents (not necessarily the xml extension)
."
This is obviously a failure, with VS trying to be too smart about user's intentions. It would be vital to add some option for switching this feature off and restore flawless xml debugging experience thus.
Ondrej - yes, it's definitely a failure; hence why I'm blogging about it.
Does it impact you? Do you ever compile XML to IL?
Just to be clear, what is it that triggers this? Is it the XML header in the file? Or is it the XML editor itself? If I opened the XML file in the text editor and set a breakpoint, would it work?
I am soon to be affected by this. I have a templating system that takes XML as input and spits out domain objects in C# as output (using NVelocity as engine). I would have liked to map the property declarations in the C# back to the XML...
Doesn't XslCompiledTransform compile XML to IL ? Is it affected by this issue ?
Cheers,
--Jonathan
"We need some customer feedback to determine if we fix a regression that was added in VS2008."
I'm not sure I understand this opening line. Are you implying there is a chance it won't be fixed if you don't get enough feedback here? Of course you should fix it. :) Not only is it a bug but a regression bug.
To answer your other question: Do I compile Xml? No I don't today but thanks for the warning. Its definitely something to be aware of should I want to do this in the future as I have an increasing amount of C# and Xml to maintain.
Kent - I'd recommend you take a few minutes to play with the sample above in VS2005 and Vs2008 to get a feel for what the experience is and see if it impacts you. Let me know!
Jonathan - Re XSLT - this change cooperates with the .NET fx's particular implementation of XSLT transforms.
Brian - "Are you implying there is a chance it won't be fixed if you don't get enough feedback here?"
Technically, no; practically, yes.
My blog post here is just 1 avenue of soliciting feedback.
This regression is non-trivial to fix (it's not just adding a null check), so we need to prioritize it.
I'm implying that bugs that don't impact people are treated with lower priority and take longer (potentially years) to get fixed.
If a lot of people are impacted by this, then it's more important to delay other work to get a fix for this.
If nobody is actually impacted by this, then other work (such as other cool features) may be more important.
If not a single person replied here, it would be tempting to conclude nobody cares and there's no urgency to fix it.
The flip side is that if a lot of different folks reply with different scenarios practically impacted, there's more urgency.
OK, tried it. It will affect me unless I work around it.
Here's a synopsis:
- as noted, it is only the XML editor that is affected. Therefore, getting your file to open in the source editor would be a viable workaround
- .xml files will open with the XML editor. There does not appear to be a way to open these files with the text editor. Open With...Source Editor does not work.
- XML content in a .txt file will open in the text editor and will work
- XML content in an unrecognized file extension will still open in the XML editor by default, because VS will see the XML header and assume the content is XML. However, using the Open With... approach will work.
That last point is my workaround, because my files are actually stored with a '.bo' extension (business object). However, it's still kind of annoying and for people with .xml files I don't see a workaround at all, which is icky.
Mike, I am not sure if you got my yesterday's reply, since it hasn't showed up here.
Shortly, we have a workflow engine at work, that produces xml description of the flow. This is later compiled to IL (to be precise, translated to C# source which gets compiled afterwards). Currently we have a custom tool for debugging purposes, being able to use VS here seems quite interesting to me. On the other hand, if there are other possibilities (like using Mdbg), I don't see that much of a problem for me here.
Ondrej - I only got your 1 reply from stamped at 1:44 AM, which I replied to at 2:35 AM.
Why do you have a custom debuggin tool? Could C#'s #line pragmas solve your problem?
You could always customize MDbg, but it's a toy compared to VS.
When you choose a language to be compiled, emitting symbols is an manageable task.
But often there are scenarios where you cannot implement a compiler of an language which is interpreted just to get debugging work.
I've developed a prototype for a dynamic debugging engine, which allows interpreted languages to emit symbols at runtime without the needing to emit the code for the execution. Basically you implement an interface which includes FileName and LineNumber properties. When these statements are executed, you call a method on the Debug-Engine which takes an instance of this interface.
If you are interested in, I can send you the project. I would also be very interested in what you would think about such an solution and maybe you have some comments for improvement.
GP
After submitting my comment I've seen that it is a bit offtopic. Sorry for that.
GProssliner - Not as off topic as may guess ...
I've worked on a similar solution (based on the technique here ), and first hit this XML problem when applying that solution to an interpreter that took in XML.
If you're running into any problems, feel free to post questions about that... it could always make good blog fodder :)
According to the code-fragment below, the input of the statemachine within your posting isn't actually read from the file, or is it?
new PrintState(10, 2, 2), // 0 <--- start state
new PrintState(30, 0, 3), // 1
new PrintState(20, 1, 6), // 2
I would appreciate your feedback on my implementation. Is there any other way to contact you than the comment-section of your blog? My email: guenter(dot)prossliner(at)world(minus)direct(dot)at
Thanks! | http://blogs.msdn.com/jmstall/archive/2008/02/27/do-you-compile-xml-to-il.aspx | crawl-002 | refinedweb | 1,472 | 74.08 |
also more responsive, since nose begins running tests as soon as the first test module is loaded. See Finding and running tests for more.
Setting up your test environment is easier
nose supports fixtures at the package, module,.), add the attributes by calling the decorator function like so:
def test(): # ... test = with_setup(setup_func, teardown_func)(test)
or by direct assignment:
test.setup = setup_func test.teardown = teardown_func.
About the name
- nose is the least silly short synonym for discover in the dictionary.com thesaurus that does not contain the word 'spy'.
- Pythons have noses
- The nose knows where to find your tests
- Nose Obviates Suite Employment-2007 | https://bitbucket.org/jpellerin/nose/src/bc94a8326d74/?at=0.9.2-stable | CC-MAIN-2014-23 | refinedweb | 105 | 60.31 |
/*Write a program that asks the user how many values will be entered and then reads all of them. Have the program sort the elements using sort() and then
reverse the sorted elements using reverse(). */
import std.stdio; import std.algorithm; void main() { int noValues, i; int[] myArray; write("how many values would you like to enter? "); readf(" %s", &noValues);myArray.length = noValues; // I get a run-time error if I comment this out
while (i < noValues) { write("enter value #", i+1, " "); readf(" %s", &myArray[i]); ++i; } sort(myArray); writeln(myArray); reverse(myArray); writeln(myArray); } Without the line: myArray.length = noValues; I get the run-time error: $ ./exArrays1_1 how many values would you like to enter? 5 core.exception.RangeError@exArrays1_1.d(12): Range violation ---------------- ??:? _d_arrayboundsp [0x461772] ??:? _Dmain [0x44c284]I would have thought that since this is a dynamic array, I don't need to pre-assign its length.
Thanks | https://www.mail-archive.com/digitalmars-d-learn@puremagic.com/msg91316.html | CC-MAIN-2022-33 | refinedweb | 150 | 58.58 |
A for loop is a repetition control structure that allows you to efficiently write a loop that needs to be executed a specific number of times.
A for loop is useful when you know how many times a task is to be repeated.
The syntax of a for loop is −
for(initialization; Boolean_expression; update) { // Statements }
Here is the flow of control in a for loop −
The initialization step is executed first, and only once. This step allows you to declare and initialize any loop control variables and this step ends with a semicolon (;).
Next, the Boolean expression is evaluated. If it is true, the body of the loop is executed. If it is false, the body of the loop will not be executed and control jumps to the next statement past them for a loop.
After the body of the for loop gets executed, the control jumps back up to the update statement. This statement allows you to update any loop control variables. This statement can be left blank with a semicolon at the end.
The Boolean expression is now evaluated again. If it is true, the loop executes and the process repeats (body of the loop, then update step, then Boolean expression). After the Boolean expression is false, the for loop terminates.
Following is an example code of the for loop in Java.
public class Test { public static void main(String args[]) { for(int x = 10; x < 20; x = x + 1) { System.out.print("value of x : " + x ); System.out.print("\n"); } } }
This will produce the following result −
value of x : 10 value of x : 11 value of x : 12 value of x : 13 value of x : 14 value of x : 15 value of x : 16 value of x : 17 value of x : 18 value of x : 19 | https://www.tutorialspoint.com/Java-for-loop | CC-MAIN-2021-39 | refinedweb | 299 | 71.04 |
Rose::HTML::Form::Field - HTML form field base class.
package MyField; use base 'Rose::HTML::Form::Field'; ... my $f = MyField->new(name => 'test', label => 'Test'); print $f->html_field; print $f->xhtml_field; $f->input_value('hello world'); $i = $f->internal_value; print $f->output_value; ...
Rose::HTML::Form::Field is the base class for field objects used in an HTML form. It defines a generic interface for field input, output, validation, and filtering.
This class inherits from, and follows the conventions of, Rose::HTML::Object. Inherited methods that are not overridden will not be documented a second time here. See the Rose::HTML::Object documentation for more information.
A field object provides an interface for a logical field in an HTML form. Although it may serialize to multiple HTML tags, the field object itself is a single, logical entity.
Rose::HTML::Form::Field is the base class for field objects. Since the field object will eventually be asked to serialize itself as HTML, Rose::HTML::Form::Field inherits from Rose::HTML::Object. That defines a lot of a field object's interface, leaving only the field-specific functions to Rose::HTML::Form::Field itself.
The most important function of a field object is to accept and return user input. Rose::HTML::Form::Field defines a data flow for field values with several different hooks and callbacks along the way:
+------------+ / user input / +------------+ | V +------------------+ set -->. . . input_value . input_value() get <--. . +------------------+ | V +------------------+ toggle -->| input_prefilter | trim_spaces() +------------------+ | V +------------------+ define <-->| input_filter | input_filter() +------------------+ | V +----------------------+ . . get <--. input_value_filtered . input_value_filtered() . . +----------------------+ | V +------------------+ | inflate_value | (override in subclass) +------------------+ | V +------------------+ . . get <--. internal_value . internal_value() . . +------------------+ | V +------------------+ | deflate_value | (override in subclass) +------------------+ | V +------------------+ define <-->| output_filter | output_filter() +------------------+ | V +------------------+ . . get <--. output_value . output_value() . . +------------------+
Input must be done "at the top", by calling input_value(). The value as it exists at various stages of the flow can be retrieved, but it can only be set at the top. Input and output filters can be defined, but none exist by default.
The purposes of the various stages of the data flow are as follows:
The value as it was passed to the field.
The input value after being passed through all input filters, but before being inflated.
The most useful representation of the value as far as the user of the Rose::HTML::Form::Field-derived class is concerned. It has been filtered and optionally "inflated" into a richer representation (i.e., an object). The internal value must also be a valid input value.
The value as it will be used in the serialized HTML representation of the field, as well as in the equivalent URI query string. This is the internal value after being optionally "deflated" and then passed through an output filter. This value should be a string or a reference to an arry of strings. If passed back into the field as the input value, it should result in the same output value.
Only subclasses can define class-wide "inflate" and "deflate" methods (by overriding the no-op implementations in this class), but users can define input and output filters on a per-object basis by passing code references to the appropriate object methods.
The prefilter exists to handle common filtering tasks without hogging the lone input filter spot (or requiring users to constantly set input filters for every field). The Rose::HTML::Form::Field prefilter optionally trims leading and trailing whitespace based on the value of the trim_spaces() boolean attribute. This is part of the public API for field objects, so subclasses that override input_prefilter() must preserve this functionality.
In addition to the various kinds of field values, each field also has a name, which may or may not be the same as the value of the "name" HTML attribute.
Fields also have associated labels, error strings, default values, and various methods for testing, clearing, and reseting the field value. See the list of object methods below for the details.
Though Rose::HTML::Form::Field objects inherit from Rose::HTML::Object, there are some semantic differences when it comes to the hierarchy of parent/child objects.
A field is an abstraction for a collection of one or more HTML tags, including the field itself, the field's label, and any error messages. Each of these things may be made up of multiple HTML elements, and they usually exist alongside each other rather than nested within each other. As such, the field itself cannot rightly be considered the "parent" of these elements. This is why the child-related methods inherited from Rose::HTML::Object (children, descendants, etc.) will usually return empty lists. Furthermore, any children added to the list will generally be ignored by the field's HTML output methods.
Effectively, once we move away from the Rose::HTML::Object-derived classes that represent a single HTML element (with zero or more children nested within it) to a class that presents a higher-level abstraction, such as a form or field, the exact population of and relationship between the constituent HTML elements may be opaque.
If a field is a group of sibling HTML elements with no real parent HTML element (e.g., a radio button group), then the individual sibling items will be available through a dedicated method (e.g., radio_buttons).
In cases where there really is a clear parent/child relationship among the HTML elements that make up a field, such as a select box which contains zero or more options or option groups, the children method will return the expected list of objects. In such cases, the list of child objects is usually restricted to be of the expected type (e.g., radio buttons for a radio button group), with all child-related methods acting as aliases for the existing item methods. For example, the add_options method in Rose::HTML::Form::Field::SelectBox does the same thing as add_children. See the documentation for each specific Rose::HTML::Form::Field-derived class for more details.
This module distribution contains classes for most simple HTML fields, as well as examples of several more complex field types. These "custom" fields do things like accept only valid email addresses or dates, coerce input and output into fixed formats, and provide rich internal representations (e.g., DateTime objects). Compound fields are made up of more than one field, and this construction can be nested: compound fields can contain other compound fields. So long as each custom field class complies with the API outlined here, it doesn't matter how complex it is internally (or externally, in its HTML serialization).
(There are, however, certain rules that compound fields must follow in order to work correctly inside Rose::HTML::Form objects. See the Rose::HTML::Form::Field::Compound documentation for more information.)
All of these classes are meant to be a starting point for your own custom fields. The custom fields included in this module distribution are mostly meant as examples of what can be done. I will accept any useful, interesting custom field classes into the
Rose::HTML::Form::Field::* namespace, but I'd also like to encourage suites of custom field classes in other namespaces entirely. Remember, subclassing doesn't necessarily dictate namespace.
Building up a library of custom fields is almost always a big win in the long run. Reuse, reuse, reuse!
Rose::HTML::Form::Field has the following set of valid HTML attributes.
accesskey class dir id lang name onblur onclick ondblclick onfocus onkeydown onkeypress onkeyup onmousedown onmousemove onmouseout onmouseover onmouseup style tabindex title value xml:lang
Constructs a new Rose::HTML::Form::Field object based on PARAMS, where PARAMS are name/value pairs. Any object method is a valid parameter name.
Get or set a boolean value that indicates whether or not the value of any parent field is automatically invalidated when the input value of this field is set. The default is true.
See "parent_field" and "invalidate_value" for more information.
Clears the field by setting both the "value" HTML attribute and the input value to undef. Also sets the is_cleared() flag.
Convenience wrapper for default_value()
Set the default value for the field. In the absence of a defined input value, the default value is used as the input value.
This method is meant to be overridden by a subclass. It should take VALUE and "deflate" it to a form that is a suitable for the output value: a string or reference to an array of strings. The implementation in Rose::HTML::Form::Field simply returns VALUE unmodified.
Get or set a text description of the field. This text is not currently used anywhere, but may be in the future. It may be useful as help text, but keep in mind that any such text should stay true to its intended purpose: a description of the field.
Going too far off into the realm of generic help text is not a good idea since this text may be used elsewhere by this class or subclasses, and there it will be expected to be a description of the field rather than a description of how to fill out the field (e.g. "Command-click to make multiple selections") or any other sort of help text.
It may also be useful for debugging.
Get or set an integer message id for the description.
Get or set an integer message id for the description. When setting the message id, an optional ARGS hash reference should be passed if the localized text for the corresponding message contains any placeholders.
Get or set the field label used when constructing error messages. For example, an error message might say "Value for [label] is too large." The error label will go in the place of the
[label] placeholder.
If no error label is set, this method simply returns the label.
Get or set an integer message id for the error label. When setting the message id, an optional ARGS hash reference should be passed if the localized text for the corresponding message contains any placeholders.
Sets both the input filter and output filter to CODE.
Convenience wrapper for hidden_fields()
Returns one or more Rose::HTML::Form::Field::Hidden objects that represent the hidden fields needed to encode this field's value.
Returns the HTML serialization of the field, along with the HTML error message, if any. The field and error HTML are joined by html_error_separator(), which is "<br>\n" by default.
Returns the error text, if any, as a snippet of HTML that looks like this:
<span class="error">Error text goes here</span>
If the escape_html flag is set to true (the default), then the error text has any HTML in it escaped.
Get or set the string used to join the HTML field and HTML error message in the output of the html() method. The default value is "<br>\n"
Returns the HTML serialization of the field.
Convenience wrapper for html_hidden_fields()
In scalar context, returns the HTML serialization of the fields returned by hidden_fields(), joined by newlines. In list context, returns a list containing the HTML serialization of the fields returned by hidden_fields().
Returns the HTML.
Get or set an HTML prefix that may be displayed before the HTML field. Rose::HTML::Form::Field does not use this prefix, but subclasses might. The default value is an empty string.
Get or set an HTML suffix that may be appended to the HTML field. Rose::HTML::Form::Field does not use this suffix, but subclasses might. The default value is an empty string.
This method is part of the Rose::HTML::Object API. In this case, it simply calls html_field().
This method is meant to be overridden by subclasses. It should take VALUE and "inflate" it to a form that is a suitable internal value. (See the OVERVIEW for more on internal values.) The default implementation simply returns its first argument unmodified.
Get or set the input filter.
Runs VALUE through the input prefilter. This method is called automatically when needed and is not meant to be called by users of this module. Subclasses may want to override it, however.
The default implementation optionally trims leading and trailing spaces based on the value of the trim_spaces() boolean attribute. This is part of the public API for field objects, so subclasses that override input_prefilter() must preserve this functionality.
Get or set the input value.
Returns the input value after passing it through the input prefilter and input filter (if any).
Returns the internal value.
Invalidates the field's output value, causing it to be regenerated the next time it is retrieved. This method is useful if the output value is created based on some configurable attribute of the field (e.g., a delimiter string). If such an attribute is changed, then any existing output value must be invalidated.
Invalidates the field's value, causing the internal and output values to be recreated the next time they are retrieved.
This method is most useful in conjunction with the "parent_field" attribute. For example, when the input value of a subfield of a compound field is set directly, it will invalidate the value of its parent field(s).
Returns true if the field is cleared (i.e., if clear() has been called on it and it has not subsequently been reset() or given a new input value), false otherwise.
Returns false if the internal value contains any non-whitespace characters or if trim_spaces is false and the internal value has a non-zero length, true otherwise. Subclasses should be sure to override this if they use internal values other than strings.
Returns true if the internal value contains any non-whitespace characters, false otherwise. Subclasses should be sure to override this if they use internal values other than strings.
Get or set the field label. This label is used by the various label printing methods as well as in some error messages (assuming there is no explicitly defined error_label. Even if you don't plan to use any of the former, it might be a good idea to set it to a sensible value for use in the latter.
Get or set an integer message id for the field label.
Returns a Rose::HTML::Label object with its
for HTML attribute set to the calling field's
id attribute and any other HTML attributes specified by the name/value pairs in ARGS. The HTML contents of the label object are set to the field's label(), which has its HTML escaped if escape_html is true (which is the default).
Get or set the name of this field from the perspective of the parent_form or parent_field, depending on which type of thing is the direct parent of this field. The local name should not change, regardless of how deeply this field is nested within other forms or fields.
Return the appropriate message object associated with the error id. The error id, message class, and message placeholder values are specified by PARAMS name/value pairs. Valid PARAMS are:
The name of the Rose::HTML::Object::Message-derived class used to store each message. If omitted, it defaults to the localizer's message_class.
If passed a NAME argument, then the local_name is set to NAME and the "name" HTML attribute is set to the fully-qualified field name, which may include dot (".") separated prefixes for the parent forms and/or parent fields.
If called without any arguments, and if the "name" HTML attribute is empty, then the "name" HTML attribute is set to the fully-qualified field name.
Returns the value of the "name" HTML attribute.
Get or set the output filter.
Returns the output value.
Get or set the parent field. The parent field should only be set if the direct parent of this field is another field. The reference to the parent field is "weakened" using Scalar::Util::weaken() in order to avoid memory leaks caused by circular references.
Get or set the parent form. The parent form should only be set if the direct parent of this field is a form. The reference to the parent form is "weakened" using Scalar::Util::weaken() in order to avoid memory leaks caused by circular references.
Get or set the parent group. Group objects are things like Rose::HTML::Form::Field::RadioButtonGroup and Rose::HTML::Form::Field::CheckboxGroup: conceptual groupings that have no concrete incarnation in HTML. (That is, there is no parent HTML tag wrapping checkbox or radio button groups.)
The parent group should only be set if the direct parent of this field is a group object. The reference to the parent group is "weakened" using Scalar::Util::weaken() in order to avoid memory leaks caused by circular references.
Prepares the field for use in a form. Override this method in your custom field subclass to do any work required for each field before each use of that field. Be sure to call the superclass implementation as well. Example:
package MyField; use base 'Rose::HTML::Form::Field'; ... sub prepare { my($self) = shift; # Do anything that needs to be done before each use of this field ... # Call superclass implementation $self->SUPER::prepare(@_); }
Get or set the field's rank. This value can be used for any purpose that suits you, but it is most often used to number and sort fields within a form using a custom compare_fields() method.
Get to set a boolean flag that indicates whether or not a field is "required." See validate() for more on what "required" means.
Reset the field to its default state: the input value and error() are set to undef and the is_cleared() flag is set to false.
Get or set the boolean flag that indicates whether or not leading and trailing spaces should be removed from the field value in the input prefilter. The default is true.
Validate the field and return a true value if it is valid, false otherwise. If the field is
required, then its internal value is tested according to the following rules.
* If the internal value is undefined, then return false.
* If the internal value is a reference to an array, and the array is empty, then return false.
* If trim_spaces() is true (the default) and if the internal value does not contain any non-whitespace characters, return false.
If false is returned due to one of the conditions above, then error() is set to the string:
$label is a required field.
where
$label is either the field's label() or, if label() is not defined, the string "This".
If a custom validator() is set, then
$_ is localized and set to the internal value and the validator subroutine is called with the field object as the first and only argument.
If the validator subroutine returns false and did not set error() to a defined value, then error() is set to the string:
$label is invalid.
where
$label is is either the field's label() or, if label() is not defined, the string "Value".
The return value of the validator subroutine is then returned.
If none of the above tests caused a value to be returned, then true is returned.
Get or set a validator subroutine. If defined, this subroutine is called by validate().
If a VALUE argument is passed, it sets both the input value and the "value" HTML attribute to VALUE. Returns the value of the "value" HTML attribute.
Returns the XHTML serialization of the field, along with the HTML error message, if any. The field and error HTML are joined by xhtml_error_separator(), which is "<br />\n" by default.
Returns the error text, if any, as a snippet of XHTML that looks like this:
<span class="error">Error text goes here</span>
If the escape_html flag is set to true (the default), then the error text has any HTML in it escaped.
Get or set the string used to join the XHTML field and HTML error message in the output of the xhtml() method. The default value is "<br />\n"
Returns the XHTML serialization of the field.
Convenience wrapper for xhtml_hidden_fields()
In scalar context, returns the XHTML serialization of the fields returned by hidden_fields(), joined by newlines. In list context, returns a list containing the XHTML serialization of the fields returned by hidden_fields().
Returns the XHTML.
This method is part of the Rose::HTML::Object API. In this case, it simply calls xhtml_field().. | http://search.cpan.org/~jsiracusa/Rose-HTML-Objects-0.618/lib/Rose/HTML/Form/Field.pm | CC-MAIN-2014-23 | refinedweb | 3,362 | 64 |
This is the 2nd article in my series about building and using binary Python modules in Kodi on Android. In the previous article I described prerequisites for building such modules. In this article I will cover building a simple Python/C extension module for Python in Kodi for Android.
A Simple Python/C Module
This is a very simple "Hello World" type module written in C using a vanilla Python/C API. I took the code from some online tutorial and modified it a bit. It does not have any dependencies except for Python itself and is well suited for my demonstration purposes. Here is the code (
hello.c):
#include <Python.h> static PyObject* get_hello(PyObject* self, PyObject* args) { return Py_BuildValue("s", "Hello World!"); } static PyMethodDef HelloMethods[] = { {"get_hello", get_hello, METH_VARARGS, "Greet somebody."}, {NULL, NULL, 0, NULL} }; PyMODINIT_FUNC inithello(void) { (void) Py_InitModule("hello", HelloMethods); }
This module contains and exposes a single
get_hello function that returns
'Hello World!' Python string. As you can see, the code is quite verbose and rather cryptic for those who are not familiar with Python/C API internals. Later I will demonstrate you how to write the same function in more clear way using convenience C++ libraries: Boost.Python and Pybind11.
Android NDK Project
Our NDK project has the following structure:
\hello_project\ \jni\ \src\ \include\ \lib\
The folder
\jni contain NDK build configuration. This is the only mandatory folder. Other folders may be organized as you like.
The
\src folder contains our source code. In our case it's one
hello.c file.
The
\include folder contains additional C header files. In this case we need only Python 2.6 headers modified as described in the previous article.
The
\lib folder contains additional libraries to link against. In this case we need only
libkodi.so library from Kodi for Android that contains Python symbols.
NDK build configuration consist of 2 files that you need to put inside
\jni subfolder:
Application.mk and
Android.mk.
The
Application.mk contains general build options and can be re-used between your projects. Here's the
Application.mk file for our example project:
NDK_TOOLCHAIN_VERSION := 5 # GCC version APP_OPTIM := release APP_ABI := armeabi-v7a # Define the target architecture to be ARM. APP_CFLAGS := -std=c99 APP_CPPFLAGS := -frtti -fexceptions -std=c++11 APP_PLATFORM := android-19 # Define the target Android version of the native application. APP_STL := gnustl_static APP_LIBCRYSTAX := static APP_CPPFLAGS += -DANDROID
It includes some C++ options that are not used in our simple C-based example but, as I've said, it's a general configuration. And here's our
Android.mk file:
LOCAL_PATH := $(call my-dir) include $(CLEAR_VARS) LOCAL_MODULE := hello # name your module here. LOCAL_SRC_FILES := ../src/hello.c LOCAL_C_INCLUDES := $(LOCAL_PATH)/../include/Python2.6 LOCAL_LDLIBS := -L$(LOCAL_PATH)/../lib -lkodi include $(BUILD_SHARED_LIBRARY)
This file contains build options specific to our project. Brief explanation for the options:
LOCAL_MODULE: the name of our library file ('lib' prefix will be pre-pended).
LOCAL_SRC_FILES: a space-separated list of source C files
LOCAL_C_INCLUDES: a space-separated list of directories containing header files used in our project.
LOCAL_LDLIBS: link flags. Strictly speaking, Android NDK requires you to defile each library dependency as a separate build module, but using link flags is less verbose.
ndk-buildwill issue a warning about this that you can safely ignore.
Now go to our project directory and run
ndk-build command there (it must be added to your system
PATH variable). I there are no issues,
ndk-build will build our extension module
libhello.so and place it into
\libs subfolder created in our project folder.
However, this is not the end of the story. The problem is that import mechanism for binary Python modules in Kodi on Android is seriously broken and you cannot simply use
import hello or even
__import__('hello'). Fortunately, guys from the unofficial Russian Kodi forum have found a magic trick that allows to import binary modules in Python addons for Kodi on Android. I will tell you about this trick in my 3rd article. | https://romanvm.pythonanywhere.com/post/kodi-android-building-and-using-binary-python-extensions-part-2-18/ | CC-MAIN-2021-21 | refinedweb | 663 | 58.69 |
Page Title
In an earlier article I showed you how to override the page title using a NotificationSubscriber. This is convenient if you have the need to change the title for all or many pages in your site. However, when building custom modules, you may have the need to change the title for a specific page only. Fortunately, this is very easy to do in a custom module.
The ContentModule class, the base class for all your custom modules, has a Pageview property of type PageView (note the inconsistency in capitalization, something you need to be aware of when using C#). This PageView property in turn has a Meta property that lets you set the page title through its Title property.
To set the page title in your custom module, you can use the following code:
[AddInName("YourModuleName")] public class Frontend : ContentModule { public override string GetContent() { // Other code here to create HTML Pageview.Meta.Title = "Your custom page title here"; // Other code here to return the HTML } }
When rendered in the browser, your page title ends up between the <title> tags in the <head> section of your page. | https://devierkoeden.com/articles/quick-tips-custom-modules-changing-the-page-title | CC-MAIN-2019-39 | refinedweb | 188 | 56.49 |
NAME
roar_vs_read, roar_vs_write - Read or write data from or to sound server
SYNOPSIS
#include <roaraudio.h> ssize_t roar_vs_read (roar_vs_t * vss, void * buf, size_t len, int * error); ssize_t roar_vs_write(roar_vs_t * vss, const void * buf, size_t len, int * error);
DESCRIPTION
roar_vs_read() reads data from the sound server into buffer buf. roar_vs_write() writes data in buffer buf to the sound server.
PARAMETERS
vss The VS object data is read from or written to. buf The buffer to read to or write from. len The length of the data to be read or written in byte. the amount of data read or written. This can be smaller than the requested size. On error, -1 is returned.
EXAMPLES
FIXME
SEE ALSO
roarvs(7), libroar(7), RoarAudio(7). | http://manpages.ubuntu.com/manpages/precise/man3/roar_vs_write.3.html | CC-MAIN-2016-36 | refinedweb | 122 | 76.52 |
The Data Science Lab
The goal of a time series regression problem is best explained by a concrete example. Suppose you own an airline company and you want to predict the number of passengers you'll have next month based on your historical data. Or maybe you want to forecast your sales amount for the next calendar quarter.
Time series regression problems are usually quite difficult, and there are many different techniques you can use. In this article I'll show you how to do time series regression using a neural network, with "rolling window" data, coded from scratch, using Python.
A good way to see where this article is headed is to take a look at the screenshot in Figure 1 and the graph in Figure 2. The demo program analyzes the number of international airline passengers who travelled each month between January 1949 and December 1960.
The data comes from a benchmark dataset that you can find in many places on the Internet by searching for "airline passengers time series regression." The raw source data looks like:
"1949-01";112
"1949-02";118
"1949-03";132
. . .
"1960-11";390
"1960-12";432
There are 144 data items. The first field is the year and month. The second field is the total number of international airline passengers for the month, in thousands. The demo program creates training data using a rolling window of size 4 to yield 140 training items. The training data is also re-normalized by dividing each passenger count by 100:
( 0) 1.12 1.18 1.32 1.29 1.21
( 1) 1.18 1.32 1.29 1.21 1.35
( 2) 1.32 1.29 1.21 1.35 1.48
( 3) 1.29 1.21 1.35 1.48 1.48
. . .
(139) 6.06 5.08 4.61 3.90 4.32
The first four values on each line are used as predictors. The fifth value is the passenger count to predict. In other words, each set of four consecutive passenger counts is used to predict the next count. The size of the rolling window used here, 4, was determined by trial and error.
The demo creates a neural network with four input nodes, 12 hidden processing nodes, and a single output node. There's just one output node because time series regression predicts just one time unit ahead. The number of hidden nodes in a neural network must be determined by trial and error.
The neural network has (4 * 12) + (12 * 1) = 60 node-to-node weights and (12 + 1) = 13 biases which essentially define the neural network model. Using the rolling window data, the demo program trains the network using the basic stochastic back-propagation algorithm with a learning rate set to 0.01 and a fixed number of iterations set to 10,000. Both the learning rate and number of iterations are free parameters and their values must be determined by experimentation.
The training algorithm uses back-propagation, which is a form of stochastic gradient descent, with a batch size of one item (which is equivalent to "online" training). The error function used is mean squared error because the predicted output value and known correct output value from the training data are numeric. Note that for classification problems, cross entropy error is usually used; cross entropy is not suitable for regression problems.
After training is completed, the demo program calculates and displays a few actual passenger counts and passenger counts predicted by the neural model. This data was used to construct the graph in Figure 2.
When performing time series regression, if you want to compute an accuracy metric you must define exactly what it means for a predicted value to be close enough to the actual value to be considered correct. The demo reckons a predicted passenger count value is correct if it is within 10,000 of the actual count. For example, the first normalized actual passenger count is 1.21 meaning 121,000 passengers. In the code, accuracy calculation checks to see if the normalized predicted passenger count, such as 1.33875, is plus or minus 0.10 of the actual normalized count. This corresponds to 0.10 * 100,000 = 10,000 passengers. Using that accuracy criterion, the neural models predicts passenger counts on the 140-item training set with 70.71 percent accuracy, or 99 out of 140 correct.
This article assumes you have intermediate level skill or better with a C-family programming language and a basic knowledge of neural networks. But regardless of your background and experience, you should be able to follow along without too much difficulty.
Program Structure
The demo program is too long to present in its entirety here, but the complete program is available in the accompanying file download. The structure of the demo program, with a few minor edits to save space, is presented in Listing 1. Note that all normal error checking has been removed, and I indent with two space characters rather than the usual four, to save space.
I used Notepad to edit the demo but most of my colleagues prefer one of the many nice Python editors that are available. The demo begins by importing the numpy, random, and math packages. Coding a neural network from scratch allows you to fully understand exactly what's going on, and allows you to experiment with the code. The downside is the extra time and effort required.
# nn_timeseries.py
# Python 3.x
import numpy as np
import random
import math
def showVector(v, dec): # . . .
def showMatrix(m, dec): # . . .
def showMatrixPartial(m, numRows, dec, indices): # . . .
class NeuralNetwork: # . . .
def main():
print("Begin time series with raw Python demo")
airData = getAirlineData()
np.set_printoptions(formatter = \
{'float': '{: 0.2f}'.format})
print("First four rows of rolling window data: ")
print(airData[range(0,4),])
numInput = 4 # rolling window size
numHidden = 12
numOutput = 1 # predict next passenger count
print("\nCreating a %d-%d-%d neural network " %
(numInput, numHidden, numOutput) )
nn = NeuralNetwork(numInput, numHidden,
numOutput, seed=0)
maxEpochs = 10000
learnRate = 0.01
print("Setting maxEpochs = " + str(maxEpochs))
print("Setting learning rate = %0.3f " % learnRate)
print("Starting training")
nn.train(airData, maxEpochs, learnRate)
print("Training complete ")
print("First few month-actual-predicted: ")
acc = nn.accuracy(airData, 0.10)
print("Accuracy on 140-item data = %0.4f " % acc)
print("End demo")
if __name__ == "__main__":
main()
# end script
The demo program hard-codes the training data into a NumPy array-of-array style matrix with 140 rows and 5 columns:
In a non-demo scenario you'd likely store the data in a text file and then write a helper function to load the data from file into a numpy matrix. The demo loads the data into memory and displays the first few rows:
def main():
print("Begin time series with raw Python demo")
airData = getAirlineData()
np.set_printoptions(formatter = \
{'float': '{: 0.2f}'.format})
print("First four rows of rolling window data: ")
print(airData[range(0,4),])
. . .
Next, the demo creates a neural network using the program-defined NeuralNetwork class:
numInput = 4 # rolling window size
numHidden = 12
numOutput = 1 # predict next passenger count
print("\nCreating a %d-%d-%d neural network " %
(numInput, numHidden, numOutput) )
nn = NeuralNetwork(numInput, numHidden,
numOutput, seed=0)
The NeuralNetwork constructor accepts a seed value which is passed to a member random number generator. The generator is used to initialize the network's weights and bias values, and to scramble the order in which the data is processed during training. Setting the seed ensures that results are reproducible.
Next, the demo trains the neural network:
maxEpochs = 10000
learnRate = 0.01
print("Setting maxEpochs = " + str(maxEpochs))
print("Setting learning rate = %0.3f " % learnRate)
print("Starting training")
nn.train(airData, maxEpochs, learnRate)
The NeuralNetwork train method uses the back-propagation algorithm which requires a learning rate to control how much the weights and biases change on each update. A too-small learning rate could lead to very slow training, but a too-large learning rate could jump over a good solution. Back-propagation is iterative and requires a stopping condition, in this case, a maximum number of iterations. Iterating too many times could lead to over-fitting, a situation where the model predicts very well on the training data, but predicts poorly on new, previously unseen data.
During training, the mean squared error of the neural network, using the current weights and biases, is displayed every 2,000 iterations. Error is somewhat difficult to interpret, but it's important to observe error so you can quickly catch situations where error is not decreasing.
The demo concludes by calculating and displaying a custom prediction accuracy metric:
. . .
print("Training complete ")
print("First few month-actual-predicted: ")
acc = nn.accuracy(airData, 0.10)
print("Accuracy on 140-item data = %0.4f " % acc)
print("End demo")
if __name__ == "__main__":
main()
The second argument passed to the accuracy method, 0.10, is an absolute value meaning a predicted count is considered correct that count is plus or minus 0.10 of the actual (normalized) count. An alternative approach is to code the accuracy method so that the second parameter is interpreted as a percentage. For example, a value of 0.10 means a predicted count is correct if it is between 0.90 times the actual count, and 1.10 times the count.
Regression vs. Classification
The NeuralNetwork class definition contains a computeOutputs method. The key difference between a neural network that performs regression, and one that performs classification, is how the output nodes are computed. The code for method computeOutputs begins with:
def computeOutputs(self, xValues):
hSums = np.zeros(shape=[self.nh], dtype=np.float32)
oSums = np.zeros(shape=[self.no], dtype=np.float32)
. . .
Here local arrays hSums and oSums are scratch arrays that hold the pre-activation sum of products for the hidden nodes and the output nodes respectively. The NumPy zeros function accepts a shape argument that determines the dimensions of the array. The shape value can be a list as shown, or a tuple, or a scalar value.
Next, the pre-activation sums of products for the hidden nodes are computed:
for i in range(self.ni):
self.iNodes[i] = xValues[i]
for j in range(self.nh):
for i in range(self.ni):
hSums[j] += self.iNodes[i] * self.ihWeights[i,j]
for j in range(self.nh):
hSums[j] += self.hBiases[j]
Here the bias values are added in a separate for-loop. You could improve efficiency slightly, at the expense of clarity, by adding the bias values in the previous loop, but any performance gain would be tiny.
Next, the hidden node values are computed by applying the activation function:
for j in range(self.nh):
self.hNodes[j] = self.hypertan(hSums[j])
The demo uses a program defined hyperbolic tangent static function, which is essentially a wrapper around the built-in Python math.tanh function. The hidden node activation function is hard-coded. For time series regression, an alternative to using tanh is to use the logistic sigmoid function.
Next, the pre-activation output node value is computed:
for k in range(self.no):
for j in range(self.nh):
oSums[k] += self.hNodes[j] * self.hoWeights[j,k]
for k in range(self.no):
oSums[k] += self.oBiases[k]
At this point, a neural network classifier would apply softmax activation to the output nodes. However, for neural network regression, no activation is applied. Using no activation is sometimes called identity activation. Note that there is just a single output node so the demo code could have ben refactored so that the hNodes object is a single variable rather than an array.
Method computeOutputrs concludes by transferring the values in the oSums scratch array to the oNodes neural network array:
. . .
self.oNodes = oSums # "Identity activation"
result = np.zeros(shape=self.no, dtype=np.float32)
for k in range(self.no):
result[k] = self.oNodes[k]
return result
The output node value is copied into a local array and returned. This is mostly for calling convenience.
Wrapping Up
The demo program creates a time series regression model but doesn't make a prediction. The last training data item is (6.06, 5.08, 4.61, 3.90, 4.32). To make a prediction for January 1961, the first time step beyond the training data, you'd simply pass (5.08, 4.61, 3.90, 4.32) to method computeOutputs in the trained network.
If you wanted to, you could then take that output value, append it to (4.61, 3.90, 4.32) and then make a prediction for the next time step. You could repeat this process as many times as you wish. This process is called extrapolation. However, the further away you get from the training data, the less accurate your predictions will be.
About the Author
Dr. James McCaffrey works for Microsoft Research in Redmond, Wash. He has worked on several Microsoft products including Azure and Bing. James can be reached | https://visualstudiomagazine.com/articles/2018/02/02/neural-network-time-series.aspx | CC-MAIN-2022-21 | refinedweb | 2,158 | 57.37 |
Task Parallelism (Concurrency Runtime)
This document describes the role of tasks and task groups in the Concurrency Runtime. A task is a unit of work that performs a specific job. A task typically runs in parallel with other tasks and can be decomposed into additional, more fine-grained, tasks. A task group organizes a collection of tasks.
Use tasks when you write asynchronous code and want some operation to occur after the asynchronous operation completes. For example, you might use a task to asynchronously read from a file and a continuation task, which is explained later in this document, to process the data after it becomes available. Conversely, use tasks groups to decompose parallel work into smaller pieces. For example, suppose you have a recursive algorithm that divides the remaining work into two partitions. You can use task groups to run these partitions concurrently, and then wait for the divided work to complete.
Tip
When you want to apply the same routine to each element of a collection in parallel, use a parallel algorithm, such as concurrency::parallel_for, instead of a task or task group. For more information about parallel algorithms, see Parallel Algorithms.
Key Points
When you pass variables to a lambda expression by reference, you must guarantee that the lifetime of that variable persists until the task finishes.
Use tasks (the concurrency::task class) when you write asynchronous code.
Use task groups (such as the concurrency::task_group class or the concurrency::parallel_invoke algorithm) when you need to decompose parallel work into smaller pieces and then wait for those smaller pieces to complete.
Use the concurrency::task::then method to create continuations. A continuation is a task that runs asynchronously after another task completes. You can connect any number of continuations to form a chain of asynchronous work.
A task-based continuation is always scheduled for execution when the antecedent task finishes, even when the antecedent task is canceled or throws an exception.
Use concurrency::when_all to create a task that completes after every member of a set of tasks completes. Use concurrency::when_any to create a task that completes after one member of a set of tasks completes.
Tasks and task groups can participate in the PPL cancellation mechanism. For more information, see Cancellation in the PPL.
To learn how the runtime handles exceptions that are thrown by tasks and task groups, see Exception Handling in the Concurrency Runtime.
In this Document
Using Lambda Expressions
The task Class
Continuation Tasks
Value-Based Versus Task-Based Continuations
Composing Tasks
The when_all Function
The when_any Function
Delayed Task Execution
Task Groups
Comparing task_group to structured_task_group
Example
Robust Programming
Using Lambda Expressions
Lambda expressions are a common way to define the work that is performed by tasks and task groups because of their succinct syntax. Here are some tips on using them:
Because tasks typically run on background threads, be aware of the object lifetime when you capture variables in lambda expressions. When you capture a variable by value, a copy of that variable is made in the lambda body. When you capture by reference, a copy is not made. Therefore, ensure that the lifetime of any variable that you capture by reference outlives the task that uses it.
When you pass a lambda expression to a task, don’t capture variables that are allocated on the stack by reference.
Be explicit about the variables you capture in lambda expressions to help you identify what you’re capturing by value versus by reference. For this reason we don’t recommend that you use the [=] or [&] options for lambda expressions.
One common pattern is when one task in a continuation chain assigns to a variable, and another task reads that variable. You can’t capture by value because each continuation task would hold a different copy of that variable. For stack-allocated variables, you also can’t capture by reference because the variable may no longer be valid.
To solve this problem, use a smart pointer, such as std::shared_ptr, to wrap the variable and pass the smart pointer by value. By doing so, the underlying object can be assigned to and read from and will outlive the tasks that use it. Use this technique even when the variable is a pointer or a reference-counted handle (^) to a Windows Runtime object. Here’s a basic example:
// lambda-task-lifetime.cpp // compile with: /EHsc #include <ppltasks.h> #include <iostream> #include <string> using namespace concurrency; using namespace std; task<wstring> write_to_string() { // Create a shared pointer to a string that is // assigned to and read by multiple tasks. // By using a shared pointer, the string outlives // the tasks, which can run in the background after // this function exits. auto s = make_shared<wstring>(L"Value 1"); return; }); } int wmain() { // Create a chain of tasks that work with a string. auto t = write_to_string(); // Wait for the tasks to finish and print the result. wcout << L"Final value: " << t.get() << endl; } /* Output: Current value: Value 1 Current value: Value 2 Final value: Value 3 */
For more info on lambda expressions, see Lambda Expressions in C++.
[Top]
The task Class
You can use the concurrency::task class to compose tasks into a set of dependent operations. This composition model is supported by the notion of continuations. A continuation enables code to be executed when the previous, or antecedent, task completes. The result of the antecedent task is passed as the input to the one or more continuation tasks. When an antecedent task completes, any continuation tasks that are waiting on it are scheduled for execution. Each continuation task receives a copy of the result of the antecedent task. In turn, those continuation tasks may also be antecedent tasks for other continuations, thereby creating a chain of tasks. Continuations help you create arbitrary-length chains of tasks that have specific dependencies among them. In addition, a task can participate in cancellation either before a tasks starts or in a cooperative manner while it is running. For more information about this cancellation model, see Cancellation in the PPL.
task is a template class. The type parameter T is the type of the result that is produced by the task. This type can be void if the task does not return a value. T cannot use the const modifier.
When you create a task, you provide a work function that performs the task body. This work function comes in the form of a lambda function, function pointer, or function object. To wait for a task to finish without obtaining the result, call the concurrency::task::wait method. The task::wait method returns a concurrency::task_status value that describes whether the task was completed or canceled. To get the result of the task, call the concurrency::task::get method. This method calls task::wait to wait for the task to finish, and therefore blocks execution of the current thread until the result is available.
The following example shows how to create a task, wait for its result, and display its value. The examples in this documentation use lambda functions because they provide a more succinct syntax. However, you can also use function pointers and function objects when you use tasks.
// basic-task.cpp // compile with: /EHsc #include <ppltasks.h> #include <iostream> using namespace concurrency; using namespace std; int wmain() { // Create a task. task<int> t([]() { return 42; }); // In this example, you don't necessarily need to call wait() because // the call to get() also waits for the result. t.wait(); // Print the result. wcout << t.get() << endl; } /* Output: 42 */
The concurrency::create_task function enables you to use the auto keyword instead of declaring the type. For example, consider the following code that creates and prints the identity matrix:
// create-task.cpp // compile with: /EHsc #include <ppltasks.h> #include <string> #include <iostream> #include <array> using namespace concurrency; using namespace std; int wmain() { task<array<array<int, 10>, 10>> create_identity_matrix([] { array<array<int, 10>, 10> matrix; int row = 0; for_each(begin(matrix), end(matrix), [&row](array<int, 10>& matrixRow) { fill(begin(matrixRow), end(matrixRow), 0); matrixRow[row] = 1; row++; }); return matrix; }); auto print_matrix = create_identity_matrix.then([](array<array<int, 10>, 10> matrix) { for_each(begin(matrix), end(matrix), [](array<int, 10>& matrixRow) { wstring comma; for_each(begin(matrixRow), end(matrixRow), [&comma](int n) { wcout << comma << n; comma = L", "; }); wcout << endl; }); }); print_matrix.wait(); } /* Output: */
You can use the create_task function to create the equivalent operation.
auto create_identity_matrix = create_task([] { array<array<int, 10>, 10> matrix; int row = 0; for_each(begin(matrix), end(matrix), [&row](array<int, 10>& matrixRow) { fill(begin(matrixRow), end(matrixRow), 0); matrixRow[row] = 1; row++; }); return matrix; });
If an exception is thrown during the execution of a task, the runtime marshals that exception in the subsequent call to task::get or task::wait, or to a task-based continuation. For more information about the task exception-handling mechanism, see Exception Handling in the Concurrency Runtime.
For an example that uses task, concurrency::task_completion_event, cancellation, see Walkthrough: Connecting Using Tasks and XML HTTP Request (IXHR2). (The task_completion_event class is described later in this document.)
Tip
To learn details that are specific to tasks in Windows Store apps, see Asynchronous programming in C++ and Creating Asynchronous Operations in C++ for Windows Store Apps.
[Top]
Continuation Tasks
In asynchronous programming, it is very common for one asynchronous operation, on completion, to invoke a second operation and pass data to it. Traditionally, this is done by using callback methods. In the Concurrency Runtime, the same functionality is provided by continuation tasks. A continuation task (also known just as a continuation) is an asynchronous task that is invoked by another task, which is known as the antecedent, when the antecedent completes. By using continuations, you can:
Pass data from the antecedent to the continuation.
Specify the precise conditions under which the continuation is invoked or not invoked.
Cancel a continuation either before it starts or cooperatively while it is running.
Provide hints about how the continuation should be scheduled. (This applies to Windows Store apps only. For more information, see Creating Asynchronous Operations in C++ for Windows Store Apps.)
Invoke multiple continuations from the same antecedent.
Invoke one continuation when all or any of multiple antecedents complete.
Chain continuations one after another to any length.
Use a continuation to handle exceptions that are thrown by the antecedent.
These features enable you to execute one or more tasks when the first task completes. For example, you can create a continuation that compresses a file after the first task reads it from disk.
The following example modifies the previous one to use the concurrency::task::then method to schedule a continuation that prints the value of the antecedent task when it is available.
// basic-continuation.cpp // compile with: /EHsc #include <ppltasks.h> #include <iostream> using namespace concurrency; using namespace std; int wmain() { auto t = create_task([]() -> int { return 42; }); t.then([](int result) { wcout << result << endl; }).wait(); // Alternatively, you can chain the tasks directly and // eliminate the local variable. /*create_task([]() -> int { return 42; }).then([](int result) { wcout << result << endl; }).wait();*/ } /* Output: 42 */
You can chain and nest tasks to any length. A task can also have multiple continuations. The following example illustrates a basic continuation chain that increments the value of the previous task three times.
// continuation-chain.cpp // compile with: /EHsc #include <ppltasks.h> #include <iostream> using namespace concurrency; using namespace std; int wmain() { auto t = create_task([]() -> int { return 0; }); // Create a lambda that increments its input value. auto increment = [](int n) { return n + 1; }; // Run a chain of continuations and print the result. int result = t.then(increment).then(increment).then(increment).get(); wcout << result << endl; } /* Output: 3 */
A continuation can also return another task. If there is no cancellation, then this task is executed before the subsequent continuation. This technique is known as asynchronous unwrapping. Asynchronous unwrapping is useful when you want to perform additional work in the background, but do not want the current task to block the current thread. (This is common in Windows Store apps, where continuations can run on the UI thread). The following example shows three tasks. The first task returns another task that is run before a continuation task.
// async-unwrapping.cpp // compile with: /EHsc #include <ppltasks.h> #include <iostream> using namespace concurrency; using namespace std; int wmain() { auto t = create_task([]() { wcout << L"Task A" << endl; // Create an inner task that runs before any continuation // of the outer task. return create_task([]() { wcout << L"Task B" << endl; }); }); // Run and wait for a continuation of the outer task. t.then([]() { wcout << L"Task C" << endl; }).wait(); } /* Output: Task A Task B Task C */
Important
When a continuation of a task returns a nested task of type N, the resulting task has the type N, not task<N>, and completes when the nested task completes. In other words, the continuation performs the unwrapping of the nested task.
[Top]
Value-Based Versus Task-Based Continuations
Given a task object whose return type is T, you can provide a value of type T or task<T> to its continuation tasks. A continuation that takes type T is known as a value-based continuation. A value-based continuation is scheduled for execution when the antecedent task completes without error and is not canceled. A continuation that takes type task<T> as its parameter is known as a task-based continuation. A task-based continuation is always scheduled for execution when the antecedent task finishes, even when the antecedent task is canceled or throws an exception. You can then call task::get to get the result of the antecedent task. If the antecedent task was canceled, task::get throws concurrency::task_canceled. If the antecedent task threw an exception, task::get rethrows that exception. A task-based continuation is not marked as canceled when its antecedent task is canceled.
[Top]
Composing Tasks
This section describes the concurrency::when_all and concurrency::when_any functions, which can help you compose multiple tasks to implement common patterns.
The when_all Function
The when_all function produces a task that completes after a set of tasks complete. This function returns a std::vector object that contains the result of each task in the set. The following basic example uses when_all to create a task that represents the completion of three other tasks.
// join-tasks.cpp // compile with: /EHsc #include <ppltasks.h> #include <array> #include <iostream> using namespace concurrency; using namespace std; int wmain() { // Start multiple tasks. array<task<void>, 3> tasks = { create_task([] { wcout << L"Hello from taskA." << endl; }), create_task([] { wcout << L"Hello from taskB." << endl; }), create_task([] { wcout << L"Hello from taskC." << endl; }) }; auto joinTask = when_all(begin(tasks), end(tasks)); // Print a message from the joining thread. wcout << L"Hello from the joining thread." << endl; // Wait for the tasks to finish. joinTask.wait(); } /* Sample output: Hello from the joining thread. Hello from taskA. Hello from taskC. Hello from taskB. */
Note
The tasks that you pass to when_all must be uniform. In other words, they must all return the same type.
You can also use the && syntax to produce a task that completes after a set of tasks complete, as shown in the following example.
auto t = t1 && t2; // same as when_all
It is common to use a continuation together with when_all to perform action after a set of tasks finish. The following example modifies the previous one to print the sum of three tasks that each produces an int result.
// Start multiple tasks. array<task<int>, 3> tasks = { create_task([]() -> int { return 88; }), create_task([]() -> int { return 42; }), create_task([]() -> int { return 99; }) }; auto joinTask = when_all(begin(tasks), end(tasks)).then([](vector<int> results) { wcout << L"The sum is " << accumulate(begin(results), end(results), 0) << L'.' << endl; }); // Print a message from the joining thread. wcout << L"Hello from the joining thread." << endl; // Wait for the tasks to finish. joinTask.wait(); /* Output: Hello from the joining thread. The sum is 229. */
In this example, you can also specify task<vector<int>> to produce a task-based continuation.
If any task in a set of tasks is canceled or throws an exception, when_all immediately completes and does not wait for the remaining tasks to finish. If an exception is thrown, the runtime rethrows the exception when you call task::get or task::wait on the task object that when_all returns. If more than one task throws, the runtime chooses one of them. Therefore, ensure that you observe all exceptions after all tasks complete; an unhandled task exception causes the app to terminate.
Here’s a utility function that you can use to ensure that your program observes all exceptions. For each task in the provided range, observe_all_exceptions triggers any exception that occurred to be rethrown and swallows that exception.
Consider a Windows Store app that uses C++ and XAML and writes a set of files to disk. The following example shows how to use when_all and observe_all_exceptions to ensure that the program observes all exceptions.
To run this example
In MainPage.xaml, add a Button control.
In MainPage.xaml.h, add these forward declarations to the private section of the MainPage class declaration.
In MainPage.xaml.cpp, implement the Button_Click event handler.
In MainPage.xaml.cpp, implement WriteFilesAsync as shown in the example.
Tip
when_all is a non-blocking function that produces a task as its result. Unlike task::wait, it is safe to call this function in a Windows Store app on the ASTA (Application STA) thread.
[Top]
The when_any Function
The when_any function produces a task that completes when the first task in a set of tasks completes. This function returns a std::pair object that contains the result of the completed task and the index of that task in the set.
The when_any function is especially useful in the following scenarios:
Redundant operations. Consider an algorithm or operation that can be performed in many ways. You can use the when_any function to select the operation that finishes first and then cancel the remaining operations.
Interleaved operations. You can start multiple operations that all must finish and use the when_any function to process results as each operation finishes. After one operation finishes, you can start one or more additional tasks.
Throttled operations. You can use the when_any function to extend the previous scenario by limiting the number of concurrent operations.
Expired operations. You can use the when_any function to select between one or more tasks and a task that finishes after a specific time.
As with when_all, it is common to use a continuation that has when_any to perform action when the first in a set of tasks finish. The following basic example uses when_any to create a task that completes when the first of three other tasks completes.
// select-task.cpp // compile with: /EHsc #include <ppltasks.h> #include <array> #include <iostream> using namespace concurrency; using namespace std; int wmain() { // Start multiple tasks. array<task<int>, 3> tasks = { create_task([]() -> int { return 88; }), create_task([]() -> int { return 42; }), create_task([]() -> int { return 99; }) }; // Select the first to finish. when_any(begin(tasks), end(tasks)).then([](pair<int, size_t> result) { wcout << "First task to finish returns " << result.first << L" and has index " << result.second << L'.' << endl; }).wait(); } /* Sample output: First task to finish returns 42 and has index 1. */
In this example, you can also specify task<pair<int, size_t>> to produce a task-based continuation.
Note
As with when_all, the tasks that you pass to when_any must all return the same type.
You can also use the || syntax to produce a task that completes after the first task in a set of tasks completes, as shown in the following example.
auto t = t1 || t2; // same as when_any
Tip
As with when_all, when_any is non-blocking and is safe to call in a Windows Store app on the ASTA thread.
[Top]
Delayed Task Execution
It is sometimes necessary to delay the execution of a task until a condition is satisfied, or to start a task in response to an external event. For example, in asynchronous programming, you might have to start a task in response to an I/O completion event.
Two ways to accomplish this are to use a continuation or to start a task and wait on an event inside the task’s work function. However, there are cases where is it not possible to use one of these techniques. For example, in order to create a continuation, you must have the antecedent task. However, if you do not have the antecedent task, you can create a task completion event and later chain that completion event to the antecedent task when it becomes available. In addition, because a waiting task also blocks a thread, you can use task completion events to perform work when an asynchronous operation completes, and therefore free a thread.
The concurrency::task_completion_event class helps simplify such composition of tasks. Like the task class, the type parameter T is the type of the result that is produced by the task. This type can be void if the task does not return a value. T cannot use the const modifier. Typically, a task_completion_event object is provided to a thread or task that will signal it when the value for it becomes available. At the same time, one or more tasks are set as listeners of that event. When the event is set, the listener tasks complete and their continuations are scheduled to run.
For an example that uses task_completion_event to implement a task that completes after a delay, see How to: Create a Task that Completes After a Delay.
[Top]
Task Groups the tasks that run in these groups. The task_handle class encapsulates the code that performs work. Like the task class, the work function comes in the form of a lambda function, function pointer, or function.
Important
The PPL also defines the concurrency::parallel_invoke algorithm, which uses the structured_task_group class to execute a set of tasks in parallel. Because the parallel_invoke algorithm has a more succinct syntax, we recommend that you use it instead of the structured_task_group class when you can. The topic Parallel Algorithms describes parallel_invoke in greater detail.
Use parallel_invoke when you have several independent tasks that you want to execute at the same time, and you must wait for all tasks to finish before you continue. This technique is often referred to as fork and join parallelism..
[Top]
Comparing task_group to structured_task_group
Although we recommend that you use task_group or parallel_invoke instead of the structured_task_group class, there are cases where you, if you run additional tasks on a structured_task_group object after you call the concurrency::structured_task_group::wait or concurrency::structured_task_group::run_and_wait methods, then the behavior is undefined..
[Top]
Example:
Message from task: Hello Message from task: 3.14 Message from task: 42.
[Top]
Robust Programming
Make sure that you understand the role of cancellation and exception handling when you use tasks,.
[Top]
Related Topics
Reference
task Class (Concurrency Runtime)
task_completion_event Class
structured_task_group Class | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2012/dd492427(v%3Dvs.110) | CC-MAIN-2019-35 | refinedweb | 3,802 | 55.54 |
Follow-up: Razor with F# and Other Languages
- |
-
-
-
-
-
-
-
Read later
My Reading List.
The first challenge is the syntax. Unlike the older ASP-style view engine, the boundaries between view and inline code are quite subtle and it is quite possible that many languages are syntactically incompatible. According to Scott Guthrie, “Razor can conceptually support F# (meaning the syntax is such that it could work well)”. He then turned over the conversation to Marcin Dobosz who explained what would be necessary to actually wire it in.
On the pure Razor side of things (System.Web.Razor.dll), you would need to implement your own classes derived from CodeParser, RazorCodeGenerator, and RazorCodeLanguage (as well as any other utlity classes that are necessary) and then register your file’s extension with RazorCodeLanguage.Languages. We don’t have any tutorials for implementing your own language so you would have to look at the sources to see what the CSharp or VB-related classes do.
On the MVC side (but only if you want support for @model) you would need to derive from MvcWebPageRazorHost and return MVC-specific parser and generator that derive from your basic parser and generator. Once again take a look at the sources of classes in the System.Web.Mvc.Razor namespace. Or you could implement this in your basic parser and generator, making the whole thing only work in MVC projects.
Of course this is just the bare minimum to get Razor working with a new language. If you were going to spend the effort to write the parser and generator, you probably want to take the extra step to create the project, item, and T4 templates needed that Visual Studio needs for a pleasant experience.
The source code for the October 2010 beta of ASP.NET MVC 3 is available on Codeplex. It is released under the “Microsoft Source License for ASP.NET Pre-Release Components”.
Rate this Article
- Editor Review
- Chief Editor Action | https://www.infoq.com/news/2011/01/Razor-Extensions | CC-MAIN-2016-30 | refinedweb | 326 | 54.02 |
Hi
If i start Rhino and press a custom Button which executes a custom Python script I get an error message:
“Message: expected an indent Block …” which is totaly misleading as the error does not have anything to do with indentation. Instead I have to open the Python Editor and click on the left side, first on then on" rhinoscriptsyntax" then on" application" and sometimes also on “Rhino” to load the modules, depending on which module is imported in my script. After that everything works fine. How can I get Rhino to load all the native modules by default?
sorry if this has been discussed before
Ralph
Hi
@ralph_schoeneberg, hard to say what’s wrong without any code to test. Which modules do you import ? By default, Rhino does not import modules automatically and some searchpaths are required so the modules can be imported. Do you see a similar problem when running your script from the python editor ?
Please post the code and probably a screenshot of this dialog:
_EditPythonScript > Tools > Options > Files
Probably by creating a script which is called using a startup command. You can set them up from:
_Options > Rhino Options > General > Command Lists
But i would first try to investigate why loading things from a button fails.
_
c.
The standard library should be referenced by default, you can try to run
help("modules") to output a complete list of available modules on your system:
If you’re not seeing the standard library here, that could be a sign of a botched installation. You can try to restore the defaults paths:
And perhaps inspect the folders where the Rhino IronPython distribution is located to verify that all the modules indeed are there (i.e.
C:\Program Files\Rhinoceros 5.0\Plug-ins\IronPython\Lib)
Hello
Thank you for your answer. I forgot to mention, I am using Rhino 6.
this is a screenshot from my script, nothin fancy, and the search path window.
the error goes away after clicking on “rhinoscriptsyntax->application”, I also clicked on “Restore RhinoScriptSyntaxFiles”
It’s not depending on the script, it is with every script, and the button thing is also not important as it happens in the python editor too.
the command help(“modules”) yields:
no documentation found for ‘modules’
I tried on my second Computer, and there the script works, so it must be a configuration on my side. Although the command help(“modules”) also gives a “no documentation found for ‘modules’”
here i the script as text, if you want to try it out
import rhinoscriptsyntax as rs
if( name == ‘main’ ):
sel = rs.SelectedObjects()[0]
if(sel):
if rs.IsBlockInstance(sel):
BlockInstanceName = rs.BlockInstanceName(sel)
BlockInstanceName = BlockInstanceName.replace(" “,”")
rs.EnableRedraw(False)
filepath = “E:\work\test\fbx\”
filename = filepath + BlockInstanceName
rs.Command("-Export " + filename + “.fbx _Enter”)
rs.EnableRedraw(True)
else:
print “nothing selected” | https://discourse.mcneel.com/t/rhino-does-not-load-native-python-modules-at-start/67358 | CC-MAIN-2022-33 | refinedweb | 474 | 53.51 |
simply baby oil
£13.00 – £26.00
our gentle, original baby oil, calming and soothing for sensitive skin
Our calming baby oil which works to protect and moisturise the skin, by combining nourishing argan oil with soothing chamomile, this oil is perfect for sensitive. Fantastic for massaging babies, a lovely way to bond with your baby.
ingredients
argania spinosa kernel oil, anthemis nobilis flower extract, citral, geraniol, farnesol, linalool, citronellol, d-limonene
import information
This product may not be suitable if your baby has a nut or skin allergy. Avoid getting into the eyes and wash out thoroughly with water if this occurs.
Natalja Ziliajeva –
Really good products. Love the smell of baby oil. But the colour or red ruby lip balm is horrendous, its not ruby but ginger rusty. Other than that all good
Jane –
My baby girl was a few weeks prem and had very dry skin. I used the simply baby oil in her bath every night and before long her skin was healthy and peachy! | https://simplyargan.co.uk/product/simply-baby-oil | CC-MAIN-2021-31 | refinedweb | 169 | 63.09 |
Hi All,
I have tried Extended plugin...download Extended plug-ins for 5th edition ...download from forum Nokia tools
...
Type: Posts; User: parag_purkar; Keyword(s):
Hi All,
I have tried Extended plugin...download Extended plug-ins for 5th edition ...download from forum Nokia tools
...
Then use devices command to set your default SDK. For example:
devices -setdefault
Regards,
Parag Purkar
Hi All,
Any ways just trying with Wiki
Let me try with this !
Configuration for this is tricky one please post your reply on configurations step by step !!!
Many Thanks !!!
Hi Kishor,
Thanks for reply ! But still some questions in mind !
In SDK help ==>
#include <SyncMLClientDS.h>
Link against: smlclient.lib
This item is not part of the S60 5th Edition SDK
Hi,
Is it necessory to have platinum partnership for SyncML in 5th edition ?
Please refer example if any ?
Regards,
Parag Purkar
Hi ,
Please check your building steps once again !!! & tell us about error you are getting on console.
Hi ,
Please check below link !
Check it
Parag
Please try with it
It should work if all settings are correct ...! Please tell abt ur setting more here ..so that every body understand it !
Hi
Please check ur .mbg file entries. This file has been generated, DO NOT MODIFY
some thing like below may solve ur problem ;)
Regards,
Parag Purkar
Did u check wiki if not please check this
Is there have a way to show scrollbar in CEikRichTextEditor automatically?
What "automatically" means ????
Anyways ...Check with CEikScrollBarFrame..
you can creat scrollbar for editor...
check CEikLabel class...
Check on wiki
I am not getting what is "Ring" here ...Anyways
May be This Link will help you !
Please tell us which setting page API are you currently using ...images are also there ...
Use KMsvGlobalOutBoxIndexEntryId you'll get messages in outbox (these are about to be sent)..and delete them...
change the lines:
But here I am not sure that other listeners are triggered...
What panic code you are getting ? Search the meaning of this panic code ....
you means radio buttons ?
Creating_a_radio_button_settings
SMS_Utilities_API
Regards,
Parag
Preethi...
1. Yes you can split the sis filesinto sis files .. As it is already suggested by Yucca ...check ECom
OR
Suppose if main application as SIS and then loading the content then put...
Hi Rahul,
Refer "On-Target Debugging with Carbide.c++.pdf "
For N73 Path for Trk.sisx
Carbide.c++ v1.3\plugins\com.nokia.carbide.trk.support_1.3.0.019\trk\s60\s60_3_0_app_trk_2_8_5.sisx
...
Hi Preethi,
Why it is that much huge ?
should have enough memory to receive the big file before saving it somewhere in the phone. I don't to know how you are planning for it !
Using MMC...
Pls check on Wiki
You mean to say File Server ? | http://developer.nokia.com/community/discussion/search.php?s=0f104fe42f2f25b309a1a50b17296bb1&searchid=1953141 | CC-MAIN-2014-10 | refinedweb | 455 | 79.36 |
's the source for fmt_vuln.c
-----------------------------------------
#include <stdlib.h>
int main(int argc, char *argv[])
{
char text[1024];
static int test_val = -72;
if(argc < 2)
{
printf("Usage: %s <text to print>\n", argv[0]);
exit(0);
}
strcpy(text, argv[1]);
printf("The right way:\n");
// The right wat to print user-controlled input:
printf("%s", text);
// ---------------------------------------------
printf("\nThe wrong way:\n");
// The wrong way to print user-controlled input:
printf(text);
// ---------------------------------------------
printf("\n");
// Debug output
printf("[*] test_val @ 0x%08x = %d 0x%08x\n", &test_val, test_val, test_val);
exit(0);
}
Kind of off the point, but wouldn't:
char text[1024]; .. strcpy(text, argv[1]);
Be ripe for a little buffer overflow action ?
Anyway, I don't have a windows compiler so I can't convert this to the ASM as needed. But looking at the C, if I had to make an educated guess, I would say that the reason for the junk is the program is that it may be trying to overwrite a specific portion of the program to put in your own addresses and the junk is necessary to make it come out right on the stack. Its really hard to say though without seeing it. I recommend you download OllyDbg or IDA, compile the program and load the EXE there and set the breakpoint at the function call, then you can watch it in action.
nebulus, I'm using Gentoo Linux, not Windows (Why did you assume that )
I have already tried viewing stuff using gdb but couldn't get anything. Perhaps its because when I installed Gentoo I put compiler optimizations including -omit-frame-pointer in the make.conf file. Can I disable this while compiling a file, if yes then how?
Here's the ASM created by gcc without the gdb (-g) option:
.file "fmt_vuln.c"
.data
.align 4
.type test_val.0, @object
.size test_val.0, 4
test_val.0:
.long -72
.section .rodata
.LC0:
.string "Usage: %s <text to print>\n"
.LC1:
.string "The right way:\n"
.LC2:
.string "%s"
.LC3:
.string "\nThe wrong way:\n"
.LC4:
.string "\n"
.align 4
.LC5:
.string "[*] test_val @ 0x%08x = %d 0x%08x\n"
.text
.globl main
.type main, @function
main:
pushl %ebp
movl %esp, %ebp
subl $1032, %esp
andl $-16, %esp
movl $0, %eax
addl $15, %eax
addl $15, %eax
shrl $4, %eax
sall $4, %eax
subl %eax, %esp
cmpl $1, 8(%ebp)
jg .L2
subl $8, %esp
movl 12(%ebp), %eax
pushl (%eax)
pushl $.LC0
call printf
addl $16, %esp
subl $12, %esp
pushl $0
call exit
.L2:
subl $8, %esp
movl 12(%ebp), %eax
addl $4, %eax
pushl (%eax)
leal -1032(%ebp), %eax
pushl %eax
call strcpy
addl $16, %esp
subl $12, %esp
pushl $.LC1
call printf
addl $16, %esp
subl $8, %esp
leal -1032(%ebp), %eax
pushl %eax
pushl $.LC2
call printf
addl $16, %esp
subl $12, %esp
pushl $.LC3
call printf
addl $16, %esp
subl $12, %esp
leal -1032(%ebp), %eax
pushl %eax
call printf
addl $16, %esp
subl $12, %esp
pushl $.LC4
call printf
addl $16, %esp
pushl test_val.0
pushl test_val.0
pushl $test_val.0
pushl $.LC5
call printf
addl $16, %esp
subl $12, %esp
pushl $0
call exit
.size main, .-main
.section .note.GNU-stack,"",@progbits
.ident "GCC: (GNU) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)"
Ah, some progress!! This is very interesting:
The following 2 commands do the same thing, i.e. print 0x0000bbaa to the memory address.
.
So 146 in the second makes sense as compared to 142 in the first because the second does not have 4 bytes of JUNK.
In the first, we have %17x after the first %n because (0xbb - 0xaa) is 17. In the second we achive the same purpose with 12345678912345678 which is 17 bytes.
Now the confusing part is that the following command does not work (segmentation fault) even though it seems it should, looking at the above two commands:
./fmt_vuln `printf "\x70\x97\x04\x08\x71\x97\x04\x08"`%x%x%146x%n%17x%n
This command should do the same thing as the one with 12345678912345678 right!!
Not exactly. Using %x doesnt let you control the bytes it lets you control the # bytes output by converting to hex and padding the parameter thats popped from the format function stack (it will always read sizeof(int) usually 4 bytes from the stack). Dont forget that each time you %x you are changing the format functions stack pointer, and are walking down the stack. So it performs two functions, it allows you to manipulate the number of bytes outputted by the printf (which you use for %n to write) but it also is how you move back (or 'dig up') through the stack to insure your address is being pointed at when the %n writes. The JUNK may be there just so you can pop them off and add 17 to the output counter. Try either: w/o the JUNK or %17x's (just use 17 fillers like above) or: with the pops(%17x's) and the JUNKs added after the addresses. Let me know how it works out, I'm curious.
OH! So %17x would go down the memory only once and pad it, I knew that %x would go down the memory by 4 bytes and it seemed obvious that %17x would thus go down by 17*4 = 68 bytes!!
The two conditions you want (below), as I said in the last post, do work. And now we know why it does.
.
And now we know why the following doesn't work, since the %17x makes the stack grow and hence overshoot the 2nd address.
./fmt_vuln `printf "\x70\x97\x04\x08\x71\x97\x04\x08"`%x%x%146x%n%17x%n
Maestr0 has saved the day. Actually more than a day!
Forum Rules | http://www.antionline.com/showthread.php?266207-format-string-Question/page2 | CC-MAIN-2015-22 | refinedweb | 966 | 79.3 |
Hi,
This is my first post and I have to admit FMOD is a wonderfull piece of software… love it 😀
I’m close to finishing my app but I’m stuck near the end.
I have made an OCX in VB6 with fmodex.dll 0.4.6.12, everthing runs great but when closing the ocx (quiting ie explorer where it runs) VB crashes.
Of course I read many topics on this and tried the FMOD_System_Close(system) but when calling this I receive a error:
FMOD error!(70) This command failed because System::init or System::SetDriver was not called.
Please be gentle, I’m a real newbe with not much experience in coding. I want to know how should exit in order to prevent this.
Kind regards,
ShAdOwHuNtEr
- ShadowHunter asked 12 years ago
- You must login to post comments
Can’t help feeling a little bit dissapointed in this forum 😥
Nobody ever had a similar problem ?
Are you sure that the system variable you use for closing fmod is the same as the one you used for initializing? (it has to be global in your ocx)
- Adion answered 12 years ago
Think last trying doing something similar, it appears vb6 had some strange oppinions when to initialize/terminate usercontrols or not.
Maybe try debugging if it really attempts to load and unload once.
Also don’t unload if it didn’t load.
Hi Adion,
Yes, I used the same, even changed it to "systemID" because some think "system" is a reserved namespace in VB6.
@Controller, indeed there lies the problem, I can use "unload me", it will result in an error, also "End" does noet work for OCX.
How must I terminate/unload a OCX ?
Kind regards,
ShAdOwHuNtEr
VB6 does this automatically when loading/unloading the parent form, but also when showing/hiding in the IDE | https://www.fmod.org/questions/question/forum-21814/ | CC-MAIN-2018-39 | refinedweb | 307 | 70.02 |
I'm making a system where the server asks questions to the client and the client responds. Then the marks out of 3 that the client had got is to be displayed. Everythign in my code works to the way i want it to, except the display of the marks. I have made arrays in which the final mark should relate to. But they always give the same output which is "You got 0/3 correct, Try Harder." instead of whichever one its meant to print out, someone help me, ive spent hours on this. I would also love to ass a password to the system.
PROTOCOL
Code :
import java.net.*; import java.io.*; public class MathsProtocol { private static final int FIRSTstate = 0; private static final int SECONDstate = 1; private static final int THIRDstate = 2; private static final int FOURTHstate = 3; private static final int FIFTHstate = 4; private static final int SIXTHstate = 5; private static final int SEVENTHstate = 6; private static final int NUMques = 7; private int state = FIRSTstate; private int currentQues = 0; private String[] clues = { "You got 0/3 correct, Try Harder.", "You got 1/3 correct, You could do better.", "You got 2/3 correct, Average.", "You got 3/3 correct, You smart pants.",}; public String processInput(String theInput) { String theOutput = null; int Q1Res = 0; int Q2Res = 0; int Q3Res = 0; int TOTALRes = 0; //QUESTION 1----------------------------------------------------------------------------- if (state == FIRSTstate) { theOutput = ("Q1: (A + B)*(A+B)" + " 1.A*A+B*B "+ " 2.A*A+A*B+B*B " + " 3.A*A+2*A*B+B*B "); state = SECONDstate; } else if (state == SECONDstate) { if (theInput.equalsIgnoreCase("3")) { theOutput = "That is the correct answer"; Q1Res++; state = THIRDstate; } else { theOutput = " WRONG!! TRY AGAIN"; } // QUESTION 2----------------------------------------------------------------------------- if (state == THIRDstate) { theOutput = "CORRECT!! Q2: (A+B)*(A-B) =" + " 1) A*A+2*B*B" + " 2) A*A-B*B " + " 3) A*A-2*A*B+B*B"; state = FOURTHstate; } //---------- } else if (state == FOURTHstate) { if (theInput.equalsIgnoreCase("2")) { theOutput = "That is the correct answer"; state = FIFTHstate; } else { theOutput = " WRONG!!Try again "; state = FOURTHstate; } } //QUESTION 3----------------------------------------------------------------------------- if (state == FIFTHstate) { theOutput = ("CORRECT!! Q3: sin(x)*sin(x) + cos(x)*cos(x) 1? 2? or 3?"); state = SIXTHstate; } else if (state == SIXTHstate) { if (theInput.equalsIgnoreCase("1")) { theOutput = "That is the correct answer, go again? (y/n)"; Q3Res++; state = SEVENTHstate; } else { theOutput = " WRONG!! TRY AGAIN"; state = SIXTHstate; } TOTALRes = Q1Res; theOutput = ("you got")+clues[Q1Res+Q2Res]; //--------------- } else if (state == SEVENTHstate) { if (theInput.equalsIgnoreCase("y")) { theOutput = "Q1: (A + B)*(A+B)" + " 1.A*A+B*B "+ " 2.A*A+A*B+B*B " + " 3.A*A+2*A*B+B*B "; if (currentQues == (NUMques - 1)) currentQues = 0; else currentQues++; state = SECONDstate; } else { theOutput = "Bye."; state = FIRSTstate; } } return theOutput; } } | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/24802-client-protocol-server-printingthethread.html | CC-MAIN-2014-10 | refinedweb | 446 | 66.23 |
Ok, so I redirected User docs to a new share on a DFS namespace. From my client I can ping both Servers in the namespace. On server one I can use the UNC path:
\\server1\users
and hit the share
on server 2 I can only hit it by using
\\IPAddress\users
I verified DNS is fine, but when I try
\\server2\users
it shows this screen shot.
What could be the issue? I have a few other interesting screens to post as well.
May 4, 2010 at 7:51 UTC
So stupid. Check this out. . . The firewall is off but I still had to add an exception for remote management and routing and remote access. Once I did that, it took me there via UNC just fine. WTH?
13 Replies
May 4, 2010 at 7:41 UTC
When I hit:
\\domain-name\users
It only shows my user's folder with docs, not everyone, but if I hit:
\\server1\users
or
\\server 2 ip add\users
It shows all users correctly. This is weird to me as I have set the shares to everyone has full control (to try and solve this), and they are redirected via GPO to:
\\domain-name\users
not to:
\\server1or2\users
Another screen under the DFS on server 2:
May 4, 2010 at 7:42 UTC
I'm afraid I may have setup the DFS in the wrong order or something because it is shared but will not publish to the namespace. Geez
May 4, 2010 at 7:43 UTC
When I open up \\domain-name
The folder shows up in the namespace
May 4, 2010 at 7:44 UTC
Also I noticed a few users docs redirection creates the folder, but inside there are not docs, just a few temp files
I.E.
FRDxx.tmp
0KB
May 4, 2010 at 7:48 UTC
Actually I think the one user with the .tmp files simply hasn't rebooted. I just checked and she is the ONLY user with this issue. A week without a reboot. Geez again hehe
May 4, 2010 at 7:51 UTC
So stupid. Check this out. . . The firewall is off but I still had to add an exception for remote management and routing and remote access. Once I did that, it took me there via UNC just fine. WTH?
May 4, 2010 at 7:53 UTC
I set it to notify me when it blocks something (even though it was off) boom. Gave me the answer. How dumb. A firewall that doesn't turn off. Thanks M$!!
May 4, 2010 at 8:18 UTC
Well, glad we could help!! ;)
May 4, 2010 at 8:33 UTC
Lol, yeah sorry. I tend to jump the gun on posting questions to resolve it quickly and you guys are certainly a valuable resource for me. I was fighting it since Friday though. Its always the simple stuff that nails you.
May 4, 2010 at 9:43 UTC
Quite simply the best/funniest thread I've read in a while - quick question Trivious - Do you walk around having conversations with yourself?
May 4, 2010 at 9:48 UTC
Many times when I troubleshoot I do hehe. It helped me with the logic for some reason. Plus, I don't want others wasting time when posting something I have already tried. Also if someone else runs into an issue I have posted about, it helps them follow the trail and makes it easier on them to see my though process. I think its a good policy and I do this on lots of threads for that reason.
May 4, 2010 at 2:27 UTC
Hey, it's better than posting a question then never being heard from again. I'm sure someone will find this post very useful at some point.
May 5, 2010 at 6:59 UTC
Hey, it's better than posting a question then never being heard from again. I'm sure someone will find this post very useful at some point.
Where's the SW auto-converter to "How-To" button lol? | https://community.spiceworks.com/topic/97505-cannot-hit-share-on-dfs-on-pdc-but-i-can-ping-the-dc-and-the-folder-is-shared | CC-MAIN-2016-50 | refinedweb | 681 | 88.67 |
Recent:
Archives:
As an application developer, how and where you put the preferences vary depending on your application, the platform it runs on, and the programming language. If your application will run only on Windows, then Registry is the place to put the preferences. Developers writing applications written in portable C/C++, on the other hand, usually put their preferences in files. Bigger, server-side applications might store them in a database (although usually something needs to be stored in a file -- the connection string to the database, for example).
Therefore, when designing your application, you'll always face the same preference management questions: Where do we store them? How? Is maintaining them easy enough? Can we easily move them to another platform when necessary? If Java is your programming language, you can follow several approaches in answering these questions. I'll compare the traditional approaches, then discuss in detail the latest option -- J2SE (Java 2, Standard Edition) 1.4's new Preferences API. Enjoy!
Ahhh -- good old
Properties. It has been around since Java's beginning, JDK 1.0 to be exact. It's easy to use: just put the preferences into the
Properties object and store them in a file. Later, when you need to get them back, load it from the file, and voila, you have a
Properties object containing the values you previously stored in the file. It's as simple as that. What could possibly be wrong with
this approach?
For simple applications with just a few preferences,
java.util.Properties works just fine. However, as the application grows in size, with the Properties API you can develop problems such as:
To solve the second problem, you'd typically create your own hierarchy out of the flat namespace. That is, you'd create names like:
myapp.payment.SSLPort=10256
myapp.payment.SETPort=10257
myapp.admin.listenPort=10258
Of course, the Properties API possesses no awareness of this hierarchy, so you'd have to add your own code on top of it. However, this approach also proves unsafe: if someone hand-edits the file and replaces a period with a comma in one of the names above, who knows what will happen to your application?
In other words, managing and maintaining preferences using the Properties API can quickly become a nightmare -- it simply doesn't scale! Besides, using the Properties API this way won't work with a platform without a local disk. Now let's take a look at another approach that solves most of these problems. | http://www.javaworld.com/javaworld/jw-08-2001/jw-0831-preferences.html | crawl-002 | refinedweb | 422 | 56.05 |
Implementing Threaded Comments in LifeFlowdjango (72), lifeflow (20)
Recently I decided that I wanted to add threaded comments to LifeFlow. This is mostly because the commenting system thus far has caused (readers) relatively more suffering than my conscience can cope with. And I hoped that adding threaded comments would somehow balance out this travesty I have inflicted on you dear readers. Probably just a lie I tell myself.
Regardless, I wanted to implement a robust comment threading system. I had two design goals in mind:
- As little extra data to store in the database as possible.
- Keep the computational complexity reasonable (it should be able to easily handle ~500 comments per post).
I thought up a couple of different ways to approach this. One way I didn't think to approach it was storing the hierarchical data in the database. I don't like the idea of storing the full path in the database. It also seems like their method would make rearranging comments rather complex (I readily admit that there aren't many situations that require arbitrary rearranging of comments. Unless you implement voting or have some other form of comment promotion based on merit).
What data do I need to store?
The only extra piece of data I decided to store is the id of the parent comment. (The other piece of data I need is what entry the comment is associated with, but I was already storing that.)
The challenge
Because I am not storing paths in the database, but am instead only storing each node's parent, my real challenge is recreating the structure of the comments using that not-perfectly-suited datum.
A simple, although imperfect answer
So here is my solution (in Django code):
def organize_comments(self): def build_relations(dict, comment=None): if comment is None: id = None else: id = comment.id try: children = dict[id] return [comment, [build_relations(dict, x) for x in children]] except: return comment dict = {None:[]} all = Comment.objects.select_related().filter(entry=self) for comment in all: if comment.parent: id = comment.parent.id else: id = None try: dict[id].append(comment) except KeyError: dict[id] = [comment] relations = build_relations(dict) if len(relations) == 1: return None else: return relations[1]
Basically I build the comment's structure in a hashmap (with key None representing a comment without a parent), and then use that hashmap to recursively build a list containing that structure.
The reason that this can be done fairly efficiently is that the select_related() method (in the Django SVN, not available in 0.96 or earlier) batch fetches related objects, so we will only be hitting the database once, despite all of our tomfoolery.
Its even tail-recursive, which would be meaninful if Python gave a damn about tail recursion. Eventually I'll have to convert the recursion into a loop of some sort.
So, LifeFlow now has support for threaded comments, at least at the model level. I still have to revamp the templates and views to expose the new functionality to users.
Update 1/6/08
Threaded comments are now 90% implemented. Everything is implemented except a visual guide to the threading. That is going to require some custom tagging. You can see the rather awkward implementation of the comment threading in the models.py file from LifeFlow. Its under the organize_comments method of the Entry model.
updated 1/7/08
Have the visuals implemented as well now. All in all its working pretty well. I ended up writing a quick custom tag that is used to throttle the depth of comments.
from django import template register = template.Library()
def boundary(value, arg): """Defines a boundary for an integer. If the value of the integer is higher than the boundary, then the boundary is returned instead.
Example: { { comment.depth|:"4" }} will return 4 if the value of comment.depth is 4 or higher, but will return 1, 2 or 3 if the value of comment.depth is 1, 2 or 3 respectively. """ value = int(value) boundary = int(arg) if value > boundary: return boundary else: return value
register.filter('boundary', boundary)
I use this to throttle the comments to a depth of 5. Anything deeper will be displayed at the level five depth. This makes it much easier to provide the visual look for comments (don't have to do anything dynamic), and seems like a win to me. It also means that comments won't run far away into the right-margin sunset. Also a win.
You can take a look at the css or the template to get a more complete view of the implementation (and the model is still here). The template needs to be aligned a bit more consistently, but then again that template is for my custom remake of the LifeFlow templates, and I don't expect it to recieve a whole lot of use from others... but thats just an excuse.
Upcoming Changes
I will still need to revamp the comments to play nicely with Internet Explorerer (the submission part, that is), and also I am looking into invalidating the cached blog entry whenever a new comment is added, because its kind of frustrating for users to not see their comments until a few minutes have passed. | https://lethain.com/implementing-threaded-comments-lifeflow/ | CC-MAIN-2021-39 | refinedweb | 875 | 55.64 |
operators, comparison operators, ternary operator, switch, for, while, break, continue, do while, polymorphism, arrays, for each, multidimensional arrays and more.
If you like videos like this it helps my Google search rank when you share on Google Plus with a click here [googleplusone]
If you prefer a slower Java tutorial I have one here. Here I show you how to install Eclipse and Java.
Java Programming Code
// A Single line comment /* A * Multiple line * comment */ // You can import libraries with helpful methods using import import java.util.Scanner; import java.util.*; // A class defines the attributes (fields) and capabilities (methods) of a real world object public class Animal { // static means this number is shared by all objects of type Animal // final means that this value can't be changed public static final double FAVNUMBER = 1.6180; // Variables (Fields) start with a letter, underscore or $ // Private fields can only be accessed by other methods in the class // Strings are objects that hold a series of characters private String name; // An integer can hold values from -2 ^ 31 to (2 ^ 31) -1 private int weight; // Booleans have a value of true or false private boolean hasOwner = false; // Bytes can hold the values between -128 to 127 private byte age; // Longs can hold the values between -2 ^ 63 to (2 ^ 63) - 1 private long uniqueID; // Chars are unsigned ints that represent UTF-16 codes from 0 to 65,535 private char favoriteChar; // Doubles are 64 bit IEEE 754 floating points with decimal values private double speed; // Floats are 32 bit IEEE 754 floating points with decimal values private float height; // Static variables have the same value for every object // Any variable or function that doesn't make sense for an object to have should be made static // protected means that this value can only be accessed by other code in the same package // or by subclasses in other packages protected static int numberOfAnimals = 0; // A Scanner object allows you to except user input from the keyboard static Scanner userInput = new Scanner(System.in); // Any time an Animal object is created this function called the constructor is called // to initialize the object public Animal(){ // Shorthand for numberOfAnimals = numberOfAnimals + 1; numberOfAnimals++; int sumOfNumbers = 5 + 1; System.out.println("5 + 1 = " + sumOfNumbers); int diffOfNumbers = 5 - 1; System.out.println("5 - 1 = " + diffOfNumbers); int multOfNumbers = 5 * 1; System.out.println("5 * 1 = " + multOfNumbers); int divOfNumbers = 5 / 1; System.out.println("5 / 1 = " + divOfNumbers); int modOfNumbers = 5 % 3; System.out.println("5 % 3 = " + modOfNumbers); // print is used to print to the screen, but it doesn't end with a newline \n System.out.print("Enter the name: \n"); // The if statement performs the actions between the { } if the condition is true // userInput.hasNextLine() returns true if a String was entered in the keyboard if(userInput.hasNextLine()){ // this provides you with a way to refer to the object itself // userInput.nextLine() returns the value that was entered at the keyboard this.setName(userInput.nextLine()); // hasNextInt, hasNextFloat, hasNextDouble, hasNextBoolean, hasNextByte, // hasNextLong, nextInt, nextDouble, nextFloat, nextBoolean, etc. } this.setFavoriteChar(); this.setUniqueID(); } // It is good to use getter and setter methods so that you can protect your data // In Eclipse Right Click -> Source -> Generate Getter and Setters public String getName() { return name; } public void setName(String name) { this.name = name; } public int getWeight() { return weight; } public void setWeight(int weight) { this.weight = weight; } public boolean isHasOwner() { return hasOwner; } public void setHasOwner(boolean hasOwner) { this.hasOwner = hasOwner; } public byte getAge() { return age; } public void setAge(byte age) { this.age = age; } public long getUniqueID() { return uniqueID; } // Method overloading allows you to accept different input with the same method name public void setUniqueID(long uniqueID) { this.uniqueID = uniqueID; System.out.println("Unique ID set to: " + this.uniqueID); } public void setUniqueID() { long minNumber = 1; long maxNumber = 1000000; // Generates a random number between 1 and 1000000 this.uniqueID = minNumber + (long)(Math.random() * ((maxNumber - minNumber) + 1)); // You can cast from one primitive value into another by putting what you want between ( ) // (byte) (short) (long) (double) // (float), (boolean) & (char) don't work. // (char) stays as a number instead of a character // You convert from a primitive to a string like this String stringNumber = Long.toString(maxNumber); // Byte.toString(bigByte); Short.toString(bigShort); Integer.toString(bigInt); // Float.toString(bigFloat); Double.toString(bigDouble); Boolean.toString(trueOrFalse); // You convert from a String to a primitive like this int numberString = Integer.parseInt(stringNumber); // parseShort, parseLong, parseByte, parseFloat, parseDouble, parseBoolean System.out.println("Unique ID set to: " + this.uniqueID); } public char getFavoriteChar() { return favoriteChar; } public void setFavoriteChar(char favoriteChar) { this.favoriteChar = favoriteChar; } public void setFavoriteChar() { int randomNumber = (int) (Math.random() * 126) + 1; this.favoriteChar = (char) randomNumber; // if then else statement // > < == != >= <= if(randomNumber == 32){ System.out.println("Favorite character set to: Space"); } else if(randomNumber == 10){ System.out.println("Favorite character set to: New Line"); } else { System.out.println("Favorite character set to: " + this.favoriteChar); } //((randomNumber > 97) && (randomNumber < 122)){ System.out.println("Favorite character is a lowercase letter"); } if(((randomNumber > 97) && (randomNumber < 122)) || ((randomNumber > 64) && (randomNumber < 91))){ System.out.println("Favorite character is a letter"); } if(!false){ System.out.println("I turned false to " + !false); } // The ternary operator assigns one or another value based on a condition int whichIsBigger = (50 > randomNumber) ? 50 : randomNumber; System.out.println("The biggest number is " + whichIsBigger); // The switch statement is great for when you have a limited number of values // and the values are int, byte, or char unless you have Java 7 which allows Strings switch(randomNumber){ case 8 : System.out.println("Favorite character set to: Backspace"); break; case 9 : System.out.println("Favorite character set to: Horizontal Tab"); break; case 10 : case 11 : case 12 : System.out.println("Favorite character set to: Something else weird"); break; default : System.out.println("Favorite character set to: " + this.favoriteChar); break; } } public double getSpeed() { return speed; } public void setSpeed(double speed) { this.speed = speed; } public float getHeight() { return height; } public void setHeight(float height) { this.height = height; } protected static int getNumberOfAnimals() { return numberOfAnimals; } // Since numberOfAnimals is Static you must set the value using the class name public void setNumberOfAnimals(int numberOfAnimals) { Animal.numberOfAnimals = numberOfAnimals; } protected static void countTo(int startingNumber){ for(int i = startingNumber; i <= 100; i++){ // continue is used to skip 1 iteration of the loop if(i == 90) continue; System.out.println(i); } } protected static String printNumbers(int maxNumbers){ int i = 1; while(i < (maxNumbers / 2)){ System.out.println(i); i++; // This isn't needed, but if you want to jump out of a loop use break if(i == (maxNumbers/2)) break; } Animal.countTo(maxNumbers/2); // You can return a value like this return "End of printNumbers()"; } protected static void guessMyNumber(){ int number; // Do while loops are used when you want to execute the code in the braces at least once do { System.out.println("Guess my number up to 100"); // If what they entered isn't a number send a warning while(!userInput.hasNextInt()){ String numberEntered = userInput.next(); System.out.printf("%s is not a number\n", numberEntered); } number = userInput.nextInt(); }while(number != 50); System.out.println("Yes the number was 50"); } // This will be used to demonstrate polymorphism public String makeSound(){ return "Grrrr"; } // With polymorphism we can refer to any Animal and yet use overridden methods // in the specific animal type public static void speakAnimal(Animal randAnimal){ System.out.println("Animal says " + randAnimal.makeSound()); } // public allows other classes to use this method // static means that only a class can call for this to execute // void means it doesn't return a value when it finishes executing // This method can except Strings that can be stored in the String array args when it is executed public static void main(String[] args){ Animal theDog = new Animal(); System.out.println("The animal is named " + theDog.getName()); System.out.println(Animal.printNumbers(100)); Animal.countTo(100); Animal.guessMyNumber(); // An array is a fixed series of boxes that contain multiple values of the same data type // How you create arrays // int[] favoriteNumbers; // favoriteNumbers = new int[20]; int[] favoriteNumbers = new int[20]; favoriteNumbers[0] = 100; String[] stringArray = {"Random", "Words", "Here"}; // for(dataType[] varForRow : arrayName) for(String word : stringArray) { System.out.println(word); } // This is a multidimensional array String[][][] arrayName = { { { "000" }, { "100" }, { "200" }, { "300" } }, { { "010" }, { "110" }, { "210" }, { "310" } }, { { "020" }, { "120" }, { "220" }, { "320" } }}; for(int i = 0; i < arrayName.length; i++) { for(int j = 0; j < arrayName[i].length; j++) { for(int k = 0; k < arrayName[i][j].length; k++) { System.out.print("| " + arrayName[i][j][k] + " "); } } System.out.println("|"); } // You can copy an array (stringToCopy, indexes to copy) String[] cloneOfArray = Arrays.copyOf(stringArray, 3); // You can print out the whole array System.out.println(Arrays.toString(cloneOfArray)); // Returns the index or a negative number System.out.println(Arrays.binarySearch(cloneOfArray, "Random")); } }
// Since Cat extends Animal it gets all of Animals fields and methods // This is called inheritance public class Cat extends Animal{ public Cat() { } // Overriding the Animal method public String makeSound(){ return "Meow"; } public static void main(String[] args) { Animal fido = new Dog(); Animal fluffy = new Cat(); // We can have an array of Animals that contain more specific subclasses // Any overridden methods are used instead because of polymorphism Animal[] theAnimals = new Animal[10]; theAnimals[0] = fido; theAnimals[1] = fluffy; System.out.println("Fido says " + theAnimals[0].makeSound()); System.out.println("Fluffy says " + theAnimals[1].makeSound()); // We can also pass subclasses of Animal and they just work speakAnimal(fluffy); } }
// Since Dog extends Animal it gets all of Animals fields and methods // This is called inheritance public class Dog extends Animal{ public Dog() { } // You can override Animal methods public String makeSound(){ return "Woof"; } public static void main(String[] args) { Dog fido = new Dog(); fido.setName("Fido"); System.out.println(fido.getName()); } }
Hey, I really enjoy your videos and I have much respect for you.
How can I go further into learning the java language?
For example how can I train and test what I have in my knowledge?
Thank you 🙂 If you know everything about algorithms, design patterns, refactoring, and object oriented design then I guess you could enter coding competitions. Maybe would be of interest? There are many of them
Hi Derek,
I really like the way you explain. I would like to know and learn the difference between Java 6 and Java 7 and also about new features of Java 8.
Have already given tutorial on this ? If not, please point out the differences / additions / enhancements of Java 6, Java 7, Java 8 if possible. That would be of great help. This is actually a basic interview question for which there is no proper answer on internet.
I hope to see the tutorial soon 🙂
Thank you for wonderful tutorials. Keep up the good work !
Swati.
Hi Swati,
I’ll see what I can do. I basically use Java 7 for everything so far in my tutorials.
Hi Derek – Thanks very much! This is a lot of stuff put togther in best possible way.
Appreciate your Tutorial. 🙂
Thank you 🙂 I’m glad you enjoyed it
Hi Derek. You are doing an awsome work for mankind. ThankYou.
I really want to be a Software Developer in Java and would like to have some advices from you.
Thank you a lot.
Thank you 🙂 I try to do my best. Basically it just comes down to practice. Get very good at solving problems with UML and object oriented design. Learn how to write flexible code with design patterns and refactoring. Learn how to think like a programmer with algorithms. Most importantly think about and create projects that are fun. Stick to it and you’ll do great.
Thank you very much.
You’re very welcome 🙂
hi im a complete newbie where do i start from? Your tutorial was great but alot of it went over my head as expected
That is understandable because this was very fast. I have a longer Java video tutorial here. Feel free to skip parts 8 and 10. Also feel free to ask any questions you have.
nice job derek on both java and php. was wondering if you could do video – address and compare lambda vs closure vs java vs lambda in php?? obviously i havent messed with them much but to understand the difference wud totally help. thanks kindly.
I should do a tutorial in which I cover common CS terms that confuse people. Thank you for the request 🙂
very nice video, a little bit too fast but I think it will help me with my project. I am going to start watching your videos, I am trying to learn java for my class and the other videos that I’ve seen don’t have that much of explanation. I hope your videos make me understand what I could not understand 2 computer science classes before
Thank you 🙂 I have another Java tutorial that covers everything in more depth. I have hundreds of Java tutorials. Take a look at them by putting your mouse over videos in the top menu.
is there any video that you would recommend for me to create a book contact project? I would like to create all these in my project public interface ProjTwo {
public void readInitialFromFile();
public void writeFinalToFile();
public void addContact(Contact c);
public void deleteContact(String nm);
public void showByName(String nm);
public void showByPhoneNumber(long pN);
public void showByComment(String c);
}
with
VectorOfContacts,
OrderedVectorOfContacts,
and Driver2
I made one here with App Inventor. I’ll cover how to do it with Java in my new Android tutorial very soon.
Hello Derek. I have started learning Java from the book Head First Java – 2nd Edition, you recommended on your youtube channel. Can you please tell me the next book / books you recommend to expand my knowledge? I want to learn the language properly before starting with your great playlist of tutorials. Thanks in advance, have a great day.
I’d say the next direction to go is either to learn UML, or Object Oriented Design. A great book is Data Structures and Algorithms in Java as well. I have an algorithms tutorial though here.
Ok, I’ll follow your advice. Btw great job with your tutorials, I have subscribed. You also have my respect.
Thank you 🙂
very comprehensive video tutorials. thanks !
Thank you 🙂
Hi Derek! Awesome job you have done! It’s the best tutorial video of core Java that I have ever gone through! I just finish my study of core Java, and the video content, though only 30 minutes, has covered what I have learned in the past two months , and even more!
It is really impressive video and I appreciate your share and your effort. You do have my respect!
And I have a question here, since I have just finished my study of core Java, what do you think is the next move on the way to become a real java programmer ? I mean, what to learn after core java? Where can I find some project to practice my coding?
Another question Is web development or mobile development a better market to enter in the coming few years?
Thank you!
Thank you for the compliment 🙂 I’d say the next thing you need to work on is problem solving. i cover that in my UML, object oriented design, design patterns and refactoring tutorials. There is a ton of web development work out there. PHP is still the dominate language there though. I have also found a ton of mobile work by focusing on creating private apps for small business owners. It doesn’t seem like I have any competition locally for that. Best of luck.
I found your video on YouTube as I am in a Java class and this has really helped me get a better handle on what I am doing. THANK YOU for the video and this page.
You’re very welcome 🙂 I’m happy that I could help.
Hi. Pasting this code into android studio main activity creates all sorts of errors – a lot based on the static declarations. Would that be expected?
Android Studio is for Android apps only. You can’t write straight Java in AS
No words to appreciate you you are awesome.This saved me a lot of time.Expect more great tutorials from you like this one.
Thank you 🙂 I’ll make a video like this for every language. I’m glad you liked it.
Hi Derek,
Thanks for all of the wonderful java video I enjoy so much. May I know if you are still going to teach hibernate, spring & other related video.
Thanks
You’re very welcome 🙂 Yes I’ll try to fit them in as I make Android games
Hello, Derek,
Thanks a lot for your awesome work!
I am quite new to this so I will really appreciate some advice. The copy/pasted code does not compile on Eclipse, I get the message “Build path specifies execution environment OSGi/Minimum-1.2. There are no JREs installed in the workspace that are strictly compatible with this environment.” To solve this, I have tried new projects and tried different options in the field “use an execution environment JRE”. It didn’t work. Probably the problem comes from other versions of java I previously had on my computer.
If I run java -version on Command Prompt I get 1.8.0_31 version even if I deleted the 8th version and reconnected Eclipse to the 7th version (in Preferences Installed JREs).
I also have from previous trials a program called DrJava and there the code is compiling, but I want to run it in Eclipse so I can follow you better and I also want to understand what I did wrong.
Thanks a lot!
Hello Andrei, Check out this tutorial Install Eclipse for Java | https://www.newthinktank.com/2014/06/java-programming/ | CC-MAIN-2021-10 | refinedweb | 2,944 | 56.25 |
GtkSharp: Widget Colours
Colouring Widgets and Windows
Colouring widgets and windows (which is itself a widget) alluded me for ages due to my basic knowledge of gtk.
Thanks to a tiny example I found by John Bailo it turns out to not be such a tricky thing after all. I’ve recreated it here.
The following code sets up a gtk.window, a DrawingArea widget on top, colours the window and the drawing area both red, then draws a diagonal line across the page.
using System; using Gtk; class ColourExample { Window win; DrawingArea da; static void Main (){ Application.Init (); new ColourExample (); Application.Run (); } ColourExample(){ win = new Window ("Colour Example"); win.SetDefaultSize (400, 300); win.DeleteEvent += OnWinDelete; da = new DrawingArea(); da.ExposeEvent += OnExposed; Gdk.Color col = new Gdk.Color(); Gdk.Color.Parse("red", ref col); win.ModifyBg(StateType.Normal, col); da.ModifyBg(StateType.Normal, col); win.Add (da); win.ShowAll (); } void OnExposed (object o, ExposeEventArgs args) { da.GdkWindow.DrawLine(da.Style.BaseGC(StateType.Normal), 0, 0, 400, 300); } void OnWinDelete (object o, DeleteEventArgs args){ Application.Quit (); } }
The code declares a global Window widget, and a global DrawingArea widget. Then a Gdk.Color object is created. We need to create a Color object so that this can be passed to the .ModifyBg() method of whatever widget is to be coloured. At first, just a blank Gtk.Color object is created.
We still need to tell the object what colour to be. This is done using the Gtk.Color.Parse method. The first parameter is a colour to try and translate. In this example “red” is passed and the method can easily work out what red is. The second parameter is which Gtk.Color object to send the colour to. Of course, this is ‘col’. So in short, declare a Gtk.Color object called ‘col’ and inform it to be a particular colour using Gtk.Color.Parse();
Both widgets are going to be coloured red. I have done this is for two reasons.
One: rarely, does a gtk# application contain just a Window widget and nothing else alone, so we need to put something practical in there and a drawingarea will do fine.
Two: although you really only need to change the background colour in the drawingarea widget to achieve the desired effect, I have elected to colour the window widget as well so that you don’t get an annoying white/grey flick the moment before the drawingarea object appears on the window when the .Show() method is called.
Finally, I used the expose event of the drawingarea widget to draw a simple line to just give the program some real world practicality. | http://www.mono-project.com/docs/gui/gtksharp/widgets/widget-colours/ | CC-MAIN-2018-26 | refinedweb | 441 | 69.68 |
The .NET Stacks #51: 👷♂️ The excitement is Build-ing
This week, we get ready for Microsoft Build, talk more about .NET 6 Preview 4, and more.
NOTE: This is the web version of my weekly newsletter, released on May 24, 2021. To get the issues right away, subscribe at dotnetstacks.com or the bottom of this post.
Happy Monday! I hope you're all doing well.
Here's what's going on this week:
- The big thing: Previewing the big week ahead
- The little things: SecureString meeting its demise, the .NET Coding Pack, web dev news
- Last week in the .NET world
The big thing: Previewing the big week ahead
This week will be a big one: we've got Microsoft Build—probably the last virtual one—which kicks off on Tuesday. In what is surely not coincidental timing, .NET 6 Preview 4 should also be released. (And if that isn't enough, The .NET Stacks turns 1 next Monday. Please, no gifts.)
While Build doesn't carry the same developer excitement as it has in the past, in my opinion—the frenetic pace of .NET keeps us busy throughout the year, and, to me, Build has shifted toward a marketing event. Still, it'll be nice to watch some sessions and see where things are going. You can check out the sessions on the Build website. I'll be keeping an eye on a .NET 6 deep dive, microservices with Dapr, and an Ask the Experts panel with many Microsoft .NET folks.
Next week, we'll also see the release of .NET 6 Preview 4 (finally!). While we'll pore over some of it next week when its formally communicated, the "what's new" GitHub issue really filled up this week and has some exciting updates.
New LINQ APIs
.NET 6 Preview 4 will include quite a few new LINQ APIs.
Index and Range updates
LINQ will now see Enumerable support for
Index and
Range parameters. The
Enumerable.ElementAt method will accept indices from the end of the enumerable, like this:
Enumerable.Range(1, 10).ElementAt(^4); // returns 7
Also, an
Enumerable.Take overload will accept
Range parameters, which allows you to slice enumerable sequences easily.
MaxBy and MinBy
The new
MaxBy and
MinBy methods allow you to use a key selector to find a maximum or minimum method, like this (the example here is taken from the issue):
var people = new (string Name, int Age)[] { ("Tom", 20), ("Dick", 30), ("Harry", 40) }; people.MaxBy(person => person.Age); // ("Harry", 40)
Chunk
A new
Chunk method allows you to chunk elements into a fixed size, like this (again, taken from the issue):
IEnumerable<int[]> chunks = Enumerable.Range(0, 10).Chunk(size: 3); // { {0,1,2}, {3,4,5}, {6,7,8}, {9} }
New DateOnly and TimeOnly structs
We talked about this in a past issue, but we'll see some new
DateOnly and
TimeOnly structs that add to
DateTime support and do not deprecate what already exists. They'll be in the
System namespace, as the others are. For use cases, think of
DateOnly for business days and birthdays and
TimeOnly for things like recurring meetings and weekly business hours.
Writing DOMs with System.Text.Json
So, this is fun: Preview 4 will bring us the ability to use a writable DOM feature with
System.Text.Json. There's quite a few use cases here. A big one is when you want to modify a subset of a large tree efficiently. For example, you'll be able to navigate to a subsection of a large JSON tree and perform operations from that subsection.
There's a lot more with Preview 4. Check out the GitHub issue for the full details. We'll cover more next week.
The little things: SecureString meeting its demise, the .NET Coding Pack, general web dev news
The
SecureString API seems great, it really does. It means well. It allows you to flag text as confidential and provide an extra layer of security. The main driver is to avoid using secrets as plain text in a process's memory. However, this doesn't translate to the OS, even on Windows. Except for .NET Framework, array contents are passed around unencrypted. It does have a shorter lifetime, so there's that—but it isn't that secure. It's easy to screw up and hard to get right.
The .NET team has been trying to phase out
SecureString for awhile in favor of a more flexible
ShroudedBuffer<T> type. This issue comment has all the juicy details of the latest updates.
This week, Scott Hanselman wrote about the .NET Coding Pack for Visual Studio Code. The pack includes an installation of VS Code, the .NET SDK (and adding it to the PATH), and a .NET extension. With the .NET Coding Pack, beginners will be able to work with .NET Interactive notebooks to quickly get started.
While we mostly talk about .NET around here, I think it's important to reach outside our bubble and keep up with web trends. I came across two interesting developments last week.
Google has decided to no longer give Accelerated Mobile Pages (AMP) preferential treatment in its search results (and they are even removing the icon from the results page). Whatever the reason—its controversy, lack of adoption, or Google's anti-trust pressure—it's a welcome step for an independent web. (Now, if only they'd bring back Google Reader.)
In other news, StackBlitz—in cooperation with Google Chrome and Vercel—has launched WebContainers, a way to run Node.js natively in your browser. Simply put, it's providing an online IDE. Thanks to the strides WebAssembly has made in the past few years, it's paved a way for a WASM operating system.
Under the covers, it includes a virtualized network stack that maps to the browser's ServiceWorker API, which enables offline support. It provides a leg up over something like Codespaces or various REPL solutions, which typically need a server. I take exception with StackBlitz saying those solutions "provide a worse experience than your local machine in nearly every way" ... but if you do any JavaScript work, this is an exciting development (especially when dealing with JS's notoriously cumbersome tooling and setup demands).
🌎 Last week in the .NET world
🔥 The Top 3
- Jeremy Likness works with Azure Cosmos DB, EF Core, and Blazor Server.
- Matthew MacDonald asks: will Canvas rendering replace the DOM?
- Eve Turzillo organizes and modifies existing HTTP and HTTPS requests with Fiddler.
📢 Announcements
- Tara Overfield discusses the .NET Framework Cumulative Update for May.
- The Azure SDK team recaps the May release.
- Khalid Abuhakmeh writes about dotMemory support for Linux process dumps.
📅 Community and events
- Microsoft Build kicks off Tuesday. David Ramel previews Build 2021, and Richard Hay links to some digital swag. Also, the Visual Studio team previews their Build sessions.
- The .NET Foundation Outreach Committee announces a new proposal process.
- The .NET Docs Show talks to Steve Gordon about ElasticSearch.NET.
- A busy week with community standups: ASP.NET talks about accessibility, Entity Framework builds modern apps with GraphQL, and .NET Tooling discusses container tools.
🌎 Web development
- Tomasz Pęczek receives JSON Objects Stream (NDJSON) in ASP.NET Core MVC.
- Paul DeVito continues writing about building an event-driven .NET app.
- Anoop Kumar Sharma writes about cookie authentication in ASP.NET Core.
- StackBlitz introduces WebContainers, which provide the ability to run Node.js natively in your browser.
- Adam Storr defines HttpClient test requests by using a bundle.
- Davide Bellone exposes a .NET assembly version from API endpoint routing.
- Kirtesh Shah introduces ASP.NET Core Razor Pages.
- Damien Bowden secures OAuth bearer tokens from multiple identity providers in ASP.NET Core.
- Matthew Jones uses custom user message extension methods in C# and MVC.
🥅 The .NET platform
- Konrad Kokosa asks: why should you care about .NET GC?
- Khalid Abuhakmeh works with .NET console host lifetime events.
- Richard Lander writes about profile-guided optimization (PGO) in .NET 6.
- Nick Randolph writes about the future of Windows development.
⛅ The cloud
- Muhammed Saleem creates business workflows with Azure Logic Apps.
- Abhijit Jana explains how to get started with Azure quickly.
- Daniel Krzyczkowski continues his series on Azure Identity.
- John Reilly uses Azurite and Table Storage in a dev container.
📔 Languages
- Georg Dangl updates the Azure App Service on Linux for Docker with C# webhooks.
- Rick Strahl finds a gotcha with the C# null ? propagator when doing async/await.
- Munib Butt uses Azure Blob Storage in C#.
- Akash Mair writes about data exploration in F#.
🔧 Tools
- Abhijit Jana uses GitHub Actions from Visual Studio.
- Maarten Balliauw writes about the Rider NuGet Credential Provider for JetBrains Space private repos.
- Charles Flatt recaps the git commit/checkout process.
🏗 Design, testing, and best practices
- Dennis Martinez writes about 5 ways to speed up your end-to-end tests.
- Niels Swimberghe bypasses ReCaptcha's in Selenium UI tests.
- Scott Brady writes about authenticated encryption in .NET with AES-GCM.
- Derek Comartin talks about testing your domain when event sourcing.
- Christian Heilmann says you can't automate accessibility testing.
🎤 Podcasts
- Scott Hanselman talks to Rey Bango about developers and security.
- The Azure DevOps Podcast talks to Jeremy Likness about working with data in .NET.
- The .NET Core Show talks about dotnet new3 With Sayed Hashimi.
- The 6-Figure Developer podcast talks to Rich Lander about .NET 6 Preview.
🎥 Videos
- Learn Live continues learning about Git.
- The Let's Learn .NET series talks about building accessible apps. | https://www.daveabrock.com/2021/05/29/dotnet-stacks-51/ | CC-MAIN-2021-39 | refinedweb | 1,581 | 68.57 |
Join devRant
Search - "exams"
-
-
- I FUCKING MADE IT GUYS! I JUST PASSED THE HARDEST COURSE AT UNIVERSITY WHICH IS ABOUT DATA STRUCTURES AND ALGORITHMS! I DRAGGED THIS WITH ME A WHOLE YEAR AND I JUST GOT THE NOTIFICATION THAT I MADE IT. I'M SO FUCKING HAPPY GUYS I CANT BELIEVE IT!!!!!!25
-
- Professor at Uni: "Missing a semicolon on yozr final exam could be a reason to fail that exam. Coding on paper is much better because that is what you will be doing on the job. "
Hate those written Java exams on paper.20
-
-
-
-
-
-
- Final year of my Comp Sci degree and mum still says I shouldn't study on the computer so much.
Last time I checked a sheet of paper doesn't compile C very well4
-
- Me: cool, i organized all my exams to be the most effective, if i study now, i can relax for a few weeks before i have to move.
Interesting side project: hi3
- I used to sit next to my friend Mira in class. I did all the homeworks and extra homeworks, she didn't. I had better grades in intermediate exams. When the final grade came, I had a grade lower than hers.
When the next semester started, I met that professor again. He called me Mira! 😡3
-
- Someone awesome was in charge of correcting the official CS exams at our university (blue: student, red: corrector) :-D3
-
-
-
- "The exams results are in, go to the portal to view them!"
Goes to the portal, not a single result there.
Send an email to let them know we cannot view the results.
Another email gets send, "sorry for the inconvenience, you need to go to website X instead."
Go to website X, surprisingly enough (not) I cannot view my results there either.
Send them another email letting them know.
2 hours later another mail comes in "we cannot get it to work so students can see the results of their exams, we'll send out individual emails wether or not you passed tomorrow"
Top stuff.3
-
- Me: *Has 3 difficult exams to study for and hours of work*
Also me: I should try my hand at encryption in Python.7
-
-
-
- So this is going to be one hell of a FUCKING rant.
Just heard from a friend (doing the same exams I passed, it was going to happen in two groups and he was in the second) that he failed the first out of three phases. And why?
I NEARLY FUCKING FAILED THE FIRST FUCKING PHASE. I GOT A FAIR CHANCE TO MAKE IT RIGHT AND I TOOK THAT CHANCE.
BUT.
MY FRIEND MADE THE SAME MISTAKE. HE MISSED A FUCKING DOCUMENT AND ASKED FOR OVERTIME, WHICH HE GOT AND THEN HE ASKED THE EXAMINOR VERY NICELY IF HE COULD TELL HIM WHAT DOCUMENT HE MISSED (for the record, it was bad documentation and it was not clear that it had to be a seperate document) AND WHAT DID THAT FATHERFUCKING COCKSUCKER SAY?
Hmm hmm hmmm.... nope, that's your responsibillity
ARE YOU FUCKING KIDDING ME? HE HELPED ME BUT NOT HIM? I KNOW YOU LIKE ME MORE THAN HIM BUT IS THAT A MOTHERFUCKING REASON TO LET HIM FUCKING FAIL?!?!?!?
I AM MOTHERFUCKING FUCKITY FUCKING FURIOUS.8
-
- So I passed my exams just now! This is one of the first official recognition of being a capable programmer for me which is a very big deal in my case.
One final thing before i could get my diploma was getting my hours signed of my second internship but they're ignoring me. Explained it to my mentor: "oh fuck that guy, I'll sign it tomorrow, you've made the hours and I'm not going to let some cunt get in your way of getting a diploma!
I fucking love my mentor.6
- Yesterday way my one year devRant anniversary! It's been a good year guys and gals (and whatever genders are out here). I can't be bothered to look up rant counts etc right now because I'm in the middle of my exams and I'm fucking exhausted but a bit thanks to trogus and dfox for creating this awesome platform and also a big shoutout to all the awesome people I met through here and have good contact with :). Keep rocking, devs!2
-
-
-
- GOT AN A+ FOR MY LAST PROJECT OF HIGH SCHOOL!!! SO FUCKING HAPPY!!!
(by the way, we built a search engine for this project. A pretty big and fast one too)10
- What my professor teaches:
This is a mouse and this is a keyboard
What he asks in exams:
How do you use this particular keyboard and this mouse to create a potato?
That is pretty much what always happened.. Even in theoretical exams.. :|5
-
-
- Doctor: "How do you feel?"
Me: "Stressed."
Doctor: "And something other than that?"
Me: "More stress."
Doctor: "Depression maybe?"
Me: "No, I don't have time for that!"
Doctor: "You will have time for iron infusion next week though."
FML.6
- *finishes university exams*
*gets to code after a long time*
*cannot remember what the code does that was written a few weeks back*
Fuck 😓3
- I was so busy studying for final exams that I forgot the Google code-in challenge existed.
I only completed two tasks before I forgot about it haha.
Rip. There's goes my dream of being a finalist. At least there's always next year ;)4
-
-
- Exams are kinda good. They make me realise that i can sit idle for 3 hours without my phone and my computer. 😁😁3
- I passed my exam (did well even), signed up to be a blood donor and landed a job interview all in one day. If all days could be like that.. Now to go look my coworkers in the eyes like I'm not getting ready to jump ship but I'm secretly so excited I can barely sit still 🤭😂
(yes I know a job interview doesn't equal I got the job already, thanks and calm the fuck down, dads 🙄 only an idiot would only have one other thing lined up. Plan C, D and E are on standby too)5
- Our university labs still use computers with 512mb ram, celeron processor for programming and networking courses. Even worse some of the mouse/keyboards/monitors not working and we occasionally have to do exam on those machines ...6
- Today we had our "web technology" University exam. One question was to write a sample html program for our university's website.
I swear I could've built a fully functioning website on MVC and hosted it on some cloud service in far less time than that I spent scribbling 5 pages of writing HTML/CSS/JS so that I can "pass the exam". But nooooo. Our university syllabus takes IE and Java servlets as standard and apparently you get bonus marks if you could implement IE's Active-X on paper.
So much for the future web development4
-
-
-
- Shiiiiit exam from Software, hardware and physics and math and I was doing mg code damn the exams are today craaap ahhhhhhhh4
-
-
-
-
- Already wrote about wk92 but i have to add:
STOP MAKING ME ATTEND COURSES SO I AM EVEN ALLOWED TO TAKE EXAMS.
Like what the hell. You know when it comes to networking i'm doing okayish, coding straight A and then there is maths, let's not talk about it. BUT FFS I WAS NOT ALLOWED TO TAKE 2 OF MY CODING EXAMS THIS SEMESTER CUS I DIDN'T VISIT 2-3 EVENTS OF IT.
I am a coder. I aspire being a coder. I study software development. I just need to prove myself and some dudes can do it. Let me do my thing.
Btw, there weren't any mandatory events for maths. Of course. Why should there be. Yeah okay11
- Fuck me man.
Last week when I needed to study for final exams, I wanted to learn all the codes.
Now that I'm free, all I feel like doing is sitting around and watching TV.
It's literally day #1 of my summer break omfg this is ridiculous12
- Nice! I broke my wrist!
I can barely program straight and I can't go to the gym.
And I'm starting the exams period!
Great, just what I needed! 😃11
- Teacher: The exam subjects will be entirely from what we worked in class.
Narrator: The exam subjects will not be even close to what they worked in class.2
-
- Why does this happen?
Whenever I begin with my exams, I think more about Code and Programming!!
😑😑😑5
- Exams comming up and I'm here wishing life to be codable:
If (atLeastTrying == true)
{
Foreach (char score in subjects)
{
score = 'A'
}
}5
- Doing exams at the moment. Finished phase one out of four successfully at Monday but now stuff is going bad again as usual. Seriously, with me, everything goes perfectly fine until stuff gets official, then code starts failing, self doubt comes up and fair of failure and low self esteem hit me like a bomb.
I'm using my own framework which I actually also use in production and it works fine! But then it has to start to fucking fail at the moment I need it to work the fucking most.
I've worked towards this for five years now, I don't want to fail this! I don't want to disappoint either myself or my friends or my parents.
Fuck.15
- Q:Why should a professor teach things that aren't in the source book?
A: so he can get an exam that not even himself can answer properly!
Q: But why?
A: no one knows the answer. It's one of the philosophic questions that has no answer. But maybe to hurt his students!5
-
-
-
- Exams of two weeks starts tomorrow. We'll be doing a project. Only, 70 percent will be documentation :/. I'm trying to keep my nerves, fair of failure and everything else as low as possible but that's not particularly working anymore 😞10
- Me : "I need to stop geeking out about security because I have exams and need to write a report"
Me 2 hours later : "Ooooo.. what's this cool article about?Let's check it out 😎"2
- 1. Master Angular
2. Finish with final year grad exams
3. Get a good developer job or establish better freelance work state.
4. Build an awesome portfolio.
5. Workout and build some muscles.
6. Get good money and a girlfriend..
😌😌7
- When one of CS teachers went to jail for asking money to pass exams.
Defiently an experience for both him and students.2
- This is a legit question from an exam I just took:
What is CMR?
- A subject we did not discuss.
- Short for Customer Management of Relationships.
- Do you mean CRM?
This is a fucking joke.2
- It's a beautiful autumn day. The sun is out, the sky is blue, it's warm and the trees' leaves are already colored magnificently.
What will I do today? - Exam preparation! :D3
-
-...8
-
- Fuck academics! End semester exams are ucking waste of time ! Just need to study one night before and on exam ay 'vomit' all my memory onto the paper. Fuck it. I AM done.19
-
-
- Fuck exams!
Over thousand slides, 100 marks. Postgraduate term exam. So screwed, can't remember title of unit, rather don't want to.
-
- My emotional state is deteriorating by the day.
And now of all times my exams are also coming up. Only two weeks, I well get through this4
-
-.2
- Exams started today. Came back from paper, went to sleep. Woke up and was told I had mail.
...
Yay!
Thanks, @dfox and @trogus!
- I'm supposed to be studying for end semester exams when I'm spending time on here laughing at life's problems we all face.
I'm officially addicted to devRant. Who else is?3
-
-
-
- Fixing a bug which was open for a few weeks - in front of the exam room - while waiting to be let into the room
Best feeling ever, plus the irritated looks of people while they try to study till the last second
-
- !rant
At my university, we are not allowed to use the internet during exams (duh). But it is disabled using a windows program. I quote from a group chat:
"If you read up a bit, you'll discover that anything works besides Windows"
"Which is basically the story of my life"1
- It might seem stupid but lets do a thread with our watched. I wans never interested about those little accesories until i got mine a few months ago and now i wear it everywhere 😃
True gamechanger when talking about exams and dead battery situations as it has some machanism of charging itself without using batteries and it gloes in the dark 😏16
-
- Next 30 days : no linux, no developing,no designing... only scratching pens on stupid assignments and licking teachers asses for marks. I hate college exams ;___;12
- Can anyone tell me How DEVS study for exams ?
Seriously Getting fucked up having xams back to back 25 days .13
- Time for late night coding, debugging, thinking..been busy since I woke up, work, college, exams, work..work while waiting for an exam..
Coffee - check
Cigarettes - check
Music to keep me motivated - check
Laptop still not lagging - check
Will probably want to sleep in couple of hours - check
- A pratice paper for web development. How does this have anything to do with HTML5? I could have sworn the answer was A.
- God damn it! I have multiple exams that I need to study for, but can't because of my headache that I have had for two weeks now... My mom finally forced me to the doctor, and apparently I have gotten a severe case of migraine... no devices, no books and no lights allowed until it gets better.
Worst part? The exams are next week, and I havent been able to study, not even one page out of hundreds...
Guess who's retaking them this summer... döda mig 😞
-
-
- I have two math final exams on the same day, and one is scheduled 30 minutes after the first one 🙁🙁
I fucking hate math, and this shit sure isn't helping me.7
-
- - study for my final exams
- finish half of my bachelor thesis
- learn Angular for my full-time job
- learn React for my full-time job
- manage to do all of this in less than a month 😬
the rest will come1
-
- ?
- Well... Its 2:20 and i think its time for me to go to sleep, especially since i started using shortcuts ctrl + alt + L and alt + enter while writing speech for my exam.... I dont think that i wrote anything good but what ya gonna do bout
- I want to develop an open source software that lets you create exams and runs on an isolated environment which prevents the users to open any other software thus preventing cheating.5
-
- Anyone ever had exams, where teacher tell you that if you study on said part of the course and you'll pass, then got completely blindsided on the actual exam... Oh the horror 😂6
-
- First time having Baileys and it does not taste like 17% alcohol. But it tastes good enough to get drunk before learning for exams in the next weeks8
- Two assignments due tomorrow and exams looming large. Now would be the perfect time to start learning new framework.1
- That moment when you are reading a book for tomorrow's exams,
suddenly want to find a particular topic,
and instead of turning pages,
say "Ctrl+F" as a Reflex.
Damn you Brain :/
-
- will literally give half a million us dollars to anyone who programms an AI that can study and pass exams for me and that in a manner that I will get away with. Help!
I prefer to pay in bitcoin.10
Physics exam. Seems I was the special guy who did the task in a different (and almost correct way), so my teacher had to share some golden thoughts with me.
Passed anyway xD7
-
- Guys I'm so freaking happy!
My sem end exams are done
I finally got to sleep more than 3 hrs(slept for 10 hrs) after almost over a week
I can finally start with my projects and study something awesome
December is going to be fucking lit you guys!!!4
- How come that during exam sessions my motivation for doing anything besides learning skyrockets? ;___;3
- When I got a 5 on the comp sci AP exam without really trying. Studied hard for the other exams but didn't score nearly as high.3
- Why people suck at theory even being damn good at something... Fucked up computer networks paper !!! :/2
- Goals for 2019
- Land an internship at either of the companies I'm in the final stages of interview for.
- Nail my exams.
- Publish an application to the app store.
- Buy an RTX 2080ti.
- Start a blog.
(Dev-ish goals I guess)
- Finally. Fuck the exams (eventhough they went quite well). At last I can start work on my scripting language14
-
- Got access to root access of school's lab computer.
Saw an account 'tee'(Term end exams) associated with it, copied the hash, ran a dictionary attack and the password was 'tee'
FUCCCKCKCKKK3
- Instead of giving exams at the end of the course, make the courses project based. Most cmps students at my university only memorize information (and even memorize code) just to get a grade, but if instead they had to present a project at the end of the semester they would learn much more and have more experience2
- One project deadline coming up, one presentation to be given next week, and 3 exams on consecutive days.
Scared as fuck. 😭
I hope I pass.
Wish me luck!2
-
- Me to Me - I will Buy 1 Ethereum after Exams ( 8th Dec )
After Exam -
I will buy it soon.
Today -
Ethereum touched $7002
- Self taught JavaScript developer here
Is there any exams or online courses or certification I can take to make my resume more fancier ~8
- I feel like I have to put my personal project on hold because I have to study for exams, but I don't want to stop working on it.
FUCK1
-
-
- (I am attending Uni)
I have reached a point in my life where exams are the least source of stress possible on this planet.
Someone help me with these deadlines please!3
- I have my computer architecture midterm in about 3 hours and I understand almost half of the material. Wish me luck, I'll need it!3
-
- Having exams on paper sounds ... archaic.
I am for paper exams. It's not practically efficient but I find myself learning a lot from it!2
- I haven't slept in 4 days in a row. My eyes hurt, my head is spinning and I cant think straight
I can't sleep because of me failing exams, I fail exams because I can't sleep, and I can't sleep because I'm failing exams.
Great.7
-
- 3 exams over, only two left. One pretty easy and one pretty stupid one. But atleast I can have a drink in the meantime 😋
- When you need to write the code for the exam on paper and your teacher wants you to use a class named CountLoggedinUsersBean
😐😐😐😐😐😐1
- When I have exams, I start to notice how much more functionalities I can add to my web application that I didn't "work on" since the last exams! 😑
- Phew.. My exams are now over (almost...next in 23 days). Now I can continue development. I'm so happy today. But tired too. It feels good coming back here.6
- Just finished all exams... Happy that they are over and sad because now I am six months away from getting kicked out to society3
-
- after weeks nearly months of studying for exams i took my laptop and started linux.
ahhhh. nothing feels better than home
- Final exam gonna start in 2 days and here I'm contemplating what to do after exams(lots of stuff in mind).
I think everyone go through this and usual dilemma for choosing what to do.
So much to learn, so little time. Smh..
- I had to get up early this morning, to write a math exam and now I'm too tired to learn for my computer-organisation exam tomorrow.
How fucking much I love that shit.
- Just arrived at NYU hospital for a 3 days of medical exams. While walking in from my nice day in NYC I get the feeling like I'm turning myself in at prison....
Then on the other hand compared to work... I'm on a 4 day vacation...3
- Being told that we can use Stackoverflow and Google alongside the course text book in our end of year c++ exam is a dream come true1
- !rant; story(well, kinda)
I made it! Hell fucking fuck yeah! I made it, I made it, I made it! I passed all exams! See you in Prague, at ČVUT!(those who happen to study/live there or nearby)
life.setStatus("Winning!");2
- The pain of really wanting to get stuck in with learning python... But having to study instead. *Sigh*2
-
- Not just dev goals.
-Stable relationship with [Redacted]
-Finish my projects
-Get new SSD
-Finally get paid for the DLR project
-Not fucking up my exams
-Finish my blacksmithing idea/project
-Play a good game that's not Destiny2
-Learn python.
-Drivers license
-Think positively for once4
-
- I got exams coming up but all I want to do I get my hands dirty with kotlin, python and learn the java design patterns 😭1
-
-
-
- Tomorrow is my internal exams in college and all I can think about working on my side project, kind of amusing
-
- So I just got the message that I failed my theoretical informatics exam again for the 4th time. I'm so fucking tired from these stupid subjects that I won't ever need in the future, but they stop me from getting my damn degree. And worst of all, I have to wait a whole year to try again. Wasting my time away for nothing...4
- last exam of my degree tomorrow, instead of revision im sat looking on here and taking phone calls. Such procrastination.2
-
- I
- So here I am with just 4 nights left at my disposal, in which I have to make a Major project worth 10 credits for my college degree and simultaneously prepare for my oncoming exams... You might think what was I doing all these days then!
Well did I mention I was busy contributing to Open source community (mozilla to be precise) which I enjoyed doing more than I would like to cram things up for my exams and do major project which my teachers expect to be an out of the box idea.
F@#$! Education system1
-
- Gotta revise MySQL and Java for my IT Matura exam on Wednesday...
Reading/writing from/to files in a Java 8 manner,
LOAD DATA (LOCAL) INFILE "blah.csv" LINES TERMINATED BY "\r\n" DELIMITED BY ";' etc.,
Java Comparator,
clean Eclipse install...
No Internet, I'll have javadoc and mycli though.26
- !rant
So my exams were going from a long time which kept me back from programming. And I started working on something today again, can't believe I love programming so much.
-
-
- !Dev related
So you know when you write an exam.You studied your ass off every day for the last 4 weeks .You see the questions and you like hey I can do like 80-90% stuff.You do the paper.You smile while handing it in
You leave.
Then,you wait.Confidently.In your mind thinking "hey don't fear the maths calculus paper was nice"
You recieve your marks after 2 weeks
You check it,and your heart COMPLETELY SINKS.How on living earth did I get 40%
Idk what it might be, insufficient studying(maybe revising the syllabus three damn times in 4 weeks wasn't enough).or stupid mistakes.or just the fact that it maths(calculus).This is the mid year so this mark doesn't determine if I pass or fail.
I need help,like serious help.I've kind of lost hope right now.If I talk to my parents their only solution is to study more(which clearly isn't doing the trick for the past 3 years in the same course)
I don't know anymore.I just dont.5
-
- I'm in the middle of my exams.
My thoughts go - Work! Keep the fucking As up!
My actions - setting up public dotfiles with more comments than config lines, messing with DevRant, doing pre-calculus (matrixes, complex number shit) from 9th grade
Then had the English exam and realised that I'm fucked.
Again, fuck.
But now it's a Friday!
... And I have 3 major exams next week. I NEED to study for them.
I can already guess what I'm going to be doing during exam study time.
- >
- Shit just failed 5 exams in the same time i guess ill be toxic af today so if snyone sees more of my offensive shitposts feel free to ignkre thx u11
- I took a systems security class when I was in college and the exams were the most difficult ones that I had. We had to do two exams and I felt pretty stupid on both.
Passed the exams but they gave me some doubts about my skills.
- At work me and my colleagues take almost regular smoke breaks out in the balcony. I was a smoker before but I'm afraid my habit has escalated during past few months. Now that I have taken few days off to study for my exams I can't study well. I can't smoke at home. And I can't go outside that much, it's raining by the way. I think I should quit. But right now I'm doomed.5
- i feel like drinking bleach, i cant program because i have exams!!, but i get side tracked from revision and start messing around with linux😢😂. :)
- Time to learn how to write an MVC Windows Console App in C++ in just a few days while also having 1 presentation, 2 technical demos, and 4 exams this week...
- Happy holidays to y'all.
Do you have any goals for 2020? Not necessarily new years resolutions.
My goals are:
* Pass math exams
* Master Rust
* Get back to the gym.5
- Grinding hard passing the exams that make my life a living hell, then finally finish my 10000 side projects. Hopefully make some money of some of them. Also be very cautious about my personal well being and health as it is the most valuable thing.
-
- Who the heck made this concept of exams. Don't wanna study for my sessional which will be in next 10 hrs. I am screwed😭😫2
-
-
-.
- Uni::exams
Mid terms starting from tomorrow with two exams every day.
Wish me luck for Data Structures and COAL on first day3
- >2 exams left till i never have to look at my college again
>Actually studying
>Boss wants me to fuck around with docker
>My vape just broke
>Gotta get an oil change in my car
>Pretty sure im gonna be sick
Fuck this week7
-
- I don't understand written essay exams. That's it.
The thing is how does mugging up a group of questions and getting a good score help the person. Like for real...
Whatever.. Exams about software engineering today and I am on devRant for 2 hours. Great
- College semester exams are going to end after 2 days.....So very much excited to get back to coding after exams.....1
- After an extremely stressful exam period, I find myself logging in to the grading website and just looking at my results with almost erotic pleasure. A few times a day. I guess it's a form of mental masturbation.2
- An eventful day:
Because of my recent amateur thermopaste application onto a heatpipe that connects a laptop CPU and a discrete GPU and *ingenious* HP ProBook engineering my Radeon graphics have fried yesterday.
On the bright side, got the Nougat update for my Samsung S6, with bright hopes that it will help restore the state of an unresponsive fingerprint scanner... nope, that is still broken.
Summer is near, exams finished, time for some DIY on used and abused tech! :D2
- OFFER!!!
3 hours of lifetime for only 1 failed exam!!!
Special: If you buy two you actually can experience your bought lifetime for more than 10 minutes in sequence!!!
OFFER END!!!
-
- - Finish my dev minor hopefully with a 9 average
- Have a great dev internship
- Graduate (as a software dev)
- Have a nice vacations because I finally don't have dev exams anymore
- Get a development job4
- Got an invite from an recruitment agency, went for the exam. Was hoping I get rejected 😐😑😕 ( I never passed an exam). After exams went home.
.
.
Got a message on my phone " You are selected for interview".
.
.
Went there for the interview.
They asked very simple questions.
.
.
2 hours after.
.
.
The agency people calls my name.
.
You are selected for the job.
🙌
Now it has been 3 Years...1
- I'm thinking of taking the LPIC-1 exams and getting certified. My boss has asked for a time frame for studying but I'm unsure how long it would take. Any ideas? I'm at the level where I can just about work my way through a system use basic commands.
- I hate programming exams where you have to had write out code. I always get points off because my hand writing is basically chicken scratches.
-
-
-
-...
- Wherever i starts coding something new
There's assignments to submit,
Sessional exams,
Seminars,
Final exams,
And then there's game of thrones, silicon valley, friends, westworld, stranger things etc etc etc
Then again next semester....1
-
- Apprenticeship end exams were incredibly easy. In parts...
Glad that I'm through it. Now for the practical things. Project and presentation (freeradius on a mesh network from watchguard)
- {
while(time_to_exams > 0){
me.shouldBeLearning(true);
time_to_exams--;
}
public void shouldBeLearning(boolean bool){
if(bool){
should_be_learning = false;
waste_time_on_DevRant = true;
} else {
waste_time_on_DevRant = true;
}
}
}
- Anyone here taking (or taken) the SAT exam? I have got it in a week-ish and am shitting bricks!! Doesn't help being British (thus I never learned anything specific to the SAT) and I *may* have procrastinated... 😳🤦♂️2
- Have lots of interviews ined up for next week(college placements), the major player being Amazon. Any help for preparation and passing written exams?
Thanks.2
- Coding projects are fun, but when you do it for a school project and you have to write a paper for it, it kinda sucks.. :/2
- i can't wait to finish my final exams on the second of october so i can finally give a big middle finger to Accounting
-
-
-
- I just learned that I failed my Matura exams, which means I'm not going to college this year. FUCK2
- Helloooo everyone! I've been afk from devRant for some time because of exams. How have you guys been?
Here's a picture of my favorite program, Letterbomb. Thought I'd include it because I kept thinking about it for a while.3
- !rant
Going out with my mate to celebrate his end of exams when in truth I want to stay at home and work through the androidMVP Dagger2/Retrofit2/RxJava tutorials :/
- What I hate most about studying computing? Getting exams about shit I hate - fucking stats exam tomorrow, wasted my time coding and now I'm afraid Ill fuck up big time1
- My exams in Web Engineering and Distributed Systems at the University were postponed to mid April... Thank God, they would have started next Tuesday and I haven't properly learned. So I got that going for me.3
-
- Goddamn exams >.< I want to code for a new project but I have to learn the whole time cuz I have 3 exams in the next 5 days -.-2
- I love maths so much, but I am at the verge of suicide due to differential equations right now...
Wish I could code something 😟2
- I have a school project that starts in 2 weeks.
Is it normal to ask us to make a report for Monday with the technologies we want to use knowing that we have no information about this project and we still have exams for 1 week ...6
- have my minors from tomorrow, haven't studied a thing, and am addicted to learning Django since morning. I just wish I don't fail😅1
- Writing 4 exams this week in math / computer science, I am super nervous. Any tips to stay calm? :)11
-.
- Every time I learn something new, and get it implemented in/as a Project
Someone help me to start studying for Exams, can't get myself to it 😂
-
- Monday my last exam (maths)...
Will be happy if it’s finished. Will be able to continue developing apps.2
- So my company is forcing me to take this ISTQB exams for testing and QA... after 10 months of employment.... "Thinks to self" I can wing this cant I.
-
- Hi guys! I got an opportunity in one of the biggest consulting/strategy companies in the world, however I need to write the OCA and OCP Java exams to get in. The OCA is in about a month + 2-3 weeks and OCP towards the end of the year. I kinda know Java but the exams seem to be hard. May you please guide/advise on how I can get over this mountain :)? Anything that worked for you? Or did not work?4
-
-
- Doing B.Tech in CSE is so fucking hectic! Almost pulled all nighters for 2 mid semester exams, feeling so fucking sleepy!!
-
-
- Just applied for my next school, I'm going to study application development. Lets hope for the best and hope that I pass my exams and get accepted.1
- private boolean didWakeUpForNothing() {
if (mathTutoring.isClosed()) {
return true;
} else {
studyForExams();
}
}
private void studyForExams() {
feelEmptyInside = cryInShower = true;
}1
-
-
- def examMonth():
for exam in exams:
while days:
if time ≥ week:
pass
elif time == days_3 or time == days_2:
book = open_book()
study(book)
else:
panic_and_devRant()
days = days - 1
def study(book):
see_open_book()
delay(minutes_10)
devRant()
-.
- What I hate the lost about exam season, is the lack of coding... Spend two months cramming all the theoretical parts of computer science, and it just gives no time to code
-
- A particular Computer science lecturer sent me an email for a particular assignment result is posted on a notice board at uni. 2 weeks after exams/ into holiday period. Are u that fucking lazy! Just send me the god damn results. Far out
Top Tags | https://devrant.com/search?term=exams | CC-MAIN-2020-16 | refinedweb | 5,859 | 73.58 |
Introducing netsim-tools Plugins
Remember the BGP anycast lab I described in December 2021? In that blog post I briefly mentioned a problem of extraneous IBGP sessions and promised to address it at a later date. Let’s see how we can fix that with netsim-tools plugin.
We always knew that it’s impossible to implement every nerd knob someone would like to have when building their labs, and extending the tool with Python plugins seemed like the only sane way to go. We added custom plugins to netsim-tools release 1.0.6, but I didn’t want to write about them because we had to optimize the internal data structures first.
Back to the original challenge. In the BGP anycast lab I wanted to have BGP sessions set up like this:
However, netsim-tools tries to do the right thing and creates IBGP full mesh within an AS1. The configured BGP sessions within my lab looked like this:
The IBGP sessions within AS 65101 were never established as the loopback addresses of A1…A3 were not advertised into BGP and thus the anycast routers had no way to reach each other (there are no direct links between them). The lab worked as expected, but I still didn’t like the results.
a1#show ip bgp summary | begin Neighbor Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd 10.0.0.6 4 65101 0 0 1 0 0 never Idle 10.0.0.7 4 65101 0 0 1 0 0 never Idle 10.1.0.14 4 65000 30 25 7 0 0 00:20:05 4
I could see three solutions to that conundrum:
- Remain unhappy with the way the lab works and move on. Not an option.
- Implement a bgp.peering.ibgp (or something similar) nerd knob to enable/disable IBGP sessions. Doable, but we’d have to change all device configuration templates and forever keep track of which devices support this nerd knob. Sounds like a lot of technical debt to solve an edge case.
- Write a plugin that removes unneeded IBGP sessions once the lab topology transformation is complete.
The plugin turned out to be remarkably simple. I imported Python Box2 and netsim-tools API modules.
from box import Box from netsim import api
Next, I added anycast attribute to the list of allowed BGP node attributes in the plugin initialization code. I could do that in the topology file, but I wanted to end with a self-contained module (more about that in a week or so).
def init(topo: Box) -> None: topo.defaults.bgp.attributes.node.append('anycast')
Finally, I wanted to modify the lists of BGP neighbors once the topology transformation has been complete. post_transform seemed the perfect hook to use, and all I had to do was to:
- Find the nodes with bgp.anycast attribute
- Set bgp.advertise_loopback to False (so I can further simplify the topology file)
- Keep only those BGP neighbors where the BGP session type is not
ibgp
def post_transform(topo: Box) -> None: for name,node in topo.nodes.items(): if 'anycast' in node.get('bgp',{}): node.bgp.advertise_loopback = False node.bgp.neighbors = [ n for n in node.bgp.neighbors if n.type != 'ibgp' ]
Let’s unpack that code:
- All plugin hooks are called with a single parameter – current lab topology. When the init hook is called, you’ll be working with the original topology definition, at the post_transform stage the data model has been extensively modified.
- Lab devices are described in the nodes dictionary with the lab topology.
- Each node is a dictionary that contains numerous parameters, including configuration module settings (another dictionary).
- Python Box module is a wonderful tool when you need to traverse deep hierarchies, but it does have a few side effects. With the default settings netsim-tools uses, Python Box automatically creates empty dictionaries when needed. That’s awesome in 90% of the cases, but sometimes I don’t want to get extra dictionaries, so I have to be a bit careful – instead of
if node.bgp.anycast:(which would work even if the node had no BGP parameters), I decided to do a check that would never auto-create empty data structures.
- If the node.bgp.anycast attribute is set, it’s safe to work with BGP parameters, so we can assume that we can set advertise_loopback and that the node has a list of BGP neighbors in node.bgp.neighbors.
- Every BGP session described in node.bgp.neighbors includes session type in type attribute, and the value of that parameter could be
ibgpor
ebgp. The list comprehension I used selects all list elements that are not IBGP sessions.
Want to do something similar? It’s not too hard once understand the data structures used by netsim-tools. Here’s how you can get there:
- Build a lab topology
- Run
netlab create -o yamlto get the final data model. You can also limit the output to individual components of the data model.
- Explore the data structures and figure out what needs to be modified.
You can also open a discussion in netsim-tools GitHub repository or ask a question in netsim-tools channel in networktocode Slack team and we’ll do our best to help you. | https://blog.ipspace.net/2022/01/netsim-plugins.html?utm_source=atom_feed | CC-MAIN-2022-21 | refinedweb | 880 | 64.2 |
When Worlds Collide: The New Dot-Biz And The Old 130
angkor writes: "It seems the new dot biz domain conflicts with domains registered in an alternative root system." This is where all the alternative root servers conflict with the (ahem) interesting name choices made by the ICANN board.
I vote for ORSC instead of ICANN... (Score:1)
But seriously, we do need a centralized authority to handle domains and prevent collisions. ORSC, unlike ICANN, isn't even discussing the goofy idea of letting registrars censor sites according to their content.
I've got it! Someday in the distant future when the elected ICANN board members get to vote, Karl Auerbach should make a motion to dissolve ICANN and hand over the keys to ORSC.
--
Re:The TLDs we really need (Score:2)
That's exactly what will solve this problem: submission of your site to a quasi-governemental board so that they can judge your content and "allow" you to keep your domain name.
Why do you care if amazon.com takes amazon.person? Is that the first place you're going to look for it? Is that the first place your mom's going to look for it? Or are you going to go to the address that they've spent millions establishing a brand name for? Let them waste their money locking up every possible TLD, they'll just go bankrupt more quickly and eventually they'll all free up again.
I think in the end, ICANN is either going to radically change their policies or become irrelevant. Sooner or later, someone big will become feed-up enough or greedy enough to bolt out of the current DNS structure and that's the end of ICANN. There will probably be some confusing times ahead for DNS resolution, but trying to maintain an artificial scarcity in TLD's isn't doing anyone any good.
I'm going to go to work tomorrow and find some alternate DNS roots to put in our root server list. It's unlikely anyone will even notice, but I might sleep better.
What the FORK?! (Score:1)
Re:Why only TLA TLDs? (Score:2)
Re:Don't do it! (Score:1)
Thanks Dave.
;] For what it's worth, you've put up a great argument as well, although I'm still sticking to my guns.
You're definitely right, DNS is at the center of the Internet practically. It must be dependable. But I've done a lot of research on this, and I really don't think it's going to break by having alternative roots exist and grow in popularity. I'm not going to deny that it would be much better if ICANN would release more TLDs on a regular basis instead, but they won't. Now, if alternative roots could be able to support both root structures at the same time and handle conflicts (ex: the
.biz problem in the topic here), then that would make a transition that much easier. This solution [opendnstech.com] is the next generation of alternatives that can do that.
I think something is going to happen real soon. We'll see DNS change in a big way.
Chris
Open DNS Technologies [opendnstech.com]
Before.. (Score:2)
1) They *are* a business
2) They operate by having root servers that pass queries back to the 'standard' root servers.
So.. if they come out with
Don't fear a fractured root (Score:1)
Likewise, the different root server systems all carry
.com .net .org and the rest. Some of them carry each other's TLDs, some don't. Some are trying to become commercial operations, others are trying to promote more "public access" channels.
At some ISPs, you can't get the alt.sex.* newsgroups; at others, you can't get de.*. At still others, you might wish they'd get rid of all the foreign-language groups that are cluttering up the list. But this hasn't done any harm to Usenet, and users are free to go to other providers of Usenet to get their missing newsgroups. Usenet is a better service, I contend, because it has fractured. There's a huge diversity of groups to choose from, and if what you're looking for isn't carried at your ISP, you can find other sources.
Likewise, I see little harm, and much benefit, from allowing - even promoting - the root to fracture. Why shouldn't ISPs in Texas band together and offer
.texas to their customers? What does it matter if some other group is offering a different .texas without the regional focus? It isn't the .texas that the Texans are interested in! The EU doesn't need to wait for ICANN to approve .eu, they can go ahead and create it now - all they need is a consensus among European network providers and operators and it's a done deal. What does it matter if AOL and Earthlink aren't offering their users access to .eu delegations? If the demand is there, they will; if it isn't, who cares? Some Europeans may wish to keep the Yanks out of their TLD.
Every argument I have heard for avoiding a fractured root, whether the proponent will admit it or not, eventually leads back to the desire for control, power, money. A fractured root works counter to this desire, and that's not a bad thing.
Re:Let's DO something! (Score:1)
I propose a motion to table a discussion, after the proper formalities have been passed, at the earliest possible window, circumstances permiting.
Re:Just another scam.... (Score:1)
Re:brooklynbridge.biz (Score:1)
Walt
Re:The TLDs we really need (Score:1)
There are ports other than 80...
--
Re:Alternative root systems (Score:1)
mod this guy up!
eudas
Recount! (Score:3)
Re:Alternative root systems (Score:2)
Try going to chatyahoo.com [chatyahoo.com] or mailyahoo.com [mailyahoo.com].
I wonder what other addresses has yahoo taken for itself !!
Ban Network Solutions (Score:2)
How to see the alt root servers (Score:2)
anyway is it worth it to add these alt root servers to dns?
internationally shared resources (Score:2)
I can't think of any other resources quite like this. The only thing close is space, and because of the dificulties of just getting there, so little has been used that there have not been any conflicts, (no big ones anyway).
Every other resource I can think of, from radio waves to minerals, are owned by the country they reside in. In this case, no single gov't can control the resource, since it resides everywhere.
The problem with ICAAN is that it has no real authority. They can decree whatever TLDs they want, and set up their nameservers. So can anyone else. ICAAN is well known, but really has no more authority than anyone else...
I hesitate to say that we should have an internationally sanctioned body governing this, but without one, this type of stuff is bound to happen. All part of the internet's growing pains, I suppose.
Re:.pro (Score:3)
From the Tidbits Newsletter [tidbits.com]:
Re:Fox Special: When Lawyers Attack (Score:1)
What a scam!
Re:good luck with it (Score:1)
Re:uh...so? (Score:1)
The US Government.
alternative roots (Score:1)
Re:I vote for ORSC instead of ICANN... (Score:1)
By-the-way, I been using the ORSC root system for my own systems for a couple of years.
I've also got cavebear.web in the IOD registry.
As an experiment I recently created
.ewe in the ORSC root (and others) so that if I ever get a few cycles I may build an anonymous registration system - a person would submit a ((name),(list-of-dns-servers)) tuple and if (name) isn't yet taken the system would add the name and serverlist to the zone file and return a management key that can be used to perform updates. All information about the source of the registration will be tossed - if one wants to find out who is running it somebody will have to contact the owners of the machines at the IP addresses in the server list. As for garbage collection - If no queries for (name) are detected for some period of time - say 90 days - then the name and its list of servers (and the key) will be silently dropped..
Re:Don't do it! (Score:1)
I rather disagree.
Competing root systems will no more damage the net than competing telephone number lookup mechanisms damage the telephone system.
When there are inconsistencies, users will chose with their feet whether to continue to use a name service that doesn't give 'em answers that meet with their expectations.
To my mind it is better to empower the users with a choice, even at the cost of some hypothetical inconsistencies, than to create a worldwide bureaucracy that forces all users to march to the drumbeat of the marketeer with the biggest budget.
Take a look at
m #multiple_roots [cavebear.com]
Sure there are some potential problems - NS and CNAME records written in one TLD context and resolved in another, web caches that stupidly re-resolve DNS names in URLs rather than using the IP address of the TCP/HTTP connection they intercepted, etc. But I'd happily trade-in a worldwide bureaucracy in return for a couple of repairable technical glitches.
tld 'dot extension' (Score:1)
What a shock... (Score:2)
Kierthos
Re:What about non-roman character TLDs? (Score:1)
The IETF is working on this and has been for a while. It's a lot more difficult than one might think.
From the technical point of view it's a choice between somehow encoding non-ASCII character sets into the limited character set of DNS "hostnames". (DNS itself is supposed to be 8-bit clean but there is an ancient limitation called "hostnames" that imposes an alphanumeric plus hyphen character set.) The problem with this approach is that in some languages the size of the names becomes limited because the 63 octets per label get consumed pretty quickly when it takes two or three of 'em to encode a character. The speakers of those languages, understandably, feel that they are getting the leftover after the western nations get the good stuff.
The other approach is to actually modify the DNS protocols. There are some bit patterns in the length octet of the DNS label to indicate that a whole new label length/encoding mechanism is in place. The concern about this approach is what happens when these packets flow through existing resolvers and through so called "transparent" devices (firewalls, NATs, web caches, etc) that tend to futz with DNS packets.
NSI is seeing big $$ in all of this and has established an early registration system, oops I mean "testbed" so you can register your internationalized name even though there is no protocol support yet. This "testbed" is up and running now.
Interesting Link (Score:3)
Mass Debate [mass-debate.net]
It mentiones balkanization (Score:1)
domain disputes, alternate systems, now this! (Score:1)
ergh, wait, I guess that's what some people apparently tried to do....
Alternative root systems (Score:1)...
Just another scam.... (Score:2)
for god sakes give it up! (Score:1)
websites, hopping down 10 links you should be able to find what you need, and book mark it!
In fact I propose that the generic domain names should be banned. Because internet is linking entrie world together, for greedy corporation it is a plus to have a generic site, like
company has an office in county, and just mails stuff out to the rest of the world, so make it have name amazon.ta.ws.us. Such the scrapping for stupid domain names should stop. Only govermental organization get to make second level domain names, such as
Re:The TLDs we really need (Score:1)
Re:.biz? (Score:1)
Ownership (Score:1)
Does ICANN own ICANN.org, or can I steal it now??
Too many of them? Peace amongst them. (Score:1)
how do you activate all of the alternate ones? Do you have to manually mesh together bind files or is it just a matter of correct configs?
Re:free TLDs will solve name problems (Score:1)
Until recently, US taxpayers footed the bill via USG funding of all of that. NSI was paid for it until they started charging for names.
You know, it's not the names you are paying for. It's the registration services.
It's never been "free." Every US citizen has paid for it all along. We are still paying for much of it.
BTW,
Re:hmmm... (Score:1)
No secrets.
Re:Don't fear a fractured root (Score:1)
The root has really always been fractured in the manner you describe, but it is "private" roots which would most likely carry duplicate TLDs, not those which are designed to be public.
If those whose purpose is to provide a public root, there really needs to be an agreement that there won't be colliding TLDs. That way, the user knows that the "basic" channels are consistent. That means that if they carry the USG root as a subset of thier own, that there won't be colliding TLDs initiated by that "basic" channel, i.e. ICANN. By the same token, the roots would not introduce their own versions of
It's a simple concept which ICANN/DoC and the TM lobby do NOT want to see. It would erode their power base because the other roots are not forced to adhere to ICANN policies and rules, such as UDRP, sunrise provisions, etc.
Private, local roots can carry whatever TLDs they like, and always have. There is no commitment to the public, so they are free to structure the system in any way they choose. Nothing wrong with that. Corporations do it every day.
Re:Before.. (Score:1)
Just because a TLD chooses not to ask for inclusion in the legacy root, it does not mean they do not have the right to exist or that the registrants don't have the right to their domain names and have them resolve properly.
Why ICANN wants to fracture the net in this fashion is really clear. Kill any competition to the power they want to reinforce. Competition may render them less effectual.
Re:a suggestion on an open dns system (Score:1)
In a utopian world, ICANN/DoC would stick to the technical and leave the policy to the free market. Fat chance. So we have root systems that do exactly what was intended in the first place. Simple. And it's not going to stop with the few roots that are out there now.
Re:openness vs. universality (Score:1)
Re:I used to think it would be a good idea.. (Score:1)
The process of changing the DNS servers is the same as setting up for an ISP, except that if you are already set up, it is one little change. No big deal. If you change ISPs and that ISP does not point to the servers you need, just use SETDNS again. If you want to change back, it's just as simple.
Re:Understand DNS? (Score:1)
ICANN could change that need, but they won't unless the BoD changes severely. It's political more than it is technical, unfortunately, and that means it's controlled by very deep pockets.
Re:.pro (Score:2)
Re:Let's DO something! (Score:1)
That is why ICANN knows it can act with impunity and dictate its terms to people.
Re: Alternative root systems (Score:1)
Lots, according to Netcraft [195.92.95.5].
Granted, not all of these are owned by Yahoo, but there are plenty more like the two the previous poster mentioned.
--
Turn on, log in, burn out...
Understand DNS? (Score:1)
Doesn't a client just talk to a DNS server that translates an address to an ip? So wouldn't the problem actually be with the DNS servers not being correctly configured?
Here is some info from biztld.net [biztld.net]:
If your ISP has not yet upgraded their domain servers from the ICANN Legacy Namespace to the ORSC INCLUSIVE NAMESPACE Supported by The PacificRoot, you may not be able to resolve many of the new Internet domain names currently being activated. If that is the case, you will need to Upgrade your DNS here.
I wish people who wrote articles had a clue.
Re:Let's DO something! (Score:1)
ISPs are, in fact, beginning to offer the augmented roots. Since the augmented roots also carry the USG root TLDs, neither ISPs nor users lose anything at all. As ISPs, you are just carriers. You choose which usenet feeds you wish to carry, so why not rootzones?
That, people, is a free market as opposed to an imperialist dictatorship. Lawsuits are not going to be any more or less frequent because you can see more TLDs. You carry the "programming" you choose to carry. You offer a value added service at no extra charge.
If you choose ORSC, great. If you choose PacificRoot, great. Superroot, great. They all carry each others' rootzones plus the USG rootzone. Do you have anything to lose? No. Do your users gain? Yes.
Re:Area Codes, Zip Codes & TLDs - oh my! (Score:1)
aaarrrrggghhhh..... (Score:1)
Re:Alternative root systems (Score:2)
Re:Don't do it! (Score:1)
If that were true, I'd probably be first in line to agree with you. However, it is just not the case.
.biz has been in the ORSC rootzone for years. We began operating it earlier this year because it had lay fallow for two years. In any case, it was functional well before any announcements by ICANN or their applicants and we were really shocked to see that ICANN would accept applications for any existing TLD. .biz has a history. There were several "suggestions" for it on Jon Postel's list.
If ICANN had selected
.EVENT, that TLD would also have represented a collision with an existing TLD. The same would be true for several others included in applications, including .home. It just so happens they chose .biz. We certainly did not expect or hope to be the poster child for existing TLDs. It just worked out that way.
ORSC has been around since '96, well before ICANN.
.biz is as old as ORSC.
This was not a pre-emptive strike and ICANN never never entered into it. We also have
.online, .etc, .npo (restricted) and .ngo (restricted).
I would truly appreciate seeing the truth printed. I also have no objection whatever to seeing comments to the contrary, as this brings out the obvious need for communication and education.
There are over one hundred TLDs in the ORSC rootzone. They should be respected and not duplicated. It is not our
.biz which is the culprit here, and we have NEVER made any claim to application to ICANN or decieved anyone. Our TLD resolves to the ORSC rootzone, not the USG root.
If you have negative comments, that's fine. I would simply ask that they be accurate, okay?
:)
BTW, it is our intention to be fair to everyone. We do not favor control by any faction and open registration on FCFS basis is the rule.
I,personally don't think that "sunrise" provisions and UDRP are better proposals for internet users, especially individual domain name holders. If you have any questions about where my head is, go to Tldlobby.com [tldlobby.com] I believe in domain name holders rights, but adhere to the fact that domain names are not property. DN holders should, however, feel a measure of security in their registrations and not have to fear a theft by SWIPO. We do not adhere to a UDRP. Our DDRP pretty much states that it is not our place to judge whether there is any infringement on anyone's rights. We will cancel a registration by court order only. Law is law. UDRP is NOT law.
-Leah-
Re:Don't do it! (Score:2)
Conceded. I went too far with that comment; thank you for the measured response. There's a lot of FUD about, I really shouldn't be adding to it.
Best,
Dave
--
Re:ICANN: server only computer-illiterate folk? (Score:1)
If TLD spectrum is infinite than the
.com spectrum is also infinite.
If we already have infinite space then why do we need more TLD's?
If we are woried about existing TLD's filling up, then shouldn't that tell us that we should be worried about the top level also filling up?
A possible solution (Score:1)
Re:Alternative root systems (Score:1)
Ford requests cars.com
ICANN grants the request because there is no cars.com registered.
Chevy requests cars.com
ICANN creates a web page at a web site they host named cars.com, which has links to both ford.cars.com & chevy.cars.com, revokes the cars.com url from ford, & grants ford.cars.com to Ford, and chevy.cars.com to Chevy.
Here it might be reasonable to order the links on the redirection page in order of their popularity.
Both (Score:1)
Ill set a proxy server to do dns looks of names for a set of root dns servers, and then respond with links from each.
Pick your site.
youcan.here
Open Root @ Address: 199.166.24.43
ICANN Root
Not found...
Then I can link to everyones root servers. God help me if there is more than 5 root servers with the same name...
-Bro.
Re:Let's DO something! (Score:1)
Re:Mozilla To The Rescue? (Score:2)
Network apps do not query DNS servers directly (typically, though there are exceptions). Most simply make calls to the OS's resolver, which then forwards requests to the primary DNS server, which then queries a root DNS server.
So, adding code into Mozilla to use alternate root servers would simply be a waste of time and space.
Hacking BIND would be the way to go. You could have bind check to see which kind of TLD is being requested (official or alternate) and then have it query whichever root server. However, the same problems that are associated with having an alternate DNS system are still present (collisions, etc...).
I used to think it would be a good idea.. (Score:2)
But now that I've spend a good two minutes looking around youcann.net, I don't want to bother mucking with config files just to see "the rest of the web".. and I can imagine a future where we'll see links like "Click _here_, and BTW, you have to connect to Misc. Nameserver XYZ to get there.." ugh.
Perhaps if as well as nameservers, we could have nameserver servers.. ugh.
brooklynbridge.biz (Score:2)
People would have had to be fools to register with an alterate root server that had little chance of ever becoming official.
This guy (the guy who ran the alternate
Re:Alternative root systems (Score:2)
It all comes down to lack of proper management by the registrars. At first companies started registering product names, then
Added to which a fair chunk of
Movie labels are awful about this. Do we really need somemovie.com/.net/.org? Why can't we have somemovie.sony.com? or someothermovie.newline.com, and so on?
It's an extension of registering brand names. An alternative would be somemovie.films.ent.
Really someone should have said a long time ago something to the effect of "if you want a
If only dot-commers would pull their heads out their asses just long enough to see what the hell is going on, we might not have any need for new TLDs (well, yet, anyway).
New TLDs could be useful, but they need to be the right TLDs and properly registered. None of this trying to register
UDRP (Score:1)
Yet when others do, they use UDRP.
Confirms what I say about them on [wipo.org.uk]
Internet2 (or something) (Score:1)
--
a suggestion on an open dns system (Score:1)
Why should two companies squabble over amazon.biz if they could equally well register amazon.business, amazon.store, amazon.shop, amazon.books, whatever. Given an unlimited number of potential TLDs, then such activities as domain squatting would be meaningless. And it would be stupid to try to sue everybody who had a domain *.amazon.* in a world of unlimited TLD space.
All the more reason to have an open DNS architecture, and get rid of these hopelessly ridicuulous moderation bodies like ICANN and WIPO. Changing the way the internet is used doesn't require legal battles or desperate struggles with any of these organizations - all it requires is altering your dns records. That's it. If enough people did this, then the internet would be structured differently by the defacto use of its participants.
So coming down to the nuts and bolts of how does one manage resolution of domains in an unlimitedly large name space? I would see it as the same as we manage usenet - or something analogous to that. Anyone who wants to maintain an as-of-yet unmaintained TLD puts up a server. Or in the case of popular ones like
In the extreme case that there are two different databases for the same TLD with conflicting entries for the same domain name - we let the user decide which one they want to use, and allow them to make custom macros to the others.
For example if you have a collision between foo.bar in one
foo.bar^2
foo.bar^3
foo.bar^junksite
foo.bar^coolstuffhere
( the caret ^ is an invalid dns name character anyway )
What do people think about this?
Re:for god sakes give it up! (Score:2)
.com and for that matter
Re:Follow the money (Score:2)
Rather this is the way they are being seen by ICANN. It certainly isn't the only way to see them.:Don't do it! (Score:1)
This may be coming a little late, since this topic has gotten old, but...
While I am all for alternative roots existing as is evident in my many posts on the topic, I do not think that ICANN (or anyone operating root servers to any DNS) has to respect alternates' TLDs. Yes, that would be great for the alternate - it would save a lot of customer frustration, but just because you made it, doesn't mean you can claim ownership to it. Until you have become the standard DNS - grabbing most of the world population on your root servers, your TLDs don't mean anything to anyone in the world other than your customers. Any attempt at a cease and desist would never hold up in court.
That said, I think ICANN did make a mistake in choosing
.biz . If not for the fact that it is too informal, but for the fact that it is in large use already.
Chris
Open DNS Technologies, Inc. [opendnstech.com]
Re:good luck with it (Score:1)
Yeah, not a bad idea from a neat-freak no-namesolution-pollution perspective, but it's not a grea idea from a practical standpoint. Is someone going to remember movie1.newline.com? No, they will remember movie1.com. People don't ask each other, "Hey, did you see that great new movie from New Line Cinema?" Unless it's a Disney or Lucasfilm movie, most people won't know what studio a movie came from, and probably won't remember if you told them.
Re:ICANN: server only computer-illiterate folk? (Score:2)
As opposed to having every form of RF transmission known to man on the same band. Which would be a closer analogy with the current domain system. Effectivly
Pollution from one 'domain' into another is very easy to happen here, as the
No the
The cause of the kind of "pollution" is gross mis-managment by existing registrars (NSI especially) and by implication ICANN..
The existing
Re:Interesting Link (Score:2)
There are two sets of country TLDs with the same "sickness" as
The first set, e.g.
Maybe the Canadians ane Irish are happy about being!
:-)
--
Let them apply for a top-level domain (Score:2)
That said, I think they should also go away. The have good ideas, but I think they should try to influence ICANN instead of creating a rouge system.
Here's a brief history of DNS, as made-up by me (i.e. I'm doing some guessing, but it seems reasonable):
When the Internet was new, all people had were IP addresses. Having to remember many of these got to be a pain, so they assigned names to each computer and kept them all in the eqivalent of a hosts file on each computer. Maintaining and updating this file got to be a pain as the Internet got larger. Modern DNS was born to solve this. Root servers, each organization responsible for it's own namespace, etc.
DNS is a names-to-numbers system for the Internet! People are treating it like a keyword system for the WWW. It's not. If you want one of those, by all means, make one. Or just use Yahoo or something. Don't try to use DNS, because everything ends up as and, which is pointless - and has polluted the
.com TLD.
Back to my best guess of history: Our current TLD make sense to me, given the history of the Internet.
What new groups have joined the Internet? I see a need for a TLD for individuals, maybe
.per (personal) or .idv (individual). I don't quite know how to resolve the dispute when everyone wants johnsmith.per, though.
If we want to stick with the current TLD, we should enforce, somehow, their correct use. The rules should be strict enough that most organizaions will fit into only one TLD. None of this grabbing foobar.* .
New TLDs need only be as broad as the old. I mean, what is
.museum compared to .com or .org?? The idea of .misc is interesting, but that just encourages the "keyword" behavior. Maybe it could be .keyword if we really can't do without it.
Or else we could scrap the current TLDs. I like that idea too. Make it all usenet-style too.:2)
Area Codes, Zip Codes & TLDs - oh my! (Score:2)
humor for the clinically insane [mikegallay.com]
Re:Area Codes, Zip Codes & TLDs - oh my! (Score:2)
This is more a reaction to the pollution of
he general public doesn't care how much you paid for your domain, they just know that zip codes like 90210 are more popular than 34546, and area codes like 212 are more popular than 618, and yes, TLDs like
Postal codes and telephone numbers are geographic, if you want a completly different one then you either move or get your post forwarded/get an out of area phone line. Anyway I'm sure if there was a soap opera called "XYZ 34546" then that number would become very popular..
Re:The TLDs we really need (Score:2)
You missed the obvious one here, could have something like "music.madonna.music.ent"..
The assumption that a domain implies website is flawed. Also the idea is too complex and time consuming. Far better IMHO to have a working complaints procedure. e.g. someone could simply email to complaints@register.biz a message to the effect that does not appear to be a legitimate business/has ceased trading/etc.
Re:The TLDs we really need (Score:2)
Depends if there is someone called "amazon", he or more likely she is likely to be none too happy about some bookseller using their name.
Re:.pro (Score:2)
Probably not the "professionals" who qualify for it twice, given that it is also an abbreviation of their profession.
good luck with it (Score:2)
First, what's easier to remember:
Say I wanted to host a porn site that featured nothing but really, freakishly tall women from all over the world, and I was going to call it something really witty like Amazon Nudes. By your reasoning, instead of going for, I should go for amazon.nudes.com, or nudes.amazon.com.
Assuming nudes.com exists, I might be able to work out an agreement with them. But I'm at their mercy, and have to pay what THEY ask for. My own
.com name might be cheaper, and easier to remember. As for nudes.amazon.com... well, let's just say I don't like lawyers, and wouldn't invite them in by even TRYING to do that.
"There's a party," she said,
"We'll sing and we'll dance,
It's come as you are."!
.com vs other TLDs (Score:2)
Frankly I think it says you are late to the boat and are lucky to jump on.
Now with
Business-wise this is not something I respect..
The TLDs we really need (Score:2)
.sex,
.person -- personal webpages
.tld -- propose new TLDs to ICANN as needed, plus link to registrars who offer existing TLDs.
.ent -- entertainment sites (for movies and games)
I know there are others that can be added to this list, so feel free to suggest more.. If the registrant's application is rejected, the registrar would then suggest a more appropriate TLD for the site to use, and ask the registrant if he/she would like that TLD instead. For example, Amazon.com would be allowed to keep
Re:Mozilla To The Rescue? (Score:2)
The reason web browsers don't have DNS stuff is that it would just be unnecessary bloat. It doesn't belong in the web browser; it belongs in the TCP/IP stack. The web browser (probably) isn't the only program you run on your computer that needs to look up addresses. If the Mozilla team put DNS stuff into their project, everyone would rightfully laugh at them.
---
Re:Don't do it! (Score:2)
Hi Chris,
Damn good response. There is a lot to think about in there. To respond individually will take time (I'd be glad to do so in email if you wish) and I think this story will have dropped off the front page by the time I'm done, so with your permission I'm going to concentrate on this:
I think this is at the centre of our differing opinions. Yes, I think the internet needs to evolve. No, I don't think you're doing so in the right way.
I like how the internet has evolved over time. I think it's fascinating to see a system built on real consensus come so far and overcome so many obstacles. I think it's amazing; there are lessons to be learned here in how we live the rest of our lives.
I don't think alternate roots are a part of this process - not yet, at any rate. DNS is a rigid hierarchy, and as a result, it's brittle. Conflicts must inevitably arise as we are beginning to see with the alternate
.BIZ domains. I think this undermines the usefulness of DNS, and since we have never had to live with an inconsistent DNS before, I don't think the potential consequences are clearly understood.
I can't buy your assertion that DNS can never be replaced with proprietary solutions. These guys are masters of embrace+extend. It wouldn't happen overnight, but the prize of dominance is high - you become the de facto DNS replacement.
If this were any other protocol, big deal. But DNS is fundamental to the internet. When someone reports a network problem, it is the one thing I always check first, because it's the one ubiquitous protocol. I believe that:
I've been told "don't hinder progress" a lot. I've seen it used to justify everything from HTML news posts to
.DOC as an email standard. That alternate roots exist is not just an indication that there is something wrong with the current system; I think it's a tragedy. But of all the alternate people who could hold the root - and in a world of 90pc Windows desktops, I do believe there'll eventually be only one root that matters - I have more faith in ICANN and its structures to ensure the continued life of a consensus-built net than I do anyone else.
There's been a lot of frustration at ICANN I think because of the appointed directors. That is changing. Much of the board is different, and what's left are also changing. The At Large directors are taking their seats, and more will be elected (it won't be limited to five as previous slashdot stories may have had you believe). There are a lot of good people on that board now, and not just on the At Large side.
And if we shouldn't give those guys a chance at making it work, who the hell else should we trust?
Your reasoning is excellent, Chris, and I wish I could be as optimistic as you. But I fear that in the root DNS we have encountered one of the potential vulnerable spots of the whole internet, and right now I do not wish to take the risk of breaking it.
Dave
--
Re:Mozilla To The Rescue? (Score:2)
OK then, don't hack Mozilla, hack Wsock32.dll. That's supposedly not too hard to do, although I've never had a reason to try it myself. I'm pretty sure there are some instructions on how to hack this DLL on the 'net.
Thanks for all the "bind" stuff, but it's totally useless on Windows.
Do it! But please consider Opennic root servers (Score:2)
I do think for myself, as do most people on the net with the expertise and clout to choose their own root servers.
Forking is a grand tradition of the internet. Disagreement and chosing one's own path is inherent in the very philosophy behind much of the internet.
What the ICANN is engaged in is a profound usurping of the open and free nature of the net and a powergrab of megalomaniacal portions, and should be resisted and fought by all good people everywhere.
Six months ago I changed my employer's root servers to point to opennic [unrated.net]. I saw what ICANN was becoming then and chose not to wait until the proverbial fertilizer struck the rotating blades, but rather to act proactively.
I must say I have been impressed at how well opennic does work. Not a single DNS problem or complaint in six months, and name resolution times that are actually more snappy than before.
From a political/freedom point of view Opennic is good in that it is truly democratic, supports both the alternic and icann namespaces (sans the new domains), as well as democraticly created TLDs of its own.
I encourage others to take a look-see. It is my hope that FreeNet's pending naming/key service will allow us to dump DNS altogether, but until that happens opennic is at least open, fair, and democratic, unlike ICANN and many of its corporate rivals.
And so what if the internet becomes fragmented? Worst case, we can send each other our IP addresses in the exact same way we share phone numbers today. More likely, such fragmentation would take the wind out of the sails of such entities as ICANN, preventing both their power grab from succeeding and perhaps pre-empting similarly inappropriate powergrabs in the future and leading to some kind of reasonable and equitable compromise. Do you really think entities such as ICANN and NSI would compromise in any fashion otherwise? Based on their behavior to date, not bloody likely.
With any luck we'll be able to replace the heirarchical, centrally controlled DNS namespace with something less prone to corruption and domination, such as that being proposed by FreeNet. Until then, please consider opennic as a free, democratic alternative to ICANN and Alternnic.
.biz? (Score:2)
What is someone in another country going to think when they look at the word "business" and look at ".biz" and wonder where the hell the "z" came from?
! | https://slashdot.org/story/00/11/28/030209/when-worlds-collide-the-new-dot-biz-and-the-old | CC-MAIN-2018-26 | refinedweb | 6,767 | 73.68 |
perl -e "use Some::Module";
[download]
My thought is to put something in a do or eval. What would be the best way to handle this task?
use Module::Locate 'locate';
print "Couldn't find Some::Module"
unless locate 'Some::Module';
[download]
_________broquaint
use Class::Inspector;
if ( Class::Inspector->installed($module) ) {
...
}
[download]
eval { require Some::Module; };
if ($@) {
print "Can't find Some::Module\n";
}
[download]
If you want to "use" a module at runtime, you can eval a quoted string:
eval "use $mymodule";<br>print $@ if $@;
That's funny, I was just doing a little light reading on Checking to see if a particular Module is installed when your question, Testing for a module's presence appeared on Newest Nodes. Deja vu. If you haven't already, you might want to try Super Search for additional discussions on this from the past.
Perhaps the problem is twofold: Perlmonks doesn't group questions in SOPW. Everything is thrown into one big fat section. This makes searching more difficult. The other problem is the English language. There are so many synonyms like "test" and "check".
I used super search and googled thepen. The problem with such a search is the keywords: I used "test" and not "check," and "presence" or "exists" and not "installed".
... which is why I said "*If* you haven't already" tried supersearch ... just like you, I noticed the keyword disparity and I did not automatically assume you had overlooked supersearch.
As far as your observations on SOPW organization and English synonyms, my initial response to you was specifically intended to address those (percieved) problems.
How?
Well you see, now there is a new node that contains *all* of the different keywords in one place. It patches the 'keyword gap'. That's how humans like you and me can help make SuperSearch and Google even better.
Of course there are many other synonyms out there that remain "unconnected", but that's the name of the game ...
Wash, Rinse, Repeat.
I usually do something like this early on in my code:
my $HAVE_MODULE = eval 'require Some::Module; 1' ? 1 : 0;
[download]
if ($HAVE_MODULE) {
do_stuff();
} else {
do_something_else();
}
[download]
#!/usr/bin/perl
use warnings;
use strict;
use POSIX qw(_exit);
foreach my $mod (qw(CPAN No::Such IO::Socket Not::Installed))
{
printf "%20s: %-3s\n",$mod,is_module_installed($mod)?"Yes":"No";
}
[download]
sub is_module_installed
{
my($mod)=@_;
return (system("perl -M$mod -e 1 2>/dev/null") == 0);
}
[download]
sub is_module_installed
{
my($mod)=@_;
defined(my $pid = fork)
or die "Fork error: $!\n";
if ($pid)
{
# Parent
waitpid $pid,0;
return $? == 0;
}
else
{
# Child
close(STDERR);
eval "use $mod;";
_exit($@?1:0);
}
}
[download]
You could make it a little faster usingfork, to avoid firing up a new interpreter from scratch.
Updated: Tanktalus points out that the copy of perl that system('perl ...') will start may not be the same as the running one; fork will use the currently running interpreter in a new process, so avoids that problem. It should also be a bit faster.
Updated: Fixed Tanktalus' name in previous update. :) Also addressed his concerns about atexit using POSIX::_exit.
We take some risks here either way. "perl" may not be the currently running perl (here, for example, "perl" is the stock perl that comes with the Linux distro I'm running - 5.8.0, while "perl5.8" is a symlink to the latest perl5.8 binary I've compiled - 5.8.5), while using $^X may also not be quite useful (since the currently executing code may be running in an embedded perl rather than a standalone perl executable - isn't that how mod_perl works?).
Personally, I'd just use require as others have pointed out. And if you don't want to use the extra memory, you can delete it from %INC afterwards - perl will then be able to re-use that memory. It does mean that you'll get a negative when the module exists and is found, but doesn't compile, but that's probably the same thing as not being there, really.
Memory is an issue! Can you show me how to delete it from %INC after I required the module in an eval? Thanks.
(Ignoring the typo of missing the 'n' in my alias ;->)
While fork can work great on Unix, it's not quite so great on other platforms. EMX on OS/2 (and maybe Win32) handles fork - but it's not really faster. I doubt ActiveState or Cygwin perls handle it (if either do, it'll be Cygwin). So you've simply hit another gotcha: cross-platform compatability. (Traversing @INC and using File::Spec is fast, tight on memory, and completely cross-platform!)
Even on Unix, fork may not be that great. Imagine an embedded perl. The main process (the embedder) has some cleanup in an atexit() in C. That cleanup may include committing or rolling back transactions, deleting temporary files that were in use, or other such behaviour. When you fork, and then the child exits, the atexit handler kicks in and does something like this - and now the parent process is going to be in a wierd place that is going to be really painful to debug. Especially if this ends up on a user's machine and the user imported your module. The C developer has no idea what is causing it, the perl user has no idea, and you're not really involved. Dangerous!
Fork is a dangerous tool - although it can be useful, you have to be really really careful of when you use it, and how you use it. Let's just stick to searching @INC. It's fewer lines of code, too ;-}
In the face of possible changes to the module search algorithm, actually asking Perl to find the library and say whether it worked or not will always work, while re-implementing the library search yourself will only work for as long as nothing changes.
Using a module designed to do this, as others have suggested here, is a good compromise.
As for the cross-platform concerns, I have used fork on ActiveState Windows; it seems like a lot would break without a working copy of this function. And POSIX::_exit seems to avoid your other concerns.
Also, to clarify, I don't think using fork will be particularly fast; just faster than using system, which was my original proposal.
But:
% perldoc perlfork
...
AUTHOR
Support for concurrent interpreters and the fork() emulation wa
+s implemented by ActiveState, with funding from Microsoft Corporation
+.
This document is authored and maintained by Gurusamy Sarathy <.
+.@activestate.com>.
...
[download]
#>pkg_info | grep -i "MODULE"
[download]
If you try to eval "use Some::Module" then the module will be loaded and any side-effects it has (such as exporting) will happen.
I suppose if you were feeling particularly crazy you could try something like this ...
print 'the module exists' if(!system("perl -MSome::Module -e exit"));
[download] | http://www.perlmonks.org/?node_id=428412 | CC-MAIN-2015-48 | refinedweb | 1,153 | 64.3 |
With the large volume of games now being released on the App Store and Google Play, cool names for games are going to get harder and harder to come by. And it’s easy to see why you need a cool name for your game.
The name you choose will have a massive impact on how you market your game. It’s one of the most important decisions you’ll make before release.
When naming your game, a few simple concepts can help you get it right. Along with these guidelines, there are also a number of tools and methods to help you pick the best game name.
Whatever method you choose, the end result is what matters most. The name you pick will form a large part of a player’s first impression of your game.
And in today’s market, that first impression needs to have a bigger impact than your competitor’s. With so much at stake, can you afford to leave your game’s name to chance?
How to Make your own Games with Felgo
Before we dive deep into the matter, we wanted to show you how you can create your own game with Felgo. This small code example is a random game name generator that you can try right away!
import Felgo 3.0 import QtQuick 2.0 GameWindow { id: gameWindow Scene { id: scene Rectangle { anchors.fill: parent.gameWindowAnchorItem color: "white" } Column { anchors.centerIn: parent spacing: 10 Text { id: gameName anchors.horizontalCenter: parent.horizontalCenter text: "Click Generate!" font.pixelSize: 20 } StyledButton { anchors.horizontalCenter: parent.horizontalCenter text: "Generate" onClicked: { gameName.text = randomName() + " " + randomName() } } } } function randomName() { var names = ["Awesome", "Clash", "Flappy", "Brutal", "Candy", "War", "Crush", "Adventure"] var randomIndex = Math.floor(utils.generateRandomValueBetween(0, names.length)) return names[randomIndex] } }
Do you need help with launching your new mobile game? Book a session with Felgo’s experts and bring your project to life!
Download Felgo Now and Create Your Own Games!
Why is it Important to Use Cool Names for Games?
In the words of Peter Main, a former Nintendo of America executive:
“The name of the game is the game.”
Peter Main had a background in the food industry before joining Nintendo of America. This lack of game industry knowledge didn’t stop him from spearheading Nintendo of America’s early success.
He understood that for Nintendo to thrive, they’d have to produce high quality games that excited players. Cool game names were one of the things that excited players.
In his mind, the names of the titles they produced were just as important as game play or story line. Game names had to capture the imagination of the public, just like movie titles. Names also had to be memorable and unique so people could ask for them in stores. In essence, cool game names were a cornerstone of Nintendo’s user acquisition strategy.
It was this approach that won Peter Main a marketer of the year award in 1989.
Get beautiful apps from experts. We are your partner to develop modern tailored apps for iOS & Android.
User Acquisition
User acquisition is the main reason for picking a cool game name. A strong user acquisition strategy is the catalyst for success that mobile games need. A good strategy will focus on acquiring as many valuable users in the shortest time possible.
Valuable users are the players that play the most and are willing to spend money. You’ll want to find these users soon after you launch as promoting and marketing your game can become expensive.
And even if you’re advertising your game on a shoestring budget, it’s a time-consuming activity. Your focus after launch will be split between different tasks, so having one activity take up most of your time isn’t ideal.
Get your name right and you’ll have more time for bug fixes, updates and improvements.
Word of Mouth
One aspect of user acquisition that’s hard to measure is how many users you can gain through word of mouth. If players enjoy your game, then they’re likely to tell their friends about it. At this point, the friend will consider if they should download your game as well.
This is where your game name comes into play. It’s the key piece of info that your player’s friends will need to download your game. And if you get it right, it can help to convince your player’s friends that your game is in fact worth downloading.
You also want to make sure that your game name is compelling enough for them not to get distracted by other titles. The worst case scenario would be for a potential user to enter an app store to get your game and then end up with something else.
Cool vs. Uncool
Here’s a simple example that shows this in action. Which game title seems more appealing: Metal Gear Solid or If It Moves, Shoot it?
Although you can’t tell what kind of game Metal Gear Solid is just from the title, it’s a unique name that creates curiosity and intrigue. If It Moves, Shoot it sounds simplistic and more than anything, it describes gameplay that’s been seen thousands of times before. Based on the name alone, Metal Gear Solid has already come out on top.
An interesting name can bring players to your game all on its own. But it’s not always easy to know what will catch people’s attention.
On top of that, you need to factor in app store optimization guidelines when naming your title. While cool game names can bring in users through word of mouth, it’s also important that your game can be found the traditional way in app stores.
What to Consider When Choosing Cool Game Names?
Whether it’s cool or not, your mobile game’s name needs to be optimized. There is a range of ways to optimize your game’s app store page. These include having a great icon and taking full advantage of the allowances for descriptions and screenshots.
For a full rundown on how to master ASO, check out this post.
When it comes to the cool names for games, here are the key points for you to consider.
Prioritize the First 25 Characters
The first 25 characters of your game’s name are the most important. These are the characters that will be displayed as potential users browse games on app stores. In total, you’re allowed to have 255 characters in your game’s name on the App Store, while Google Play has no limit. Some people even suggest prioritizing the first 11 characters more than anything.
Use URL Friendly Characters
It’s essential that you only use standard characters when naming your game. Special characters such as copyright or trademark symbols are to be avoided at all costs. They significantly damage your chances of turning up in app store searches. Your game name should also be easy to type on a mobile device. Using URL friendly characters will help you to achieve this.
Use Keywords to Improve Search Results
While you can use the first 25 characters of your title to give your game a cool name, the rest of your character limit should include some keywords to improve your chances of being found in user searches.
This would look something like this, “Game Title: keywords describing the game such as genre, themes, feature, etc.”
The Game Title is how the game will be known to players. The keywords are there to help out the app store search engines and make sure that your game appears for the right search terms.
Pick Something Unique
This one might seem obvious but picking a unique name for the app stores might be harder than you think. The sheer volume of titles means that there’s bound to be something similar to whatever you have in mind.
The best way to figure this out is to search for the title you have in mind and see what results turn up. These are the games that you’ll have to compete against.
When you get a whole bunch of names similar to what you had in mind then you’ll need to go straight back to the drawing board and come up with something completely new.
If there’s a similar game name to yours, then you’ll have to compete with it for rankings. And if it’s already well-established, this could prove to be a waste of time. You might be able to overtake the established title, but it could require a lot of effort. Picking another name might prove to be a lot easier in the long run.
If you’re satisfied that the titles that turn up for your game name are different enough, then you’re in the clear!
Naming for App Store Optimization
It’s important to consider app store optimization when picking a name, even if it means making changes to a name that you’ve become attached to. You have to remember that the ultimate goal is to get players for your game.
If a slight name change can increase the amount of players you get, then go for it. You’ll be happier with the results in the end.
Rules of Choosing a Name
The following guidelines will help you to pick a great name for your game. It might be hard to pick a name that meets all these criteria but they’re worth keeping in mind. Of course, rules are also there to be broken!
Avoid Filler Words
The most important rule of naming your game is to avoid including words that don’t need to be there. This rule applies to the first 25 characters of your game name, which you need to prioritize for ASO. These 25 characters will form the title of your game.
Any keywords that you want to include should be included after this 25 character mark.
Be Concise
This might seem obvious if you’ve already prioritized the first 25 characters of your game title. What might not be so obvious is the word count. It’s best to keep your game name concise, meaning you should use less than 3 words to name your game.
Any more than 3 words and you’re bordering on a full length sentence. Keep your game name short and snappy.
Be Authentic
Authenticity is important to gamers. One way to seem completely inauthentic is to try and piggy back on the success of another game title. Try to avoid using “Flappy”, “Angry” or “Clash” in your game title. These will all come off as attempts to capitalize on the popularity of other games.
The most important thing to remember is that players should be able to relate your game title to your game in some way.
Pick Something Players Can Say and Spell!
Breaking this rule is not recommended. When you pick your game name, it should be clear how it should sound. And it should also be easy for players to spell once they’ve heard it aloud. If you pick a game name with ambiguous pronunciation, you might be damaging your word of mouth acquisitions.
Confusion as to how your game is called and subsequently how it’s spelled will make it harder for players to look it up in app stores. It can also damage your marketing and promotion efforts if potential players don’t realize what game is being discussed.
Name Generation
If you’ve followed all these guidelines and concepts, and you’re still not happy with the names you’ve come up with, you have another option. There are a couple of tools that can help you to generate names for your games.
Of course, they might need some tweaking to become app store ready. Here are a couple of tools and methods to help you with name generation.
The Video Game Name Generator
The video game name generator is a free online tool that creates game titles at the click of a mouse. While the suggested titles mightn’t be ready for the app store or seem like the title of a specific game, it can provide you with some good jumping off points. You can check out the Video Game name Generator here!
Business Name Generator
There are a number of online tools like the Video Game Name Generator that you can use to get title ideas. Another one to check out is the Shopify Business Name Generator. Just like the Video Game Name Generator, it gives a tonne of suggestions at the click of a button.
Product Name Generator
The Product Name Generator gives you name ideas but it also lets you know if there is a domain available for that given name. This can be useful for setting up landing pages for your game. A landing page can be another piece of your user acquisition strategy so this tool can be a great time saver.
Worksheets
Although it’s not the most high tech solution, sometimes getting out a pen and paper can still be an effective tool for naming your game. There are two great worksheets that can help you with this process.
The first one is a naming worksheet that helps you to assign different names to your game according to some pre-defined categories.
The second worksheet asks you to apply a set of criteria to your name in order to make sure you create a strong brand. You should apply these criteria to any name you’re planning to use. It asks a number of useful questions to ensure you have a name that works.
Ready to Name Your Game?
Now that you know what you’re doing, you can name your game with confidence. You can let us know your game’s name in the comments and make sure to share this post with your friends. You can use the buttons on the left to Tweet or Like this article!
Download Felgo Now!
Get your free cross-platform development tool and create awesome mobile app and game titles in just a few days. Felgo was ranked higher than Unity, Corona and Cocos2D for customer satisfaction, ease-of-use and time saving!
Watch This!
Check out this quick tutorial on the basics of Felgo and how to make a game in 15 minutes!
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- | https://blog.felgo.com/mobile-development-tips/cool-names-for-games-how-to-pick-the-best-title | CC-MAIN-2021-04 | refinedweb | 2,409 | 72.26 |
A simple python CLI that can relieve stress while running long scripts.
Project description
Simple python tic that can ease your mind
We've all been there...staring at the empty terminal while your fans go crazy.
I hope this relieves the intensity while you are waiting for that return statement.
How To
- Install
tictronomefrom
pip
$ pip install tictronome
- Import
Ticobject from
tictronome.
from tictronome.tictronome import Tic
- Initialize it before some time-consuming process.
tic = Tic()
- Call the
startmethod of
Ticto start.
# start of your script tic.start() some_long_function(...)
- Call the
stopmethod of
Ticto stop.
# end of your script tic.stop()
- Run your script!
$ python3 your_script.py
Notes
When you create a Tic instance, you can change the color of the loading character by adding colors.
Disclaimer
Depending on your shell settings(themes) colors might appear differently.
from tictronome import Colors Tic(color=Colors.CYAN)
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
tictronome-0.1.1.tar.gz (4.5 kB view hashes) | https://pypi.org/project/tictronome/ | CC-MAIN-2022-33 | refinedweb | 182 | 70.7 |
How TO Produce animal voices.. In c++ code Try mixing the infinite frequencyes and lasts of beeps ;)
reverse a wordyou must write after the code, just before the last breace
"system ("pause");
return 0;"
or, even ...
comparing characters of string c programmingthis compares nothing ;) try this
[code]
#include <iostream>
using namespace std;
int main ()
{
cha...
How make my player jumpI'm not an expert of this kind of situation but I think something like this
[Class].setYpos([Class]...
delete a value in an arrayI modifyed the code inserting a i-- to fix this error... Thanks @JLB... :)
This user does not accept Private Messages | http://www.cplusplus.com/user/luigibana96/ | CC-MAIN-2014-10 | refinedweb | 103 | 62.98 |
Style in jQuery Plugins and Why it Matters
"Cowboy" Ben Alman | May 14, 2010
Most plugin authors are web designers or developers who have created some cool functionality that they want to share with others. Unfortunately, many plugin authors haven't spent enough time examining other people's code to see what really works and what doesn't.
While the plugin authoring choices you make are ultimately up to you, hopefully you will find these suggestions useful when it comes to developing your personal style and creating plugins, as they are based on real code used in a number of popular jQuery plugins.
This article is split into these main sections:
1. A few "pro tips"
When you start creating your plugin, the choices you make will greatly influence not just how well your code performs, but also how maintainable your code is. Remember that as better performing plugin code leads to better overall website or application performance, well thought out, readable code will help ensure that future efforts to maintain or update your plugin will not end in frustration.
Being able to find a balance between these concerns might seem daunting, but as long as you are mindful of your options, you should be able to find a comfortable middle ground.
DRY = Don't repeat yourself
Unfortunately, developers often learn this the hard way. Basically, after spending enough time and effort having to maintain enough poorly-organized code, eventually you'll learn that it's ultimately easiest when you don't repeat yourself. So, in the interest of saving you a lot of headaches later on, I'm mentioning this now.
If you're using the same complex expression in more than one place, assign its value to a variable and use that variable. If you're using the same block of code in more than one place, extrapolate that code into a function that can be called with arguments as needed. The fewer versions of same- or similarly-functioning code you have in your plugin, the easier that plugin is to test and maintain.
Also, because you might not realize that you've repeated yourself until you step back and look at your code, be sure to do just that periodically. Step back, look at your code, and DRY things up a bit!
// Not great: that's a lot of repeated, not-at-all-DRY code. $('body') .bind( 'click', function(e){ console.log( 'click: ', e.target ); }) .bind( 'dblclick', function(e){ console.log( 'dblclick: ', e.target ); }) .bind( 'keydown', function(e){ console.log( 'keydown: ', e.target ); }) .bind( 'keypress', function(e){ console.log( 'keypress: ', e.target ); }) .bind( 'keyup', function(e){ console.log( 'keyup: ', e.target ); }); // Good: code is more DRY, which is a huge improvement, but it's not as // obvious what is being done at first glance. function myBind( name ) { $('body').bind( name, function(e){ console.log( name + ': ', e.target ); }) }; myBind( 'click' ); myBind( 'dblclick' ); myBind( 'keydown' ); myBind( 'keypress' ); myBind( 'keyup' ); // Better: the handler has been generalized to use the event.type property, // and it's totally obvious what is being done, even at first glance. function myHandler( e ) { console.log( e.type + ': ', e.target ); }; $('body') .bind( 'click', myHandler ) .bind( 'dblclick', myHandler ) .bind( 'keydown', myHandler ) .bind( 'keypress', myHandler ) .bind( 'keyup', myHandler ); // Best: really knowing how the jQuery API works can reduce your // code's complexity and make it even more readable. $('body').bind( 'click dblclick keydown keypress keyup', function(e){ console.log( e.type + ': ', e.target ); });
Use the jQuery API
As you can see from the example above, there's no substitute for knowing how to best use the built-in jQuery methods. Read through the API documentation and examples, and by all means, examine the jQuery source as well as the source of other plugins.
The better you know and use the jQuery API, the cleaner and more readable your code will be. In addition, by utilizing the built-in jQuery methods, you will often be able to eliminate portions of your own code, resulting in less for you to have to maintain.
Avoid premature optimization
While optimization can be very important, it's not as important as just getting your code to work, period. The biggest issue with optimization is often that optimized code is often not as easy to read and understand as the pre-optimized code.
The first rule of optimization is only optimize things that need to be optimized. If everything works great, and there are no performance or file size problems, you probably don't need to refactor that code to be faster or smaller and completely unreadable, thus unmaintainable. Spend more of your time working writing code that works.
Here's some code that I progressively optimized for size, resulting in something that minifies quite small, but is pretty much incomprehensible. How maintainable is that? Not very. Either way, I didn't even consider optimizing it until I had it completely working.
The second rule of optimization is “don't optimize too early”. If you're still fleshing out your API, don't go complicating things by reducing very easy-to-read but slightly inefficient logic to a point where you're not even sure what it's doing any more. Save that for the end, after you've written your unit tests. Then you can actually refactor your code while checking for regressions.
Avoid over-avoiding premature optimization
It's unacceptable to use the aforementioned "don't optimize prematurely" mantra as an excuse to write bad code, which brings me to this point. While your code should be as readable as possible, it shouldn't be blatantly un-optimized.
For example, by caching jQuery objects and/or chaining methods, you can see a huge performance boost in your code. A lot of people have said a lot of things about this topic, and while there are many more performance anti-patterns that you should be aware of, I'll simply illustrate this with one example and leave it up to you to do some more performance "best practice" research on your own.
// Bad: very slow, and not even remotely DRY. $('#foo').appendTo( 'body' ); $('#foo').addClass( 'test' ); $('#foo').show(); // Good: the jQuery object reference is "cached" in elem. var elem = $('#foo') elem.appendTo( 'body' ); elem.addClass( 'test' ); elem.show(); // Even better: jQuery methods are chained. $('#foo') .appendTo( 'body' ) .addClass( 'test' ) .show(); // And you can even combine caching with chaining, which can be especially // useful in conditionals. var elem = $('#foo').appendTo( 'body' ); if ( some_condition ) { elem.addClass( 'test' ); } else { elem.show(); }
2. Playing nice with others
If you want people to use your plugin, it has to not only provide the functionality that they're expecting, but it has to coexist peacefully with the other code they're using. If people try to use your plugin, and it messes around with other parts of their code, they will stop using your plugin.
People want to use plugins that behave themselves, and if your plugin doesn't behave, they'll find one that will.
Don't modify objects you don't own
I'm not going to get too in-depth on this topic, because Nicholas Zakas already has, so read his article on not modifying objects you don't own. This article directly addresses "if people try to use your plugin, and it messes around with other parts of their code, they won't use your plugin anymore."
Declare your variables
Always declare your variables before you use them, using the varkeyword. Instead of undeclared variables being local, like you might expect, they are actually created in the global scope. These "implicit globals" can conflict with other code, which is bad--see the previous point.
In addition, when trying to maintain code that contains implicit globals, it's often very difficult to know where else those variables are being used, making future maintenance much more complicated and time-consuming.
Use a closure
To that end, by putting your code inside a closure (aka. function), your plugin can have private properties and methods that won't "leak" out into the global namespace. In addition, by keeping this function anonymous and executing it immediately, even this function won't clutter up the global namespace with a leftover reference.
By surrounding your closure function with (...)(); you invoke it immediately, and you can pass in any variables you'd like. In the following example, the top-level plugin function is executed immediately, passing in a reference to jQuery that will be used internally as $. The benefit of this approach is that even if jQuery is operating in noConflict mode, you can still use $in your code, keeping your code very readable.
// Ultra-basic jQuery plugin pattern. (function($){ var myPrivateProperty = 1; // Call this public method like $.myMethod(); $.myMethod = function(){ // Your non-element-specific jQuery method code here. }; // Call this public method like $(elem).myMethod(); $.fn.myMethod = function(){ return this.each(function(){ // Your chainable "jQuery object" method code here. }); }; function myPrivateMethod(){ // More code. }; })(jQuery);
Use namespaces when binding event handlers
When binding and unbinding event handlers, use a namespace or function reference for easy and reliable unbinding, without fear of conflicting with other code.
// Bad: this method will unbind every other plugin's 'body' click handlers! $('body').bind( 'click', handler ); // Bind. $('body').unbind( 'click' ); // Unbind. // Good: only the 'body' click handlers bound with the 'yourNamespace' // namespace are unbound! $('body').bind( 'click.yourNamespace', handler ); // Bind. $('body').unbind( 'click.yourNamespace' ); // Unbind. // Also good: only the 'body' click handlers that reference the 'handler' // function are unbound (note that since this method requires a function // reference, it will not work for events bound with an inline anonymous // function) $('body').bind( 'click', handler ); // Bind. $('body').unbind( 'click', handler ); // Unbind.
Use unique data names
Just like with method names and event namespaces, when storing data on an element, use a name that's sufficiently unique. Using an overly generic name can cause conflicts.
Also, if you're going to be storing many values in element data, instead of using many individual data names, consider using a single object which will effectively provide a namespace for your properties.
function set_data() { var data = { text: 'hello world', awesome: false }; // Store data object all-at-once. $('#foo').data( 'yourPluginName', data ); // Updating data.xyz properties will update the data store as well. data.awesome = true; data.super_awesome = true; }; function get_data() { var data = $('#foo').data( 'yourPluginName' ); alert( data.super_awesome ); } set_data(); get_data(); // Alerts true.
3. Elements of style
It can be argued that because coding style is fairly subjective, there's no right or wrong way to do it. The thing is, that argument gets thrown out the door as soon as other people start looking at your code, because they might need to be able to decipher said code in order to track down a bug or add a feature.
When you're coding, don't be afraid to step back for a moment and take a look at the code that you are producing. Is this code well organized? Does it make sense? Is it readable? If it isn’t now, it certainly won’t be in six months when you want to add a new feature.
The most important thing to remember here is that you need to be consistent in developing your own personal style. You don't have to follow these style guidelines, but if you decide to do things your own way, at least have a good reason for it.
Also see Douglas Crockford's article onCode Conventions for JavaScript, upon which many of these suggestions are based.
Line length
Believe it or not, some people still code in a terminal shell, which means that any line over 80 characters is going to wrap in an ugly way. Yes, these people can make their terminal window wider, but you should also ask yourself if any line of code really needs to extend past column 80. If it doesn't, since you can manually wrap your code onto another line better than someone's dumb text editor can, do it!
Suggesting line length limits may sound rather draconian to you, but let me put it into a slightly different context. You know all those web sites with code samples that are hundreds of lines long but also have a horizontal scrollbar, except you can't see the horizontal scrollbar because the code sample is taller than the window? So, you scroll down to the bottom of the code sample to move the scrollbar to the right, except the code you want to read has been scrolled up and out of the viewport, so now you have to scroll back up?
You probably get the idea. Horizontal scrolling in text editors and code samples is horrible. Kill two birds with one stone here, keep your line lengths reasonable, and everyone wins.
Tabs vs. spaces
The age old debate: which is better, tabs or spaces? While it can be argued that the tab character is a more semantically meaningful way to express indentation, and while many text editors allow changing the width of the tab character, not all do. Tabs render especially wide in browsers' source views (and often in article code examples as well), and tabs appearing at the end of a line of code can make quite a mess of end-of-line comments.
2- or 4-space indents, on the other hand, offer enough indentation to differentiate levels of nesting in your code, while leaving enough room on the line for lots of actual code. While many people advocate using 4-space indents, I personally recommend 2-space indents in order to minimize horizontal scrolling or wrapping.
Which of these is easier to read?
// Tabs (simulated with 8-space indents) function inArray( elem, array ) { if ( array.indexOf ) { return array.indexOf( elem ); } for ( var i = 0, length = array.length; i < length; i++ ) { if ( array[ i ] === elem ) { return i; } } return -1; }; // 2-space indents function inArray( elem, array ) { if ( array.indexOf ) { return array.indexOf( elem ); } for ( var i = 0, length = array.length; i < length; i++ ) { if ( array[ i ] === elem ) { return i; } } return -1; };
Regardless of your preferred tab settings, the most important thing is that you indent consistently, lining up similarly-nested code blocks appropriately, for maximum readability. Still, wouldn't this code be a bit easier to follow if there was less horizontal scrolling?
Crowding arguments or code blocks
Sometimes less is more, but with whitespace, more is often more. Giving function arguments or code blocks a little "breathing room" often makes things a little more readable. Like anything else, whitespace can be taken to the extreme, resulting in less readable code. Since the ultimate goal is to create more maintainable code, you need to learn to use discretion here, but just remember: don't be overly frugal with whitespace!
Which of these is easier to read?
// Crowded. function inArray(elem,array) { if (array.indexOf) { return array.indexOf(elem); } for (var i=0,length=array.length;i<length;i++) { if (array[i]===elem) { return i; } } return -1; }; // Ahh.. a little bit of breathing room. function inArray( elem, array ) { if ( array.indexOf ) { return array.indexOf( elem ); } for ( var i = 0, length = array.length; i < length; i++ ) { if ( array[ i ] === elem ) { return i; } } return -1; };
Write useful comments. You don’t have to write a book, and you should avoid commenting completely obvious code, but just think about who your target audience is going to be, and consider the possibility that they might not be familiar with the variable you happen to be referencing or the specific pattern you may be using.
The person looking at your source code is often someone who is troubleshooting a bug, wants to extend your plugin to make it more awesome, or just wants to learn from what you've done. You should do everything you can to facilitate comprehension of your code.
Also, more often than not, that person who wants to troubleshoot a bug or extend your plugin is you, except that just enough time has passed so that you don't remember why you coded that bit the way you did. Not only will your comments help other people, but they'll help you as well!
Curly braces
While it's technically valid to omit curly braces around blocks in certain scenarios, it can make code look slightly more ambiguous, so don't do it. Always use curly braces to avoid ambiguity where possible, and your code will be more readable.
Also, in JavaScript, because the location where an opening curly brace is placed can change how your code behaves, be sure to format your return statements properly by specifying any opening curly brace on the same line as the returnstatement.
And to be consistent (because consistency is Good), try to put all your opening curly braces on the same line as the statement they follow.
// Not great: this example is relatively obvious. if ( a === 1 ) b = 2; // Bad: this example is somewhat ambiguous.. if ( a === 1 ) b = 2; c = 3; // Oops! This is how the previous example is actually interpreted! if ( a === 1 ) { b = 2; } c = 3; // Good: this example is completely obvious. if ( a === 1 ) { b = 2; } // Good: this example is also completely obvious. if ( a === 1 ) { b = 2; c = 3; } // Bad: see the above-mentioned link on how the location of opening curly // braces can change how your code behaves. function test() { return { property: true }; }; // Good: not only does the function return the object as-expected, but a // consistent { } style is being used. function test() { return { property: true }; };
4. In conclusion
Ultimately, while there are many different plugin style considerations, the most important thing is for you to focus on developing your own personal style that balances efficiency with readability and maintainability. Your plugin not only needs to work well, but it needs to be easy to update and maintain, because you're going to be the person maintaining it!
Also, be sure to not only experiment with your own code, but also take time to examine the jQuery source as well as the source of other plugins. The more "other" code you see, the better equipped you will be to decide what works and what doesn't work, which will help you make more informed decisions in the end.
About the Author
.jpg).
Find Ben on:
- Twitter - @cowboy
- Ben's Website
- GitHub
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/en-us/magazine/ff696759 | CC-MAIN-2018-26 | refinedweb | 3,075 | 64 |
Overview
Atlassian SourceTree is a free Git and Mercurial client for Windows.
Atlassian SourceTree is a free Git and Mercurial client for Mac.
yamlite is simple limited YAML parser, designed for
minimalistic human friendly YAML subsets. It is a single
file that you can copy into your own project without any
packaging cruft.
Human-friendly YAML subset
It is a text format that you can understand without raising (too much) questions, and which is valid YAML at the same time. For example:
package: yamlite decription: parser for human-friendly YAML subset version: 2.0
Parser evolves over time. Things that are already parsed can be seen in dataset directory.
To see how
yamlite parses your file, use:
python -m yamlite your.yaml
Usage
For command line usage just run it as a module:
python -m yamlite your.yaml
The easiest way is to use API is to copy
yamlite.py
into your project path and import from there.
import yamlite config = None with open('config.yaml', 'rb') as yf: config = yamlite.parse(yf.read()) print(config)
Limitations <a name="limited"></a>
yamlite doesn't do any type autoconversion. All
returned values are strings. For example, the version
field in the following example:
package: yamlite version: 1.2
It will still be string '1.2' after parsing. If you load the same data with PyYAML, it will be float.
In future,
yamlite may include helper to convert
types of known fields, but still probably return
values as strings by default. For simplicity.
Data driven development
Development is data based. First a new example is
added into
dataset/ directory with a name like
01.in.yaml, which contains data that should be
parsed.
01.out.txt in the same dir contains output
expected from
yamlite. There can be other files,
such as
04.out.pyyaml.txt that contain output from
other tools when it differs.
Status
The goal for version 1.0 is to parse the following examples:
app.yaml from GAE
ansible configs
tmuxp
salt
Guarantess
Data-driven develompent means that if your example of
YAML syntax is not added to dataset, there is no
guarantee that
yamlite supports it even if current
version parses it correctly.
Enhance and experiment
Get latest version:
hg clone
Make sure current
yamlite works:
yamlite --test
Generate reports to compare output with PyYAML:
dataset/update.py
Add new sample to dataset/, rerun above commands to
test it and modify
yamlite if necessary.
License
Feel free to apply either public domain or MIT license to this code, which means "attribution is appreciated, but not required".
Links - complete YAML 1.1 parser by Kirill Simonov - Python wiki page about YAML - yamlite development
Changes
0.3 (TBD)
single string
compatibility report in html format
- frames for parser state
- --trace argument dumps parser state to files
0.2 (2013-12-10)
- top level list
- key-value with string starting on the next line
- key-value with list
0.1 (2013-11-25)
- proof-of-concept parser for sequence of key-value pairs
- complete dataset/ + framework scaffolding for easy data driven development | https://bitbucket.org/techtonik/yamlite | CC-MAIN-2017-04 | refinedweb | 512 | 58.89 |
Java Object-oriented programming
last modified July 6, 2020
This part of the Java tutorial is an introduction to object-oriented programming in Java. We mention Java objects, object attributes and methods, object constructors, and access modifiers. Furthermore, we talk about the super keyword, constructor chaining, class constants, inheritance, final classes, and private constructors.
There are three widely used programming paradigms: procedural programming, functional programming, and object-oriented programming. Java is principally an object-oriented programming language. Since Java 8, it has some support of the functional programming.
Object-oriented programming
Object-oriented programming (OOP) is a programming paradigm that uses objects and their interactions to design applications and computer programs.
The following are basic programming concepts in OOP:
- Abstraction
- Polymorphism
- Encapsulation
- Inheritance
The abstraction is simplifying complex reality by modeling classes appropriate to the problem. The polymorphism is the process of using an operator or function in different ways for different data input. The encapsulation hides the implementation details of a class from other objects. The inheritance is a way to form new classes using classes that have already been defined.
Java objects
Objects are basic building blocks of a Java OOP program. An object is a combination of data and methods..
package com.zetcode; class Being {} public class SimpleObject { public static void main(String[] args) { Being b = new Being(); System.out.println(b); } }
In our first example, we create a simple object.
class Being {}
This is a simple class definition. The body of the template is empty. It does not have any data or methods.
Being b = new Being();
We create a new instance of the
Being class. For this we have the
new keyword. The
b variable is the handle to
the created object.
System.out.println(b);
We print the object to the console to get some basic description
of the object. What does it mean, to print an object? When we print
an object, we in fact call its
toString() method. But
we have not defined any method yet. It is because every object created
inherits from the base
Object. It has some elementary
functionality which is shared among all objects created. One of this
is the
toString() method.
$ javac com/zetcode/SimpleObject.java $ ls com/zetcode/ Being.class SimpleObject.class SimpleObject.java
The compiler creates two class files. The
SimpleObject.class is
the application class and the
Being.class is the custom class
that we work with in the application.
$ java com.zetcode.SimpleObject com.zetcode.Being@125ee71
We get a the name of the class of which the object is an instance, the @ character, and the unsigned hexadecimal representation of the hash code of the object.
Java object attributes
Object attributes is the data bundled in an instance of a class. The object attributes are called instance variables or member fields. An instance variable is a variable defined in a class, for which each object in the class has a separate copy.
package com.zetcode; class Person { public String name; } public class ObjectAttributes { public static void main(String[] args) { Person p1 = new Person(); p1.name = "Jane"; Person p2 = new Person(); p2.name = "Beky"; System.out.println(p1.name); System.out.println(p2.name); } }
In the above Java code, we have a
Person class with
one member field.
class Person { public String name; }
We declare a name member field. The
public
keyword specifies that the member field will be accessible
outside the class block.
Person p1 = new Person(); p1.name = "Jane";
We create an instance of the
Person class and set the
name variable to "Jane". We use the dot operator to access the
attributes of objects.
Person p2 = new Person(); p2.name = "Beky";
We create another instance of the
Person class.
Here we set the variable to "Beky".
System.out.println(p1.name); System.out.println(p2.name);
We print the contents of the variables to the console.
$ java com.zetcode.ObjectAttributes Jane Beky
We see the output of the program. Each instance of the
Person
class has a separate copy of the name member field.
Java methods.
package com.zetcode; class Circle { private int radius; public void setRadius(int radius) { this.radius = radius; } public double area() { return this.radius * this.radius * Math.PI; } } public class Methods { public static void main(String[] args) { Circle c = new Circle(); c.setRadius(5); System.out.println(c.area()); } }
In the code example, we have a
Circle class. In the class,
we define two methods. The
setRadius() method assigns a
value to the
radius member and the
area() method
computes an area of the circle from the class member and a constant.
private int radius;
We have one member field in the class.. The dot operator is used to call the method.
public double area() { return this.radius * this.radius * Math.PI; }
The
area() method returns the area of a circle. The
Math.PI is a built-in constant.
$ java com.zetcode.Methods 78.53981633974483
Running the example we get the above output.
Java access modifiers
Access modifiers set the visibility of methods and member fields.
Java has three access modifiers:
public,
protected, and
private. The
public members can
be accessed from anywhere. The
protected members can be accessed
only within the class itself, by inherited classes, and other classes from the same package.
Finally, the
private members are limited to the containing type, e.g. only within its
class or interface. If we do not specify an access modifier, we have a
package-private visibility. In such a case, members and methods are accessible within
the same package.
Access modifiers protect data against accidental modifications. They make the programs more robust.
The above table summarizes Java access modifiers (+ is accessible, o is not accessible).
package com.zetcode; class Person { public String name; private int age; public int getAge() { return this.age; } public void setAge(int age) { this.age = age; } } public class AccessModifiers { public static void main(String[] args) { Person p = new Person(); p.name = "Jane"; p.setAge(17); System.out.println(String.format("%s is %d years old", p.name, p.getAge())); } }
In the above program, we have two member fields: public and private.
public int getAge() { return this.age; }
If a member field is private, the only way to access it is via
methods. If we want to modify an attribute outside the class, the
method must be declared
public. This is an important aspect of
data protection.
public void setAge(int age) { this.age = age; }
The
setAge() method enables us to change the private
age variable from outside of the class definition.
Person p = new Person(); p.name = "Jane";
We create a new instance of the
Person class. Because the
name attribute is
public, we can access it directly.
However, this is not recommended.
p.setAge(17);
The
setAge() method modifies the
age member field. It cannot
be accessed or modified directly, because it is declared
private.
System.out.println(String.format("%s is %d years old", p.name, p.getAge()));
Finally, we access both members to build a string, which is printed to the console.
$ java com.zetcode.AccessModifiers Jane is 17 years old
Running the example we have this output.
The following program shows how access modifiers influence the way members are inherited by subclasses.
package com.zetcode; class Base { public String name = "Base"; protected int id = 5323; private boolean isDefined = true; } class Derived extends Base { public void info() { System.out.println("This is Derived class"); System.out.println("Members inherited:"); System.out.println(this.name); System.out.println(this.id); // System.out.println(this.isDefined); } } public class ProtectedMember { public static void main(String[] args) { Derived drv = new Derived(); drv.info(); } }
In this program, we have a
Derived class which
inherits from the
Base class. The
Base class
has three member fields, all with different access modifiers. The
isDefined
member is not inherited. The
private modifier prevents this.
class Derived extends Base {
The
Derived class inherits from the
Base class. To inherit
from another class, we use the
extends keyword.
System.out.println(this.name); System.out.println(this.id); // System.out.println(this.isDefined);
The
public and the
protected members are
inherited by the
Derived class. They can be accessed. The
private member is not inherited. The line accessing
the member field is commented. If we uncommented the line, the code
would not compile.
$ java com.zetcode.ProtectedMember This is Derived class Members inherited: Base 5323
Running the program, we receive this output.
Java constructor
A constructor is a special kind of a method. It is automatically called
when the object is created. Constructors do not return values and also do not
use the
void keyword.
The purpose of the constructor is to initiate the state of an object.
Constructors have the same name as the class. The constructors
are methods, so they can be overloaded too. Constructors cannot be directly invoked.
The
new keyword invokes them. Constructors cannot be declared
synchronized, final, abstract, native, or static.
Constructors cannot be inherited. They are called in the order of inheritance. If we do not write any constructor for a class, Java provides an implicit default constructor. If we provide any kind of a constructor, then the default is not supplied.
package com.zetcode; class Being { public Being() { System.out.println("Being is created"); } public Being(String being) { System.out.println(String.format("Being %s is created", being)); } } public class Constructor { @SuppressWarnings("ResultOfObjectAllocationIgnored") public static void main(String[] args) { new Being(); new Being("Tom"); } }
We have a Being class. This class has two constructors. The first one does not take parameters, the second one takes one parameter.
public Being() { System.out.println("Being is created"); }
This constructor does not take any parameters.
public Being(String being) { System.out.println(String.format("Being %s is created", being)); }
This constructor takes one string parameter.
@SuppressWarnings("ResultOfObjectAllocationIgnored")
This annotation will suppress a warning that we do not assign our created objects to any variables. Normally this would be a suspicious activity.
new Being();
An instance of the
Being class is created. The no-argument constructor
is called upon object creation.
new Being("Tom");
Another instance of the
Being class is created. This
time the constructor with a parameter is called upon object creation.
$ java com.zetcode.Constructor Being is created Being Tom is created
This is the output of the program.
In the next example, we initiate data members of the class. Initiation of variables is a typical job for constructors.
package com.zetcode; import java.util.Calendar; import java.util.GregorianCalendar; class MyFriend { private GregorianCalendar born; private String name; public MyFriend(String name, GregorianCalendar born) { this.name = name; this.born = born; } public void info() { System.out.format("%s was born on %s/%s/%s\n", this.name, this.born.get(Calendar.DATE), this.born.get(Calendar.MONTH), this.born.get(Calendar.YEAR)); } } public class MemberInit { public static void main(String[] args) { String name = "Lenka"; GregorianCalendar born = new GregorianCalendar(1990, 3, 5); MyFriend fr = new MyFriend(name, born); fr.info(); } }
We have a
MyFriend class with data members and methods.
private GregorianCalendar born; private String name;
We have two private variables in the class definition.
public MyFriend(String name, GregorianCalendar born) { this.name = name; this.born = born; }
In the constructor, we initiate the two data members. The
this
variable is a handler used to reference the object variables from methods. When the names
of constructor parameters and the names of members are equal, using
this
keyword is required. Otherwise, the usage is optional.
MyFriend fr = new MyFriend(name, born); fr.info();
We create a
MyFriend object with two arguments. Then we call
the
info() method of the object.
$ java com.zetcode.MemberInit Lenka was born on 5/3/1990
This is the output of the
com.zetcode.MemberInit program.
Java super keyword
The
super keyword is a reference variable that
is used in subclasses to refer to the immediate parent class object.
It can be use to refer to the parent's a) instance variable, b) constructor,
c) method.
package com.zetcode; class Shape { int x = 50; int y = 50; } class Rectangle extends Shape { int x = 100; int y = 100; public void info() { System.out.println(x); System.out.println(super.x); } } public class SuperVariable { public static void main(String[] args) { Rectangle r = new Rectangle(); r.info(); } }
In the example, we refer to the parent's variable with the
super keyword.
public void info() { System.out.println(x); System.out.println(super.x); }
Inside the
info() method, we refer to the parent's instance
variable with the
super.x syntax.
If a constructor does not explicitly invoke a superclass constructor, Java automatically inserts a call to the no-argument constructor of the superclass. If the superclass does not have a no-argument constructor, we get a compile-time error.
package com.zetcode; class Vehicle { public Vehicle() { System.out.println("Vehicle created"); } } class Bike extends Vehicle { public Bike() { // super(); System.out.println("Bike created"); } } public class ImplicitSuper { public static void main(String[] args) { Bike bike = new Bike(); System.out.println(bike); } }
The example demonstrates the implicit call to the parent's constructor.
public Bike() { // super(); System.out.println("Bike created"); }
We get the same result if we uncomment the line.
$ java com.zetcode.ImplicitSuper Vehicle created Bike created com.zetcode.Bike@15db9742
Two constructors are called when a
Bike object
is created.
There can be more than one constructor in a class.
package com.zetcode; class Vehicle { protected double price; public Vehicle() { System.out.println("Vehicle created"); } public Vehicle(double price) { this.price = price; System.out.printf("Vehicle created, price %.2f set%n", price); } } class Bike extends Vehicle { public Bike() { super(); System.out.println("Bike created"); } public Bike(double price) { super(price); System.out.printf("Bike created, its price is: %.2f %n", price); } } public class SuperCalls { public static void main(String[] args) { Bike bike1 = new Bike(); Bike bike2 = new Bike(45.90); } }
The example uses different syntax of
super to call
different parent constructors.
super();
Here, we call the parent's no-argument constructor.
super(price);
This syntax calls the parent's constructor that takes one parameter: the bike's price.
$ java com.zetcode.SuperCalls Vehicle created Bike created Vehicle created, price 45.90 set Bike created, its price is: 45.90
This is the example output.
Java constructor chaining
Constructor chaining is the ability to call another constructor from a constructor.
To call another constructor from the same class, we use the
this keyword.
To call another constructor from a parent class, we use the
super keyword.
package com.zetcode; class Shape { private int x; private int y; public Shape(int x, int y) { this.x = x; this.y = y; } protected int getX() { return this.x; } protected int getY() { return this.y; } } class Circle extends Shape { private int r; public Circle(int r, int x, int y) { super(x, y); this.r = r; } public Circle() { this(1, 1, 1); } @Override public String toString() { return String.format("Circle: r:%d, x:%d, y:%d", r, getX(), getY()); } } public class ConstructorChaining { public static void main(String[] args) { Circle c1 = new Circle(5, 10, 10); Circle c2 = new Circle(); System.out.println(c1); System.out.println(c2); } }
We have a
Circle class. The class has two constructors.
One that takes one parameter and one that does not take any parameters.
class Shape { private int x; private int y; ... }
The
Shape class is responsible for dealing with the
x and
y coordinates of various shapes.
public Shape(int x, int y) { this.x = x; this.y = y; }
The constructor of the
Shape class initiates the
x and
y coordinates with the given parameters.
protected int getX() { return this.x; } protected int getY() { return this.y; }
We have defined two methods to retrieve the values of the coordinates. The members are private, so the only access possible is through methods.
class Circle extends Shape { private int r; ... }
The
Circle class inherits from the
Shape class. It defines
the
radius member which is specific to this shape.
public Circle(int r, int x, int y) { super(x, y); this.r = r; }
The first constructor of the
Circle class takes three parameters: the
radius, and the
x and
y coordinates. With the
super
keyword, we call the parent's constructor passing the coordinates. Note that the
super keyword must be the first statement in the constructor. The second
statement initiates the
radius member of the
Circle class.
public Circle() { this(1, 1, 1); }
The second constructor takes no parameters. In such a case, we provide some
default values. The
this keyword is used to call the three-parameter
constructor of the same class, passing three default values.
@Override public String toString() { return String.format("Circle: r:%d, x:%d, y:%d", r, getX(), getY()); }
Inside the
toString() method, we provide a string representation
of the
Circle class. To determine the
x and
y
coordinates, we use the inherited
getX() and
getY() methods.
$ java com.zetcode.ConstructorChaining Circle: r:5, x:10, y:10 Circle: r:1, x:1, y:1
This is the output of the example.
Java class constants
It is possible to create class constants. These constants do not belong to a concrete object. They belong to the class. By convention, constants are written in uppercase letters.
package com.zetcode; class Math { public static final double PI = 3.14159265359; } public class ClassConstant { public static void main(String[] args) { System.out.println(Math.PI); } }
We have a
Math class with a
PI constant.
public static final double PI = 3.14159265359;
The
final keyword is used to define a constant. The
static keyword enables to refer the member without creating
an instance of the class. The
public
keyword makes it accessible outside the body of the class.
$ java com.zetcode.ClassConstant 3.14159265359
Running the example we get the above output.
Java toString method
Each object has the
toString() method. It returns a
human-readable representation of an object. The default implementation
returns the fully qualified name of the type of the
Object.
When we call the
System.out.println() method with an
object as a parameter, the
toString() is being called.
package com.zetcode; class Being { @Override public String toString() { return "This is Being class"; } } public class ThetoStringMethod { public static void main(String[] args) { Being b = new Being(); Object o = new Object(); System.out.println(o.toString()); System.out.println(b.toString()); System.out.println(b); } }
We have a
Being class in which we override the default implementation
of the
toString() method.
@Override public String toString() { return "This is Being class"; }
Each class created inherits from the base
Object.
The
toString() method belongs to this object class.
The
@Override annotation informs the compiler that the
element is meant to override an element declared in a superclass.
The compiler will then check that we did not create any error.
Being b = new Being(); Object o = new Object();
We create two objects: one custom defined and one built-in.
System.out.println(o.toString()); System.out.println(b.toString());
We call the
toString() method explicitly on these two objects.
System.out.println(b);
As we have specified earlier, placing an object as a parameter to the
System.out.println() will call its
toString()
method. This time, we have called the method implicitly.
$ java com.zetcode.ThetoStringMethod java.lang.Object@125ee71 This is Being class This is Being class
This is what we get when we run the example.
Inheritance in Java base classes (ancestors).
package com.zetcode; class Being { public Being() { System.out.println("Being is created"); } } class Human extends Being { public Human() { System.out.println("Human is created"); } } public class Inheritance { @SuppressWarnings("ResultOfObjectAllocationIgnored") public static void main(String[] args) { new Human(); } }
In this program, we have two classes: a base
Being class
and a derived
Human class. The derived class inherits from
the base class.
class Human extends Being {
In Java, we use the
extends keyword to create inheritance
relations.
new Human();
We instantiate the derived
Human class.
$ java com.zetcode.Inheritance Being is created Human is created
We can see that both constructors were called. First, the constructor of the base class is called, then the constructor of the derived class.
A more complex example follows.
package com.zetcode; class Being { static int count = 0; public Being() { count++; System.out.println("Being is created"); } public void getCount() { System.out.format("There are %d Beings%n", count); } } class Human extends Being { public Human() { System.out.println("Human is created"); } } class Animal extends Being { public Animal() { System.out.println("Animal is created"); } } class Dog extends Animal { public Dog() { System.out.println("Dog is created"); } } public class Inheritance2 { @SuppressWarnings("ResultOfObjectAllocationIgnored") public static void main(String[] args) { new Human(); Dog dog = new Dog(); dog.getCount(); } }
With four classes, the inheritance hierarchy is more complicated. The
Human and the
Animal classes inherit from the
Being class and the
Dog class inherits
directly from the
Animal class and indirectly from the
Being class.
static int count = 0;
We define a
static variable. Static members
are shared by all instances of a class.
public Being() { count++; System.out.println("Being is created"); }
Each time the
Being class is instantiated, we increase the count
variable by one. This way we keep track of the number of instances
created.
class Animal extends Being { ... class Dog extends Animal { ...
The
Animal inherits from the
Being and the
Dog
inherits from the
Animal. Indirectly, the
Dog inherits from
the
Being as well.
new Human(); Dog dog = new Dog(); dog.getCount();
We create instances from the
Human and from the
Dog classes. We call the
getCount() method of
the
Dog object.
$ java com.zetcode.Inheritance2 Being is created Human is created Being is created Animal is created Dog is created There are 2 Beings
The
Human object calls two constructors. The
Dog object
calls three constructors. There are two
Beings instantiated.
Final class, private constructor
A class with a
final modifier cannot be subclassed. A class with
a constructor that has a
private modifier cannot be instantiated.
package com.zetcode; final class MyMath { public static final double PI = 3.14159265358979323846; // other static members and methods } public class FinalClass { public static void main(String[] args) { System.out.println(MyMath.PI); } }
We have a
MyMath class. This class has some static members and
methods. We do not want anyone to inherit from our class; therefore, we
declare it to be
final.
Furthermore, we also do not want to allow creation of instances from our class. We decide it to be used only from a static context. Declaring a private constructor, the class cannot be instantiated.
package com.zetcode; final class MyMath { private MyMath() {} public static final double PI = 3.14159265358979323846; // other static members and methods } public class PrivateConstructor { public static void main(String[] args) { System.out.println(MyMath.PI); } }
Our
MyMath class cannot be instantiated and cannot be subclassed.
This is how
java.lang.Math is designed in Java language.
This was the first part of the description of OOP in Java. | https://zetcode.com/lang/java/oop/ | CC-MAIN-2021-21 | refinedweb | 3,815 | 51.85 |
The code for this tutorial can be downloaded here: threadworms.py or from GitHub. This code works with Python 3 or Python 2, and you need Pygame installed as well in order to run it.
Click the animated gif to view a larger version.
This is a tutorial on threads and multithreaded programs in Python, aimed at beginning programmers. It helps if you know the basics of classes (what they are, how you define methods, and that methods always have self as the first parameter, what subclasses (i.e. child classes) are and how a method can be inherited from a parent class, etc.) Here's a more in-depth classes tutorial.
The example used is a "Nibbles" or "Snake" style clone that has multiple worms running around a grid-like field, with each worm running in a separate thread.
What are threads and why are they useful?
You can skip this section if you already know what threads are and just want to see how to use them in Python.
When you run a normal Python program, the program execution starts at the first line and goes down line by line. Loops and function calls may cause the program execution to jump around, but it is fairly easy to see from the code which line will get executed next at any given point. You can put your finger on the first line of code in the .py file on the screen, and then trace through the next lines of code that are executed. This is single-threaded programming.
However, using multiple threads is like putting a second finger down on your code. Each finger still moves the same way, but now they are executing code simultaneously.
Actually, they aren't executing simultaneously. Your two fingers are taking turns at which one executes code. Computers with multicore CPUs can actually run multiple instructions simultaneously, but there is a feature of Python programs called the GIL (Global Interpreter Lock) that limits a Python program to one core only.
The Python interpreter will run one thread for a while, and then pause it to run another thread for a while. But it does this so fast that it seems like they are running simultaneously.
You can start dozens or hundreds of threads in your program (that's a lot of fingers). This doesn't automatically make your programs dozens or hundreds of times faster though (all the threads are still sharing the same CPU) but it can make your program more efficient.
For example, say you write a function that will download a file full of names, then sorts the names, and then writes these names to a file on your computer. If there are hundreds of files your program needs to process, you would put a call to this function in a loop and it would handle each file serially: download, sort, write, download, sort, write, download, sort, write...
Each of these three steps use different resources on your computer: downloading uses the network connection, sorting uses the CPU, writing the file uses the hard drive. Also, there are tiny pauses within each of these steps. For example, the server you are downloading the file from may be slow and your computer's Internet connection has bandwidth to spare.
It would be better if you could call this function hundreds of times in parallel by using one thread for each file. Not only would this make better use of your bandwidth, but if some files download sooner than others, the CPU can be used to sort them while the network connection continues to work. This makes more efficient use of your computer.
What makes multithreaded programming tricky?
Of course, in the above case, each thread is doing its own separate thing and doesn't need to communicate or synchronize anything with the other threads. You could just write the simple single-threaded version of the download-sort-write program and run the program hundreds of times separately. (Though it might be a pain to type & click each time to run the program each with a different file to download.)
Many multithreaded programs share access to the same variables, but this is where things can get tricky.
Photo from Brad Montgomery)
Here's a common metaphor that is used: Say you have two robot ticket sellers. Their tasks are simple:
- Ask the customer which seat they want.
- Check a list to see if the seat is available.
- Get the ticket for that seat.
- Cross that seat off the list.
A customer asks Robot A for seat 42. Robot A checks that the seat is available from the list and finds that it is, so it grabs the ticket. But before Robot A can cross the seat off the list, Robot B is asked by a different customer for seat 42. Robot B checks the list and sees that the seat is still available, so it tries to grab the ticket for the seat. But Robot B can't find the ticket for seat 42. THIS DOES NOT COMPUTE, and Robot B's electronic brain explodes. Robot A then crosses seat 42 off of the list.
The above problem happens because although the two robots (or rather, two threads) are executing independently, they are both reading and modifying a shared list (or rather, a variable). Your programs can get very hard-to-fix bugs which are also difficult to even reproduce, since Python's thread execution switching is nondeterministic, that is, done differently each time the program is run. We aren't used to having the data in variables "magically" change from one line to the next just because a different thread was executed in between them.
When the execution switches from one thread to another, this is known as a context switch.
There is also the problem of deadlocks, which is commonly explained using the metaphor of the Dining Philosophers. Five philosophers are sitting around a circular table eating spaghetti but require two forks to do so. There is one fork between each philosopher (for a total of five forks). The method the philosophers use to eat is this:
- Philosophize for a while.
- Pick up the fork on your left.
- Wait until the fork on your right is available.
- Pick up the fork on your right.
- Eat.
- Put the forks down.
- Go back to step 1.
Aside from the fact that they'll be sharing forks with their neighbors (eww), it seems like this method will work. But sooner or later everyone at the table will end up with the fork on their left in their hand and waiting for the fork on their right. But because everyone is holding on to the fork their neighbor is waiting for and won't put it down until they've eaten, the philosophers are in a deadlock state. They will be holding forks in their left hand but never getting a fork in their right hand, so they never eat and never put down the fork in their left hand. The philosophers all starve to death (except for Voltaire who is actually a robot. Without spaghetti, his electronic brain explodes.)
There is also a similar situation called a livelock. This is when no work gets done because the threads are too generous at making a resource available. The best metaphor of this is when two people are walking towards each other down a hall. They step to the side to let the other person walk past, but end up blocking each other. So they both step back to the other side, but end up blocking each other again. They continue doing this until they starve/electronic-brain-explode.
There are a few other problems that can come up with multithreaded programming such as starvation (no seriously, that's what it is called) and generally fall under the label of "Concurrency" in computer science. But we will only treat a simplified case.
Locks
One way to prevent bugs with multithreaded programming is by using locks. Before a thread reads or modifies a shared variable, it attempts to "acquire" a lock. If it can acquire the lock, the thread goes on to read or modify the variable. If the thread cannot acquire the lock, it waits until the lock becomes available.
When the thread is done with the shared variable, it will "release" the lock so that some other thread waiting for the lock can acquire it.
Going back to our robot ticket seller metaphor, this is like having a robot pick up the list (the list is a "lock"), and then reading it the ticket is available, grabbing the ticket, and then crossing out the seat on the list. When the robot puts the list back down, it is "releasing the lock". If another robot needs to pick up the list but it is not there, it will wait until the list is available.
You can cause bugs by writing code that forgets to release a lock. This will cause a deadlock situation since the other threads will hang and do nothing while waiting for a lock to bereleased.
Threads in Python
Okay, let's write a Python program that demonstrates how to use threads and locks. This program is based off of my "Snake" clone in Chapter 6 of my Making Games with Python & Pygame book. Except instead of a worm running around eating apples, we'll just have the worm running around the screen. And instead of just one worm, we will have multiple worms. Each worm will be controlled by a separate thread. The shared variable will have the data structure that represents which places on the screen (called "cells" in this program) are occupied by a worm. A worm cannot move forward to occupy a cell if another worm is already there. We will use locks to ensure that the worms don't occupy the same cell as another worm.
The code for this tutorial can be downloaded here: threadworms.py or from GitHub. This code works with Python 3 or Python 2, and you need Pygame installed as well in order to run it.
Here's a summary of the thread-related code in our threadworms.py program:
import threading
Python's thread library is in a module named
threading, so first import this module.
GRID_LOCK = threading.Lock()
The class Lock in the threading module has
acquire() and
release() methods. We will create a new
Lock object and store it in a global variable named
GRID_LOCK. (Since the state of the grid-like screen and which cells are occupied is stored in a global variable named
GRID. The pun was unintended.)
# A global variable that the Worm threads check to see if they should exit. WORMS_RUNNING = True
Our
WORMS_RUNNING global variable is regularly checked by the worm threads to see if they should quit. Calling
sys.exit() will not stop the program because it only quits the thread that made the call. As long as there are other threads still running the program will continue. The main thread in our program (which handles the Pygame drawing and event handling) will set
WORMS_RUNNING to
False before it calls
pygame.quit() and
sys.exit(). The next time a thread checks
WORMS_RUNNING, it will quit, until eventually the last thread quits and then the program terminates.
class Worm(threading.Thread): def __ init__(self, name='Worm', maxsize=None, color=None, speed=None): threading.Thread.__init__(self) self.name = name
The thread's code must start from a class that is a child of the
Thread class (which is in the
threading module). Our
Thread subclass will be named
Worm since it controls You don't need an
__init__() function, but since our Worm classes uses one we need to call the
threading.Thread class's
__init__() method first. Also optional is to override the name member. Our
__init__() function uses the string
'Worm' by default, but we can supply each thread with a unique name. Python will display the thread's name in the error message if it crashes.
GRID_LOCK.acquire() # ...some code that reads or modifies GRID... GRID_LOCK.release()
Before we read or modify the value in the
GRID variable, the thread's code should attempt to acquire the lock. If the lock isn't available, the method call to
acquire() will not return and instead "block" until the lock becomes available. The thread is paused while this happens. This way, we know that the code after the
acquire() call will only happen if the thread has acquired the lock.
Acquiring and releasing a lock around a bit of code ensures that another thread does not execute this code while the current thread is. This makes the code atomic because the code is always executed as a single unit.
After the thread's code is done with the
GRID variable, the lock can be released by calling the
release() method.
def run(self): # thread code goes here.
A thread starts when the
Worm class (which is a subclass of
threading.Thread) has its
start() method called. We don't have to implement
start() in the
Worm class because it is inherited from the
threading.Thread class. When the
start() method is called, a new thread is created and the code inside the
run() method is executed in this new thread. Do not call the
run() method directly, as this won't create the new thread.
This is important to know: to start the thread call the
start() method, but the code that gets run in the new thread is in
run(). We don't have to define
start() because it is inherited from
threading.Thread. We do need to define
run() since that is where our thread's code will go.
When the
run() method returns (or
sys.exit() is called in the thread), the thread will be destroyed. All threads in a program must be destroyed before the program terminates. The program will still be running as long as there is one running thread.
So when
start() is called, this is when you would place your second finger on the source code in
run() to start tracing the code. Your first finger will continue tracing the code after the line that has the
start() call.
A Simple Multithreaded Example
Before we go into the Threadworm code, let's just look at a dead simple multithreaded program:
import threading TOTAL = 0 class CountThread(threading.Thread): def run(self): global TOTAL for i in range(100): TOTAL = TOTAL + 1 print('%s\n' % (TOTAL)) a = CountThread() b = CountThread() a.start() b.start()
This program defines a new class called
CountThread. When a
CountThread object's
start() method is called, a new thread is created which will loop 100 times and increment the
TOTAL global variable (which is shared between the variables) by
1 on each iteration of the loop.
Since we are creating two
CountThread objects, whichever one finishes last should display
200. Each thread increases
TOTAL by
100 and there are two threads. When we run this program, that's what we see:
100 200
Because the first number is
100, we can tell that probably what happened is that one thread ran through the entire loop before a context switch happened.
However, if we change
range(100) to
range(100000), we would expect the second number to be
200000, since each thread increases
TOTAL by
100000 and there are two threads. But when we run the program, something like this appears (your numbers may be slightly different):
143294 149129
That second number is not
200000! It's quite less than that actually. The reason this happened is because we did not use locks around the code the reads and modifies the
TOTAL variable, which is shared among multiple threads.
Look at this line:
TOTAL = TOTAL + 1
If
TOTAL was set to
99, then you would expect
TOTAL + 1 to evaluate to
99 + 1 and then to
100, and then
100 is stored as the new value in
TOTAL. Then on the next iteration,
TOTAL + 1 would be
100 + 1 or
101, which is stored as the new value in
TOTAL.
But say when
TOTAL + 1 gets evaluated as
99 + 1, the execution switches to the other thread, which is also about to execute the
TOTAL = TOTAL + 1 line. The value in
TOTAL is still
99, so
TOTAL + 1 in this second thread gets evaluated to
99 + 1.
Then, another context switch happens back to the first thread where TO
TAL = 99 + 1 is in the middle of being executed. The integer
100 is assigned to
TOTAL. Now execution switches back to the second thread again.
In this second thread,
TOTAL = 99 + 1 is about to be executed. Even though
TOTAL is now
100, the
TOTAL + 1 in this second thread has already been evaluated as
99 + 1. So the second thread also ends up assigning the integer
100 to
TOTAL. Even though this
TOTAL = TOTAL + 1 has been executed twice (once by each thread), the value in
TOTAL has really only been incremented by
1!
The problem is, the line of code
TOTAL = TOTAL + 1 is not atomic. The context switch can happen right in the middle of the line being executed. We need to use locks around this code to make this an atomic operation.
This new code fixes this problem:
import threading TOTAL = 0 MY_LOCK = threading.Lock() class CountThread(threading.Thread): def run(self): global TOTAL for i in range(100000): MY_LOCK.acquire() TOTAL = TOTAL + 1 MY_LOCK.release() print('%s\n' % (TOTAL)) a = CountThread() b = CountThread() a.start() b.start()
When we run this code, this is what is outputted (your first number might be a little different):
199083 200000
That the second number is
200000 tells us that the
TOTAL = TOTAL + 1 line was correctly executed each of the 200,000 times it was run.
Explaining the Threadworms Program
I'm going to use the threadworms_nocomments.py version of the program since it doesn't have the very verbose comments in it. The line numbers have been included at the front of each line (they are not a part of the actual Python source code). I skip a lot of the commented sections because they are self-explanatory. You don't really need to know Pygame to follow this code. Pygame is only responsible for creating the window and drawing the lines and rectangles on it.
One thing to know is that Pygame uses a tuple of three integers to represent colors. The integers each span from
0 to
255 and represent the RGB (Red-Green-Blue) value of the color. So
(0, 0, 0) is black and
(255, 255, 255) is white and
(255, 0, 0) is red and
(255, 0, 255) is purple, etc.
9. import random, pygame, sys, threading 10. from pygame.locals import * 11. 12. # Setting up constants 13. NUM_WORMS = 24 # the number of worms in the grid 14. FPS = 30 # frames per second that the program runs 15. CELL_SIZE = 20 # how many pixels wide and high each "cell" in the grid is 16. CELLS_WIDE = 32 # how many cells wide the grid is 17. CELLS_HIGH = 24 # how many cells high the grid is
The top part of the code imports some modules our program needs and defines some constant values. Feel free to edit these constant values. Increasing or decreasing the
FPS value doesn't change how fast the worms run around, it just changes how often the screen updates. If you set this value very low, it looks like the worms are teleporting since they move multiple spaces in between screen updates.
CELL_SIZE is how big each square on the screen's grid is (in pixels). If you want to change the number of cells, modify the
CELLS_WIDE and
CELLS_HIGH constants.
20. GRID = [] 21. for x in range(CELLS_WIDE): 22. GRID.append([None] * CELLS_HIGH)
The
GRID global variable will contain data that tracks the state of the grid. It is a simple list of lists so that
GRID[x][y] will refer to the cell at the X and Y coordinate. (In programming, the (0, 0) origin is at the top-left corner of the screen. X increases going to the right (just like in mathematics classes) but Y increases going down.)
If
GRID[x][y] is set to None, then that cell is unoccupied. Otherwise,
GRID[x][y] will be set to an RGB triplet. (This information is used when drawing the grid to the screen.)
24. GRID_LOCK = threading.Lock() # pun was not intended
Line 24 creates a Lock object which our threads' code will acquire and release before reading or modifying
GRID.
26. # Constants for some colors. 27. # R G B 28. WHITE = (255, 255, 255) 29. BLACK = ( 0, 0, 0) 30. DARKGRAY = ( 40, 40, 40) 31. BGCOLOR = BLACK # color to use for the background of the grid 32. GRID_LINES_COLOR = DARKGRAY # color to use for the lines of the grid RGB tuples are kind of hard to read, so I usually set up some constants for them. 33. 34. # Calculate total pixels wide and high that the full window is 35. WINDOWWIDTH = CELL_SIZE * CELLS_WIDE 36. WINDOWHEIGHT = CELL_SIZE * CELLS_HIGH 37. 38. UP = 'up' 39. DOWN = 'down' 40. LEFT = 'left' 41. RIGHT = 'right'
Some more simple constants. I use constants like
DOWN and
RIGHT instead of strings like
'down' and
'right' because if I make a typo using constants (i.e.
DWON) then Python will immediately crash with a
NameError exception. This is much better than if I make a typo like
'dwon' which won't immediately crash the program will cause bugs later on, making it more difficult to track down.
43. HEAD = 0 44. BUTT = -1 # negative indexes count from the end, so -1 will always be the last index
Each worm will be represent by a list of dictionaries like
{'x': 42, 'y': 7}. Each of these dictionaries represents a single body segment of the worm. The dictionary at the front of the list (at index
0) is the head and the dictionary at the end of the list (at index
-1, using Python's nice negative indexing which begins counting from the end) is the butt of the worm.
(In computer science, "head" often refers to the first item in a queue or list, and "tail" refers to every item after the head. So I use "butt" to refer to just the last item. Also, I am silly.)
The above worm would be represented with a list that looks like this:
[{'x': 7, 'y': 2}, {'x': 7, 'y': 3}, {'x': 7, 'y': 4}, {'x': 8, 'y': 4}, {'x': 9, 'y': 4}, {'x': 10, 'y': 4}, {'x': 11, 'y': 4}, {'x': 11, 'y': 3}, {'x': 11, 'y': 2}]
46. # A global variable that the Worm threads check to see if they should exit. 47. WORMS_RUNNING = True
As long as one thread is running, the program will continue to execute. The main thread that does the screen drawing will also detect when the user has clicked the close button on the window or pressed the Esc key, so it needs a way to tell the worm threads to quit. We will code the worm threads to constantly check
WORMS_RUNNING. If
WORMS_RUNNING is set to
False, then the thread will terminate itself.
49. class Worm(threading.Thread): # "Thread" is a class in the "threading" module. 50. def __init__(self, name='Worm', maxsize=None, color=None, speed=None):
Here's our
Worm class. It is a child class of the
threading.Thread class. Each worm can have a name (which appears if the thread crashes, helping us identify which thread crashed), and a size, color, and speed. Default values are provided, but we can specify these ourselves if we want.
56. threading.Thread.__init__(self) # since we are overriding the Thread class, we need to first call its __init__() method.
Since we are overriding the
__init__() method, we need to call the parent classes
__init__() method so that it can initialize all the thread stuff. (We don't need to know how it works, just remember to call it.)
57. 58. self.name = name 59. 60. # Set the maxsize to the parameter, or to a random maxsize. 61. if maxsize is None: 62. self.maxsize = random.randint(4, 10) 63. 64. # Have a small chance of a super long worm. 65. if random.randint(0,4) == 0: 66. self.maxsize += random.randint(10, 20) 67. else: 68. self.maxsize = maxsize 69. 70. # Set the color to the parameter, or to a random color. 71. if color is None: 72. self.color = (random.randint(60, 255), random.randint(60, 255), random.randint(60, 255)) 73. else: 74. self.color = color 75. 76. # Set the speed to the parameter, or to a random number. 77. if speed is None: 78. self.speed = random.randint(20, 500) # wait time before movements will be between 0.02 and 0.5 seconds 79. else: 80. self.speed = speed
The above code sets up a worm with random values for the size, color, and speed unless specific values were specified for the parameters.
82. GRID_LOCK.acquire() # block until this thread can acquire the lock 83. 84. while True: 85. startx = random.randint(0, CELLS_WIDE - 1) 86. starty = random.randint(0, CELLS_HIGH - 1) 87. if GRID[startx][starty] is None: 88. break # we've found an unoccupied cell in the grid 89. 90. GRID[startx][starty] = self.color # modify the shared data structure 91. 92. GRID_LOCK.release()
We need to determine a random starting location for the worm. To make this easier, all worms begin with a length of one body segment and grow until they reach their full maximum size. But we need to make sure that the random location on the grid we come up with isn't already occupied. This involves reading and modifying the
GRID global variable, so we need to acquire and release the
GRID_LOCK lock before doing this.
(As a side note, you might be wondering why we don't have a "global GRID" line at the beginning of this method.
GRID is a global variable and we are modifying it in this method, and without a
global statement Python should consider this a local variable that just happens to have the same name as the
GRID global variable. But if you look closer, we only change values inside the
GRID list of lists, but never the value in
GRID itself. That is, we have code that looks like "
GRID[startx][starty] = self.color" but never "
GRID = someValue". Because we don't actually modify
GRID itself, Python considers the use of the variable name
GRID in this method to refer to the global variable
GRID.)
We keep looping until we've found an unoccupied cell, and then mark that cell as now occupied. After this, we are done reading and modifying
GRID so we release the
GRID_LOCK lock.
(Another side note, if there are no free cells on the grid, this loop will continue to loop forever and the thread will "hang". Since the other threads will continue to run, you might not notice this problem. The new worm will not be created but the rest of the program continues to run normally. However, when you try to quit, since the hanging thread never gets to check
WORMS_RUNNING to know it should quit and the program will refuse to terminate. You will have to force the program to shut down through your operating system. Just be sure not to add more worms than you have space for.)
96. self.body = [{'x': startx, 'y': starty}] 97. self.direction = random.choice((UP, DOWN, LEFT, RIGHT))
The starting body segment is added to the
body member variable. The
body member variable will be a list of all the locations of segments of the body. The direction that the worm is heading in is stored in the
direction member variable.
Technically, since this worm right now only has one body segment that is both the first and last item in the list, the worm's head is the same as its butt.
100. def run(self): 101. while True: 102. if not WORMS_RUNNING: 103. return # A thread terminates when run() returns.
The
run() method is the method that is called when the worm's
start() method is called. The code in
run() is executed in a brand new thread. We will have an infinite loop that causes the worm to continuously move around the grid. The first thing we do on each iteration of the loop is check if
WORMS_RUNNING is set to
False, and if so, we should return from this method.
The thread will terminate itself if we either call
sys.exit() from the thread or when the
run() method returns.
105. # Randomly decide to change direction 106. if random.randint(0, 100) < 20: # 20% to change direction 107. self.direction = random.choice((UP, DOWN, LEFT, RIGHT))
On each move, there's a 20% chance that the worm randomly changes direction. (Although the new direction could be the same as the current direction. But I wanted to write this code out quickly.)
109. GRID_LOCK.acquire() # don't return (that is, block) until this thread can acquire the lock 110. 111. nextx, nexty = self.getNextPosition() 112. if nextx in (-1, CELLS_WIDE) or nexty in (-1, CELLS_HIGH) or GRID[nextx][nexty] is not None: 113. # The space the worm is heading towards is taken, so find a new direction. 114. self.direction = self.getNewDirection() 115. 116. if self.direction is None: 117. # No places to move, so try reversing our worm. 118. self.body.reverse() # Now the head is the butt and the butt is the head. Magic! 119. self.direction = self.getNewDirection() 120. 121. if self.direction is not None: 122. # It is possible to move in some direction, so reask for the next postion. 123. nextx, nexty = self.getNextPosition() 124. 125. if self.direction is not None: 126. # Space on the grid is free, so move there. 127. GRID[nextx][nexty] = self.color # update the GRID state 128. self.body.insert(0, {'x': nextx, 'y': nexty}) # update this worm's own state 129. 130. # Check if we've grown too long, and cut off tail if we have. 131. # This gives the illusion of the worm moving. 132. if len(self.body) > self.maxsize: 133. GRID[self.body[BUTT]['x']][self.body[BUTT]['y']] = None # update the GRID state 134. del self.body[BUTT] # update this worm's own state (heh heh, worm butt) 135. else: 136. self.direction = random.choice((UP, DOWN, LEFT, RIGHT)) # can't move, so just do nothing for now but set a new random direction 137. 138. GRID_LOCK.release()
The above code handles moving the worm one space. Since this involves reading and modifying
GRID, we need to acquire the
GRID_LOCK lock first. Essentially, the worm will try to move one space in the direction that it's direction member variable says. If this cell is beyond the border of the grid or is already occupied, then the worm will change its direction. If the worm is blocked on all sides, then the worm reverses itself so that the butt becomes the head and the head becomes the butt. If the worm still can't move in any direction, then it will just stay put for now.
140. pygame.time.wait(self.speed)
After the worm has moved one space (or at least tried to), we will put the thread to sleep. Pygame has a function called
wait() that does the same thing as
time.sleep(), except that the argument to
wait() is in integer of milliseconds instead of seconds.
Pygame's
pygame.time.wait() and the Python Standard Library's
time.time() functions (and Pygame's
tick() method) are smart enough to tell the operating system to put the thread to sleep for a while and just run other threads instead. Of course, while the OS could interrupt our thread at any time to hand execution off to a different thread, calling
wait() or
sleep() is a way we can explicitly say, "Go ahead and don't run this thread for X milliseconds."
This wouldn't happen if we have "wait" code like this:
startOfWait = time.time() while time.time() - 5 > startOfWait: pass # do nothing for 5 seconds
The above code also implements "waiting", but to the OS it looks like your thread is still executing code (even though this code is doing nothing but looping until 5 seconds has passed). This is inefficient, because time spent executing the above pointless loop is time that could have been spent executing other thread's code.
Of course, if ALL worms' threads are sleeping, then the computer can know it can use the CPU to run other programs besides our Python Threadworms script.
143. def getNextPosition(self): 144. # Figure out the x and y of where the worm's head would be next, based 145. # on the current position of its "head" and direction member. 146. 147. if self.direction == UP: 148. nextx = self.body[HEAD]['x'] 149. nexty = self.body[HEAD]['y'] - 1 150. elif self.direction == DOWN: 151. nextx = self.body[HEAD]['x'] 152. nexty = self.body[HEAD]['y'] + 1 153. elif self.direction == LEFT: 154. nextx = self.body[HEAD]['x'] - 1 155. nexty = self.body[HEAD]['y'] 156. elif self.direction == RIGHT: 157. nextx = self.body[HEAD]['x'] + 1 158. nexty = self.body[HEAD]['y'] 159. else: 160. assert False, 'Bad value for self.direction: %s' % self.direction 161. 162. return nextx, nexty
The
getNextPosition() figures out where the worm will go next given the position of its head and the direction it is going.
165. def getNewDirection(self): 166. x = self.body[HEAD]['x'] # syntactic sugar, makes the code below more readable 167. y = self.body[HEAD]['y'] 168. 169. # Compile a list of possible directions the worm can move. 170. newDirection = [] 171. if y - 1 not in (-1, CELLS_HIGH) and GRID[x][y - 1] is None: 172. newDirection.append(UP) 173. if y + 1 not in (-1, CELLS_HIGH) and GRID[x][y + 1] is None: 174. newDirection.append(DOWN) 175. if x - 1 not in (-1, CELLS_WIDE) and GRID[x - 1][y] is None: 176. newDirection.append(LEFT) 177. if x + 1 not in (-1, CELLS_WIDE) and GRID[x + 1][y] is None: 178. newDirection.append(RIGHT) 179. 180. if newDirection == []: 181. return None # None is returned when there are no possible ways for the worm to move. 182. 183. return random.choice(newDirection)
The
getNewDirection() method returns a direction (one of the
UP,
DOWN,
LEFT, or
RIGHT strings) that is for an unoccupied cell within the grid. If there are no available cells the head could move towards, the method returns None.
185. def main(): 186. global FPSCLOCK, DISPLAYSURF 187. 188. # Draw some walls on the grid 189. squares = """ 190. ........................... 191. ........................... 192. ........................... 193. .H..H..EEE..L....L.....OO.. 194. .H..H..E....L....L....O..O. 195. .HHHH..EE...L....L....O..O. 196. .H..H..E....L....L....O..O. 197. .H..H..EEE..LLL..LLL...OO.. 198. ........................... 199. .W.....W...OO...RRR..MM.MM. 200. .W.....W..O..O..R.R..M.M.M. 201. .W..W..W..O..O..RR...M.M.M. 202. .W..W..W..O..O..R.R..M...M. 203. ..WW.WW....OO...R.R..M...M. 204. ........................... 205. ........................... 206. """ 207. #setGridSquares(squares)
The
setGridSquares() function can be used to draw static blocks on the grid by passing a multiline string. The period characters represent no change, a space character means "set this to be unoccupied" and any other character will represent a static block to place on the grid. You can uncomment line 207 if you want to see the "Hello worm" text written out in blocks.
209. # Pygame window set up. 210. pygame.init() 211. FPSCLOCK = pygame.time.Clock() 212. DISPLAYSURF = pygame.display.set_mode((WINDOWWIDTH, WINDOWHEIGHT)) 213. pygame.display.set_caption('Threadworms')
This is standard Pygame setup code to create a window for our program.
215. # Create the worm objects. 216. worms = [] # a list that contains all the worm objects 217. for i in range(NUM_WORMS): 218. worms.append(Worm()) 219. worms[-1].start() # Start the worm code in its own thread.
This code creates the
Worm objects and then creates their threads by calling the
start() method. The code in each worm's
run() method will begin executing in a separate thread at this point.
221. while True: # main game loop 222. handleEvents() 223. drawGrid() 224. 225. pygame.display.update() 226. FPSCLOCK.tick(FPS)
The main game loop is pretty simple. The
handleEvents() function will be checking if the user is terminating the program and the
drawGrid() function will draw the grid lines and cells to the screen. The
pygame.display.update() function tells the window to update the screen, after which the
tick() method will pause for however long is needed to achieve the framerate specified in
FPS.
229. def handleEvents(): 230. # The only event we need to handle in this program is when it terminates. 231. global WORMS_RUNNING 232. 233. for event in pygame.event.get(): # event handling loop 234. if (event.type == QUIT) or (event.type == KEYDOWN and event.key == K_ESCAPE): 235. WORMS_RUNNING = False # Setting this to False tells the Worm threads to exit. 236. pygame.quit() 237. sys.exit()
The Pygame events can tell us when the user has pressed the Esc key or clicked on the close button for the window. In this case we want to set
WORMS_RUNNING to
False so that the threads will terminate themselves and then the main thread shuts down Pygame and exits.
240. def drawGrid(): 241. # Draw the grid lines. 242. DISPLAYSURF.fill(BGCOLOR) 243. for x in range(0, WINDOWWIDTH, CELL_SIZE): # draw vertical lines 244. pygame.draw.line(DISPLAYSURF, GRID_LINES_COLOR, (x, 0), (x, WINDOWHEIGHT)) 245. for y in range(0, WINDOWHEIGHT, CELL_SIZE): # draw horizontal lines 246. pygame.draw.line(DISPLAYSURF, GRID_LINES_COLOR, (0, y), (WINDOWWIDTH, y))
This code draws the screen based on the values in
GRID. But first it draws the grid lines.
248. # The main thread that stays in the main loop (which calls drawGrid) also 249. # needs to acquire the GRID_LOCK lock before modifying the GRID variable. 250. GRID_LOCK.acquire() 251. 252. for x in range(0, CELLS_WIDE): 253. for y in range(0, CELLS_HIGH): 254. if GRID[x][y] is None: 255. continue # No body segment at this cell to draw, so skip it 256. 257. color = GRID[x][y] # modify the GRID data structure 258. 259. # Draw the body segment on the screen 260. darkerColor = (max(color[0] - 50, 0), max(color[1] - 50, 0), max(color[2] - 50, 0)) 261. pygame.draw.rect(DISPLAYSURF, darkerColor, (x * CELL_SIZE, y * CELL_SIZE, CELL_SIZE, CELL_SIZE )) 262. pygame.draw.rect(DISPLAYSURF, color, (x * CELL_SIZE + 4, y * CELL_SIZE + 4, CELL_SIZE - 8, CELL_SIZE - 8)) 263. 264. GRID_LOCK.release() # We're done messing with GRID, so release the lock.
Because this code reads the
GRID variable, we will first acquire the
GRID_LOCK lock. If a cell is occupied (that is, it is set to an RGB tuple value inside the
GRID variable) the code draws in the cell.
267. def setGridSquares(squares, color=(192, 192, 192)): 268. # squares is set to a value like: 269. # """ 270. # ...... 271. # ...XX. 272. # ...XX. 273. # ...... 274. # """ 275. 276. squares = squares.split('\n') 277. if squares[0] == '': 278. del squares[0] 279. if squares[-1] == '': 280. del squares[-1] 281. 282. GRID_LOCK.acquire() 283. for y in range(min(len(squares), CELLS_HIGH)): 284. for x in range(min(len(squares[y]), CELLS_WIDE)): 285. if squares[y][x] == ' ': 286. GRID[x][y] = None 287. elif squares[y][x] == '.': 288. pass 289. else: 290. GRID[x][y] = color 291. GRID_LOCK.release()
The setGridSquares() can write static blocks to the grid and was explained previously.
294. if __name__ == '__main__': 295. main()
The above is a Python trick. Instead of putting the main code in the global scope, we put it into a function named
main() which is called from the bottom. This guarantees that all the functions have been defined before the code in
main() runs. The
__name__ variable is only set to the string
'__main__' if this program was run itself, as opposed to imported as a module by another program.
Summary
That's it! Multithreaded programming is fairly simple to explain, but it can be tricky to understand how to get your own multithreaded programs to work correctly. The best way to learn is to practice by writing your own programs.
Actually, the way we have our code set up, even if we got rid of the locks it would still run almost perfectly. Nothing would crash, although there would sometimes be the case where two worms are approaching the same cell and end up both occupying it. They would then seemingly move through each other. Using locks ensures that only one worm can occupy a cell at any time.
Good luck!
8 thoughts on “Multithreaded Python Tutorial with the "Threadworms" Demo”
Someone on Reddit correctly pointed out that this single lock for GRID is inefficient, since one worm acquiring the lock blocks everyone else from moving.
Another approach would be to make GRID_LOCK a list of lists like GRID, and then have one lock for each cell on the board. This way, a worm thread is only blocked from acquiring the lock if it is trying to move to the same cell as another worm.
Hello,I'm learning multithread programming from this post.The concept in this post seams easy to understand and quite funny.You're so brilliant!
Hello,I'm new to python multithread programming.I can't understand the example which the out put is 199083 20000.I think I have not got the idea of
threading lock.You defined MY_LOCK outside the class and then used it directly
inside the class.I don't know how it works.Please help,thanks.:P
Hello --
I got a lot out of your PyGame tutorials! Thanks!
I'm going to read and re-read this article after I'm out of work, but I was wondering about this threading business because I have recently encountered two issues with PyGame, both of which I *think* have to do with this threading stuff but it is hard for me to say.
First off - no, I did not try to implement threading myself. :P I only noticed that PyGame seems to makes independent use of threading.py after running cProfile on the bigger of two projects that'd been giving me grief.
In a situation where PyGame seems to be independently determining where and when to implement threading, do you have any insight as to how to ensure that things occur in the 'intended order', or how PyGame decides to start its own threads? I've encountered some weird runtime stuff that I only know how to resolve by writing what my inexperienced eyes regard as bad (read: repetitive) code, or some one-liners that are not *terrible* but not really as readable as I'm used to. example:
Anyway, I don't mean to send you on a goose chase -- if you've any insight into how PyGame internally determines when to send something off on its own thread or when it decides not to (or if I'm totally, totally wrong on my assumption), I'd appreciate it!
Thank you for the tutorial. Good theory discussion with working code.
Awesome article.
I think there's a minor typo here: "The reason this happened is because we did not use locks around the code the reads and modifies the TOTAL variable"
I thought this program is a really good explanation of multi-threaded programming, so I typed it all in, then corrected my mistakes, but it will not run. I am using Python3 and get an error before the board contents are drawn:
NameError: global name 'Worm' is not defined.
This arises from the line:
worms.append( Worm) ) - within Main when starting the threads. I do not appear to have made a typing error, so is the problem to do with using Python3?
OK, I solved the problem: my typing, of course. I had indented the 'main' module, so Python presumably considered it part of 'Worm'. | http://inventwithpython.com/blog/2013/04/22/multithreaded-python-tutorial-with-threadworms/?wpmp_switcher=mobile&wpmp_tp=2 | CC-MAIN-2015-32 | refinedweb | 7,393 | 73.47 |
7. The PopLibs libraries
The PopLibs libraries provide application-level functions that can be used in programs for the IPU. The available libraries are listed in the table below.
Examples using the library functions can be found in the Graphcore GitHub tutorials repository, including a tutorial on PopLibs operations.
For details of all the functions in the PopLibs libraries, see the Poplar and PopLibs API Reference.
7.1. Using PopLibs
The PopLibs libraries are in the
lib directory of the Poplar installation. Each
library has its own include directory and library object file. For
example, the include files for the
popops library are in the
include/popops
directory:
#include <include/popops/ElementWise.hpp>
You will need to link the relevant PopLibs libraries with your program, in addition to the Poplar library. For example:
$ g++ -std=c++11 my-program.cpp -lpoplar -lpopops
Some libraries are dependent on other libraries, which you will also need to link with your program. See the Poplar and PopLibs API Reference for details. | https://docs.graphcore.ai/projects/poplar-user-guide/en/latest/poplibs.html | CC-MAIN-2022-33 | refinedweb | 168 | 56.86 |
In our ongoing training series, a number of questions come up each time, I list them out with their respective answers below!
Couchbase 101 - Architecture, Installation and Configuration
My Ruby based load generator can be downloaded here:
Q: We're using 2.0 in production, what's the best practice to upgrade to 2.2?
A: You can upgrade your cluster in three different ways. The first is the Swap Rebalance online upgrade and is a great way to maintain uptime of your cluster while also upgrading. To do a swap rebalance, you add an equal number of new nodes running Couchbase 2.2 as your current cluster size, but before rebalance, remove the Couchbase 2.0 nodes. When you rebalance, since you are adding and removing the same number of nodes it will be the most efficient. Read more about it here: You can also do an offline upgrade by using cbbackup/cbrestore from the old cluster to the new, or you can use cbtransfer (but you have cease operations that create data before transferring!)
Q: Is Couchbase Server free or do you need licenses?
A: Couchbase Community is free for development and production for any number of nodes. Couchbase Enterprise is free for development, any number of nodes, and up to 2 in production. Beyond 2 nodes in production requires license, but our licensing also includes Enterprise Support bundled with it!
Q: Are the application servers physical devices or can they be VM's?
A: Applications servers and Couchbase servers can both be physical machines or virtual machines.
Q: Could Couchbase be used as an alternative to Clearquest/Clearcase or is this product strictly used for document control?
A: Couchbase is a data store that you build applications on top of, Clearquest/Clearcase are applications that are built on data stores, so comparing with Couchbase is not really "apples-to-apples".
Q: Can Couchbase be monitored with SNMP? Is it possible to integrate Couchbase monitoring with Solarwinds?
A: You can use SNMP to monitor the server itself just like any other machine, but Couchbase itself doesn't have SNMP integration as of now. As far as I know I am not aware of a standard out of the box integration with Solarwinds, but I imagine it wouldn't be difficult if you can extend Solarwinds to poll http/JSON for information and have custom triggers.
Q: Does couchbase support fail-over to a standby couchbase node? How will the stored data synchronize with a standby couchbase cluster?
A: We don't have an auto-failover to a standby node, mostly because failover involves promoting replica partitions to active partitons. If you were to use a standby node you'd have to have a copy of all the data in the cluster on it because you don't know which node (and which partitions) are going to be inaccessible/fail. This wouldn't make sense to do, instead we have replica partitions, and in the case of a failure, failover will promote replica partitions to be active. If you are maintaining an entire separate cluster on standby (and using Cross Data Center Replication (XDCR) to replicate your active cluster data to it), you would have to script your own logic for deciding when to swap clusters.
Q: Can the metadata values (i.e id's) be automated by the CB system (i.e. - auto count for id's), or does it pass that responsibility to the application?
A: All id's (keys) are the application's responsibility, there aren't mechanisms built into Couchbase for generating ID's. However, you can use Atomic Counters to act like IDENTITY columns in RDBMS's. Check out the Couchbase 103 webinar for more info on some patterns.
Q: Can we store mp3 or any Audio or Video files in Couchbase?
A: Of course you can store anything in Couchbase, simple data types, JSON and binary data of any type (MP3, JPEG, PNG, etc.). You are only limited by a 20MB per "document" limit. However, video files tend to be quite large, in which case, you'd be better served to use a CDN system designed for large files and streaming them to large audiences, and store the asset metadata for the file (like it's title, url to stream it, etc.) in Couchbase.
Q: How much RAM should we leave for OS?
A: It depends, if you are using Views heavily then it would be prudent to allocate more RAM for filesystem cache. We generally recommend leaving about 40% of available RAM to OS (so configure Couchbase to use 60%) and that gives good performance all around. If you are not using Views then you can allocate more RAM to Couchbase. It might be good to reference the sizing guidelines in this blog post:
Q: What is a high number of OPS on a 16GB 4 core system?
A: Another "it depends", it's going to be strongly influenced by the network speed and ability to send binary operations through the wire to Couchbase. A 16GB 4-core box on Amazon AWS won't be the same as a physical box connected to the load generator(s) with 10GigE's. You won't see this kind of performance, on AWS! But it goes to show that it's not really Couchbase itself limiting the ops/s, but rather networking and ability to deliver operations to Couchbase through the binary sockets.
Q: How many replicas do you recommend?
A: Generally most people are comfortable with just 1 replica, however there are some that want the safety of 2 replica's, I am not aware of a customer that uses 3 replica's. Of course you need to beef up your servers for more replica's with more allocated RAM and potentially CPU as well if you are going to index the replica's as well. Ultimately that decision has to be yours as you are most aware of the data and importance of securing the data against any data loss, etc.
Q: For sdk client connections, do we need to add all the server ips manually?
A: There are a number of ways to handle this, through DNS for instance where you have your app servers always connect to a CNAME or A record, and list all the cluster machines (or auto-register them) with the A record. Or you can put the IP's in a config file that is updated across the servers (or centrally located), or type in the ip's into your application startup code, etc.
Q: Concerning the OS it's avalailable for Ubuntu. Is there any problem if using other distribution like Debian?
A: I believe this is fine, but I haven't tried it myself. The OS's that are listed on the download page are also the ones that are heavily tested. I know one of our engineers got Couchbase working on Joyent SmartOS, but it's not an official download, etc.
Q: Is metadata not persisted?
A: Metadata is also persisted to disk of course, but it is also always kept in RAM as well. Documents will be in RAM if there is enough RAM available in the bucket (across the cluster) to contain the values. If not, Not Recently Used (NRU) is used to eject document values to disk.
Q: What is the use of IO workers? Why is it divided when we add new nodes?
A: IO workers are used to read/write from disk, you can increase the number of threads (workers) in your bucket configuration depending on your capacity to do so (if you can only handle 4 workers, setting it to 8 won't change performance). When you add more nodes to your cluster, you increase your IO workers linearly, with each new node adding the same amount of IO workers (for their own IO). They are not "divided" they are allocated per node.
Q: Did you say, "High water mark has change recently from 80% to 90%"? What about low water mark? is it still 60% or changed?
A: These are actually configurable parameters, the default settings are roughly 80% for low watermark and 90% for high watermark. At the low watermark, ejection of replica partition data from RAM will begin, and at the high watermark ejection of active partition data from RAM will occur. These are also configurable parameters at the bucket level, see:
Q: How can I delete a data bucket?
A: In the Admin interface you can delete a bucket by clicking on Data Buckets in the top nav, click on the triangle next to the bucket name to expand, click the Edit button on the right side, and at the bottom of the modal dialog that pops up there is a Delete button. You can also delete buckets programmatically from the SDK's.
Q: How does Couchbase relate to Apache CouchDB?
A: The founders of CouchDB (Damien Miller and JChris Anderson) left the Apache CouchDB project and joined/merged with NorthScale/Membase as Founders of Couchbase along with Steve Yen and Dustin Sallings (of Northscale/Membase). There are many similarities in the Views Map-Reduce style and query syntax that will be familiar to CouchDB users, however, there are also many important differences. Couchbase is an independent for profit open-source company with it's own independent code base that is not tied to CouchDB in any way. The binary CRUD operations however resemble memcached/membase rather than anything CouchDB related. They certainly could have picked a less confusing name...
Q: How does the architecture deal with out-of-balance partitions?
A: Actually our hashing and partition strategy has shown over many many years to be very well distributed which is why we are still using it.
Q: How are node failures handled?
A: There are two ways to handle node failures, the first is by enabling auto-failover. In auto-failover if a node is unreachable for 30 seconds then that node will automatically be failed over and replica's will be promoted to active. The alternative is to manually failover a node (or script it to be automatic but based on your own monitoring solution), which can give you the flexibility to decide all the parameters and timing for node failovers.
Q: I tried to install couchbase enterprise2.2.0 server on fedora17 but got failures for libcrypto.so and libssl.so, how to I fix this?
A: Yes because of an erlang dependency in the erlang core library you need to yum install openssl098e
Q: What does acid means for Couchbase?
A: Couchbase supports ACID "transactions" on a per-document level. You can use either CAS (Check and Set/Compare and Swap) for optimistic concurrency or use GetAndLock to actually lock a document for pessimistic concurrency scenarios. Transactions are generally much more required in Normalized RDBMS data stores. The reason is because of Normalization, data structures are often broken up into many different tables, without Transactions, data integrity collapses quickly. In NoSQL scenarios like Couchbase, since data is far less normalized, transactions are generally less necessary than in the RDBMS world. Using the concurrency model you can create transactions. We also have durability operations where you can ensure that data has made it to a replica and/or on disk.
Q: If one of the nodes in the cluster goes down, how does map on client appserver get updated and to the data in that node?
A: When a failover is triggered (either automatically or manually) this promotes the replica partitions to active. If a node gets a CRUD operation for a partition number that it does not own, it returns a "Not My VBucket" error to the sdk client, the sdk client knows how to handle this. This error indicates that the cluster map is out of date/out of sync and it automatically requests a new one from the persistent HTTP connection. There are only 2 scenarios where a cluster map changes, on rebalances and failovers. So those are the times where topology changes and cluster map is changed, and the client will see that on the first operation that returns the "Not My VBucket" error.
Q: Is 20 GB hard limit on disk for each server?
A: There is no storage limit, what you might have misheard was that there is a 20MB limit per document value in Couchbase.
Q: Is the append only disk writes similar to journaling and what OS file system uses?
A: Yes, It is a similar strategy but in our own format.
Q: Is there a way to monitor free disk size? will it send pager or email notification when it reaches thresholds (say 80% full)? Same case for CPU / MEM usage?
A: We don't have these sort of notification systems built in, but it's certainly trivial to integrate your own system for doing this. These are more like VM/Computer monitoring rather than Couchbase specific ones, but you could easily create an integration for your own custom parameters. All the information in the graphs of the Admin console are available as JSON with http requests.
Q: What is a bucket?
A: A bucket is a "database" a collection of data. It also is a namespace for the data, so all keys need to be unique within a bucket. It also acts as a namespace for Views, design documents and views can only access data within the bucket they are defined in.
Q: In the case there are multiple couchbase server and multiple application server, can all the application server connect to the same couchbase server (same ip address)? In this case the load balancing is automatically done by couchbase or is better to distribuite connections between application servers?
A: Great question, it's important to actually NOT put a load balancer between the application servers and Couchbase. Because of the key-hash partitioning, data is already automatically distributed across the Couchbase cluster. The app servers will connect and interact with the Couchbase cluster nodes directly and because of the partitioning be already "load balanced" in the sense that they will be doing CRUD operations across the cluster based on the hashing of the keys. The app servers and sdk clients will maintain open connections to each node in the Couchbase cluster, and generally, a single shared connection is all that is needed for each app server.
Q: What happens when a document is being accessed by client apps while the rebalancing is in progress of that document containing bucket?
A: All operations continue as normal during rebalance, and normal operations are prioritized over rebalancing. This means that it will continue to rebalance while you are doing operations. Of course, generally speaking, it's best not to initiate a rebalance during peak usage! However, it depends on what kind of usage level and configuration whether it might cause a slowdown or not. In most cases it's unnoticeable.
Q: What is the similar BLOB type in Couchbase?
A: You can store binary data directly as a value using the SDK, since we don't have defined or enforced schema, you don't need to specify that it is a BLOB. The BLOB is simply the value of the "document".
Q: When I perform a set operation, the document first goes to the RAM and from then it is sent to the disk write queue and replication queue. what if the node crashes when the document is on the RAM?
A: Whenever there is a failure in any system of any type (any type of database, app server, mobile phone, etc) there is always a potential for data loss. All strategies are attempts to minimize it as best as possible, but in this exact scenario, yes, it is possible to make it into RAM but not make it into disk-write-queue and/or replication queue if the machine crashes hard at that exact moment between storing and RAM and inserting into the queues.: | http://blog.couchbase.com/couchbase-101-q-and-a | CC-MAIN-2015-35 | refinedweb | 2,672 | 60.95 |
I:
FRDM-KL27Z Board
Compared to the KL25Z, the KL27Z board has only half of the flash size (64 KByte instead of 128 KByte), but runs at same frequency (48 MHz ARM Cortex M0+) and has same amount of SRAM (16 KByte). But it does have two push buttons which I miss on the FRDM-KL25Z.
The FRDM-KL25Z is sold for CHF 14.82 by Farnell, and the FRDM-KL27Z costs CHF 20.07! So this is a big price increase. But I kind of like that board as it comes with the jumpers populated and is one of the newer Kinetis devices with bootloader and crystal-less USB operations.
The KL27Z is supported by the new Kinetis SDK v2, but that SDK does not include Processor Expert :-(. Porting existing KL25Z applications to the KL27Z is easy: one of the main advantages of Processor Expert is that it makes porting from one microcontroller to another one a matter of a few mouse-clicks.
💡 The KL27Z is supported by the Kinetis SDK v1.3 which *does* include SDK Processor Expert components. But these components are incompatible with any previous non-SDK components, and I rather want to be on the latest SDK which is v2.0 now. Porting later things from v1.3 to v2.0 would be time-consuming, as the API between the SDKs are very different (or better to say: completely incompatible). On the plus side, the SDK v2.0 is easier to use, so I want to go with that one and not spending time on the v1.3.
This post is about the usual starting points for embedded projects: blink an LED 🙂 As IDE I’m using Eclipse Luna in Kinetis Design Studio V3.1.0.
Kinetis SDK with Kinetis Expert Portal
See this article about how to use the Kinetis SDK v2 with Kinetis Design Studio. On I have created a new configuration for the FRDM-KL27Z and built the SDK package for the board (see “First NXP Kinetis SDK Release: SDK V2.0 with Online On-Demand Package Builder“):
I have placed content (unzipped) into the following folder on my disk:
C:\nxp\KSDK\SDK_2.0_FRDM-KL27Z
Project Creating
In Kinetis Design Studio, I use the wizard menu to create a new project:
I have added the location of the SDK and select it for my new project:
In the next step I select the board with minimal drivers:
Then finish creating the project. This creates my new blinky project:
Build and Debug
Click on the Build hammer to build the debug configuration:
This should build without errors, so I can debug it:
💡 make sure your debug configuration has not the problem described here “Solving “Launching: Configuring GDB Aborting Configuring GDB”“
- Select project
- Press debug button
- Select debug connection
- Press OK
💡 The FRDM-KL27Z comes with the P&E OpenSDA firmware loaded by default.
With this, I’m successfully debugging the FRDM-KL27Z board the first time 🙂
Blink!
That project does not do anything, so I have to add the code to blink the LED. Looking at the schematics shows the following pins are used:
- Red: PTB18
- Green: PTB19
- Blue: PTA13
So I need to
- Enable the clocks for Port A and B. Without the peripheral clocked, it will not work or even will create a fault when I try to use it.
- Mux the pins. Every pin can be routed (or muxed) to different ports.
- Configure the pins. This configures the pin for the specified operation (e.g. input or output).
Later I can use the pins in the application e.g. to turn the LED on or off.
Init Pins
In main() there is already a call to BOARD_InitPins():
#include "board.h" #include "pin_mux.h" #include "clock_config.h" /*! * @brief Application entry point. */ int main(void) { /* Init board hardware. */ BOARD_InitPins(); BOARD_BootClockRUN(); BOARD_InitDebugConsole(); /* Add your code here */ for(;;) { /* Infinite loop to avoid leaving the main function */ __asm("NOP"); /* something to use as a breakpoint stop while looping */ } }
In the project created this calls the BOARD_InitPins() inside pin_mux.c:
#include "fsl_device_registers.h" #include "fsl_port.h" #include "pin_mux.h" /******************************************************************************* *); }
Currently it does initialize the UART (which I’m not interested in for now). It already enables the clock for port A with
/* Ungate the port clock */ CLOCK_EnableClock(kCLOCK_PortA);
Which uses the following interface from fsl_clock.h:
void CLOCK_EnableClock(clock_ip_name_t name);
So I add this to enable the clock for port B:
CLOCK_EnableClock(kCLOCK_PortB); /* enable clocks for port B */
Next I set the muxing for each pin and set the ‘mux’ for GPIO function. fsl_port.h has the following interface to mux a pin:
void PORT_SetPinMux(PORT_Type *base, uint32_t pin, port_mux_t mux);
To mux all three LED pins I use */
Next, the pins are configured as output pins. For this I need a configuration struct like this:
static const gpio_pin_config_t LED_configOutput = { kGPIO_DigitalOutput, /* use as output pin */ 1, /* initial value */ };
The first entry in the struct configures the pin as output pin, and the second entry is the initialization value. As the LEDs are connected with the cathode side to the microcontroller pin, writing a ‘1’ (logical high) will have the LED turned off.
Because that struct and interface is in a header file, I have to include it:
#include "fsl_gpio.h" /* include SDK GPIO interface */
The following interface in fsl_gpio.h is used to initialize a pin:
void GPIO_PinInit(GPIO_Type *base, uint32_t pin, const gpio_pin_config_t *config);
To initialize all pins I can use: */
I have highlighted the needed lines for the 3 LEDs below:
#include "fsl_device_registers.h" #include "fsl_port.h" #include "pin_mux.h" #include "fsl_gpio.h" /* include SDK GPIO interface */ static const gpio_pin_config_t LED_configOutput = { kGPIO_DigitalOutput, /* use as output pin */ 1, /* initial value */ }; /******************************************************************************* *); /* additional clock and configuration for RGB LEDs (PTA13, PTB18 and PTB19) */ CLOCK_EnableClock(kCLOCK_PortB); /* enable clocks for port */ }
Blinking the LEDs
To blink the LEDs I can use the following API from fsl_gpio.h:
void GPIO_ClearPinsOutput(GPIO_Type *base, uint32_t mask); void GPIO_SetPinsOutput(GPIO_Type *base, uint32_t mask); void GPIO_TogglePinsOutput(GPIO_Type *base, uint32_t mask);
With above functions I can put the pin either low, high or toggle it. The first parameter is a pointer to the port, followed by a bitmask for the pin (or multiple pins if they are on the same port).
To slow down the blinking, I add a very simply delay function:
static void delay(volatile uint32_t nof) { while(nof!=0) { __asm("NOP"); nof--; } }
💡 I have marked the parameter with ‘volatile’ to prevent compiler optimization, otherwise the compiler generate very fast code for that delay function.
With this, I can add my blinky LED functionality to the main() loop:
int main(void) { /* Init board hardware. */ BOARD_InitPins(); BOARD_BootClockRUN(); BOARD_InitDebugConsole(); /* Add your code here */ for(;;) { GPIO_ClearPinsOutput(GPIOA, 1<<13u); /* blue led on */ delay(1000000); GPIO_SetPinsOutput(GPIOA, 1<<13u); /* blue led off */ delay(1000000); GPIO_ClearPinsOutput(GPIOB, 1<<18u); /* red led on */ delay(1000000); GPIO_SetPinsOutput(GPIOB, 1<<18u); /* red led off */ delay(1000000); GPIO_ClearPinsOutput(GPIOB, 1<<19u); /* green led on */ delay(1000000); GPIO_SetPinsOutput(GPIOB, 1<<19u); /* green led off */ delay(1000000); } for(;;) { /* Infinite loop to avoid leaving the main function */ __asm("NOP"); /* something to use as a breakpoint stop while looping */ } }
Running it on the board, and I have a nice ‘blinky’ board application 🙂
Summary
I kind-of like the FRDM-KL27Z board which is supported by the SDK v2. But the SDK v2.0 does not come with Processor Expert which makes porting all the existing projects to the board difficult and time-consuming. And the price of the board is much higher (CHF 20) than the FRDM-KL25Z (CHF 15). I like the two push buttons and the pre-installed jumpers, but that’s a big price add-on with for half of the FLASH memory. And the FRDM-KL25Z is currently better supported by software and tools.
So while I kind a like the board, I’m not sure if I should switch to that board. Students have to pay more if they want to keep the board, they have to learn the new SDK v2 (easier than SDK v1.3, but still more time), and they will be slower and less productive than before with using Processor Expert. With Processor Expert it is a matter of minutes to run a blinky LED application on a brand new board. As someone said: “faster than you can eat a slice of pizza”). Now bigger slices of pizza or needed? Sounds more like to change the type of of lunch? Not sure what will be healthier ;-).
On the bright side, the SDK v2 is much cleaner and easier to use compared to SDK v1.3. Using the SDK v1.3 was not reasonable before, now the SDK v2 could be considered. What I’m thinking about is that there could be some ways to continue using Processor Expert with the SDK v2: to combine the best of two worlds? Maybe worth to explore?
The project created in this tutorial is available on GitHub here.
Happy Blinking 🙂
Links
- FRDM-KL27Z Board:
- NXP Kinetis Expert:
- How to use KDS v3 with SDK v2:
Brilliant, thanks Eric. I was just sitting down to start playing with my new KL27Z and your blog post appeared at the same time 🙂
It is a shame they (NXP) have dropped Processor Expert. I wonder why they made that decision…
Hi Geoff,
ah, good timing then :-).
While I liked my FRDM KL25Z board, I’ve moved my applied electronics class to the Teensy LC board. It is an even cheaper board that can be easily plugged into a breadboard, and the Teensyduino development system, while much less featureful than the Kinteis SDK is much easier to use. (I wasn’t using Kinetis SDK with the FRMD KL25Z board, but MBED.ORG, which was rather irritating to use as a web-only application with a crummy and slow interface.)
Yes, I looked at the Teensy and Teensy LC. I have used the Teensy in a few projects, but the fact that you cannot debug it (there is no SWD/JTAG header) removes it from the list for future projects. I don’t like if development tools like Teensyduino have to be used. For your classes things are probably different, but in my classes students need to learn real tools (and not things like mbed or Teensyduino): they need to be able to develop safety critical applications and printf() style debuggging won’t help or will be catastrophic in my view.
I like the Teensy because they are bread board friendly, but again, you cannot debug it. That’s why we came up with the tinyK20 board () which solves that problem.
The Kinetis SDK v2 got better now compared to the V1.x, so I’ll give it a try over the next months to explore it more.
What if there where a series of low cost boards available for Kinetis that had no on board Jtag converter or other superfluous add-ons such as accelerometers & RGB leds. One only needs to purchase a J-Tag adapter or two for around $60US each. If such boards could fit into a breadboard as your TinyK20 board does perhaps that would be the way to go. Perhaps these boards could be produced for $5 to $7US each depending on the MCU fitted. If Texas Instruments can sell MSP432 boards for $4.32US there is a challenge for NXP …
Hi Chad,
yes, I agree with you: the extra components (which might not be used at all) adds up to the board costs and especially board size (which increases costs too).
Having the debug adapter separate is the way to go for me, and as a low cost option it does not have to be expensive: NXP has the LPCLink () which is sold for CHF 20, and in my view that one could be simplified and less expensive too. I’m ok to have very low cost run control device like this, I still want to have a good solid run control device like a Segger or P&E because they are usually faster and can do difficult jobs better.
As for the price point you mention: I think it is all about volume. And to me, there are too many Freedom boards so there is not enough volume for each. Maybe with the exception of the FRDM-KL25Z as this one is very popular, and it is one of the least expensive (best bang for the buck). They could shrink the package (needs less space in the warehouse, easier to ship). But what I think adds to the costs that they are sold through Farnell/Mouser/etc: they must have some marging too. If that $4.32US matches the real costs or if this cost is reduced with some marketing money, I don’t know. Compared to the $5US for a Raspberry Pi Zero () (which does not cover the costs I believe) I think there might not be much cost reality in the market? I’m ok to pay a valid price for a board, if the features match my needs, and if I can use it with my software and tools.
Hi Erich,
It is good to see a simple example like this (or extremely complicated depending on your point of view) for KSDK2.0.
How can anyone in NXP think that this is progress compared to using Processor Expert. Do any of the decision makers have a grasp of this?
It truly would be fantastic to have PE and KSDK 2.0 working together. For the time being I am using PE and 1.3 reasonably successfully, but know that I will need to move on at some point.
Best Regards
Jim
Hi Jim,
“have PE and KSDK 2.0 working together”: what I’m considering are some ways how I could transfer my work with the components to the SDK v2 world. I have done some experiments over the week-end, and it is promising: I can use things like LED, GenericI2C, Wait, Utility, GenericBitIO and other components now with the SDK v2. However, this only applies to the McuOnEclipse components, and is more about re-using the IP/Sources with the SDK. It won’t solve the problem that there is no CPU component with the SDK which would know everything about the hardware. I belive there is no workaround for this. But at least I could continue to use all the drivers and software, and possibly migrate easier to the ‘new world’.
Erich
Opinions:
I feel like with the merger between NXP and Freescale, the “New NXP” hasn’t spent the time on the education and hobbyist market at the moment. Once all the fundamentals with sdk’s, board, and chip designs are all worked out to strengthen other market shares, then they will come back to a different version of Processor Expert.
I assuming KDS will merge with LPCXpresso, or one will get phased out, and KSDK2.0 is just a transition phase for older freescale chips and boards to easily transfer to the next IDE.
Excellent, thanks Eric.
Using your blog + PE knowledge I implemented this RGB_toggling on K64F.
What will happen with all PE goodies like complex peripherals support or “Component Library “ (like LCD component)?
Each individual will have to develop it by himself for each platform?
All new MCU /SOC’s specs are covered by hundred/thousand page manuals, tool like Processor Expert is must.
Processor Expert was one of the Freescale advantages. In my opinion aborting Processor Expert without providing any alternative will be NXP strategic mistake.
Thanks again for your helpful and excellent blogs as a beginner I use them a lot.
Hi Shaul,
Thanks for your feedback, appreciated 🙂
That’s exactly the reason why the students any myself like Processor Expert: it basically saves me a lot of time reading all the referencen manuals. Up to the point that the reference manuals are not clear or even wrong, but I get working code out of Processor Expert. As for the components: All what I try with experiments now is how to get my components (LCD, RNet, BitIO, etc) moved over to the SDK world. You might see where I am at with looking at my most recent GitHub commits here:
It is still a long way to go, and I still need to come up with a good article about this. But at least for some code it seems there is away to get Processor Expert (at least some of it) with the Kinetis SDK.
Hi Erich,
This is a nice article on KL25 and KL27 FRDM boards and the features development tools offer to support them.
I’d say KL27 can be of interest if one wants to explore the FlexIO IP and use it to come up with a solution when other (i.e. standard) peripherals are either inefficient or simply cannot perform the given task at all.
The KL27 reference manual provides basic examples on how to configure the FlexIO as a UART, SPI, I2C, etc. However, the FlexIO can do much more than that. I had a lot of fun exploring this IP using a KL27 FRDM board and building several applications supporting unique serial protocols and really strict timing requirements when handling external signals.
Regards,
Zeljko
Hi Zeljko,
yes, the FlexIO thing is very interesting, and it does exist on other FRDM boards too. But the the FRDM-KL43Z is more attractive in my view?
On the KL27 the I2C hardware is broken as explained here:
Repeated Start does not work correctly. Freescale/NXP has yet to issue a proper errata about the problem or better yet fix the parts.
Do you have any suggestions for a work around besides switching to bit-banging or FlexIO (which require a hardware change because they are on different pins)?
Hi Bob,
I have not used I2C on these new parts (yet?), so I might not be the best person to answer your question. But I was running into another nasty I2C bug with the KL25Z which exists on many other Kinetis devices (). So not sure if this is related to the same thing? Maybe the same workaround (not using clock devider) solves the problem on the KL27 too?
Alas it is yet an other new I2C problem, that is unique to the KL27 family.
I2C seems simple yet no manufacture every seems to get it correct. 😦
yes, and even worse: because the silicon designers are copy-pasting their IP from one design to another, they distribute the silicon bugs like a virus to all other devices 😦
I don’t think the KL27 I2C is broken, it just works differently due to double buffering. In polled mode, I have repeated start working by checking the EMPTY flag in the S2 register after setting RSTA in the C1 register. I haven’t yet checked how this would work in interrupt mode.
As usual with I2C, it’s best to read, re-read and sill re-re-read the user manuals, this applies to all MCUs I’ve worked with so far…
Pingback: Tutorial: Using Eclipse with NXP MCUXpresso SDK v2 and Processor Expert | MCU on Eclipse
Thank you so much”!!
Your post make me see how things work with kinetis and their SDKs. After two days of trying to make my KL25z work with MCUXpresso, Finally I was able of make the Red Led to blink.
Cheers from México!.
Hi Daniel,
yes, I know myself that getting started with the SDK is not simple and easy. Glad to hear that this article was helpful for you!
Cheers from Switzerland!
Pingback: Black Magic Open Source Debug Probe for ARM with Eclipse and GDB | MCU on Eclipse | https://mcuoneclipse.com/2016/03/13/tutorial-blinky-with-the-frdm-kl27z-and-kinetis-sdk-v2/ | CC-MAIN-2020-50 | refinedweb | 3,279 | 69.92 |
[meta] Ensure Geo replication can handle GitLab.com-scale update load
So far, our testing of Geo at GitLab.com scale has focused on backfill and basic correctness - can we successfully replicate all instance state on a static primary to a static secondary? When we create/update/delete a single repository/file/record, does the secondary work?
We need to get some assurance that Geo's replication architecture will handle updates at scale.
Investigating current replication requirementsInvestigating current replication requirements
(For almost every number mentioned, I think we should be interested in both average and peak rates. We might also want some measure of variance on the average, so we can get a feel for how "spiky" the demand is.
Events communicated by the Geo event logs ("repository" normally means "project"):Events communicated by the Geo event logs ("repository" normally means "project"):
- Repository updated ("git push", creating commits in UI/API)
- Repository deleted (UI/API action, namespace removal sends an event per project)
- Repository renamed (UI/API action, namespace removal sends an event per project)
- List of selective sync namespaces changed (ignore)
- Repository created (UI/API action) (why do we need this?)
- Repository migrated to hashed storage (V1 or V2) (ignore)
- LFS object deleted
- CI artifact deleted
We don't send events for these actions:
- LFS object added
- CI artifact added
- Upload added
- Upload removed
However, we may have an interest in these anyway, as backfill causes them to be replicated by the secondary.
Numbers we want to collect for GitLab.comNumbers we want to collect for GitLab.com
Event log depends on postgresql replication.
- Rate of data transfer for current postgresql replication
- Replication lag for current pg replication
- Rate of
git push(+ UI/API) actions
- Rate of data transfer for
git push(+ UI/API) actions
- Rate of project creations, renames and deletions
Numbers we may not want (if we depend on object storage, we might be able to ignore them)
- Rate of LFS uploads
- Rate of data transfer for LFS uploads
- Peak rate of LFS object deletions
- This happens in bulk via
RemoveUnreferencedLfsObjectsWorker
- Rate of artifact uploads
- Rate of data transfer for artifact uploads
- Rate of artifacts removals
- Some (most?) removals are in bulk via
ExpireBuildArtifactsWorkerand
ExpireBuildInstanceArtifactsWorker, so we may only want peak rates here.
- Rate of uploads
- Rate of data transfer for uploads
- Rate of upload removals
Adding these events to the log cursor will increase postgresql replication load, but hopefully this will be marginal compared to the rest of the database. Once we have numbers, we can make an estimate.
Once backfill is complete, we can (naively) assume that git data replication
load for each secondary will exactly match the
git push load on the primary.
It's a reasonable first-order approximation, and is more likely to over-state
the load than under-state it.
Investigating current replication capacityInvestigating current replication capacity
This is a fairly exploratory issue. We need to start with a primary and a fully-replicated secondary, apply a sustained period of database and filesystem writes to the primary (creating new issues, uploading files and LFS objects, renaming and updating repositories and wikis, etc), and observe the replication process in action.
We need to either shadow GitLab.com traffic (I'm not sure this is possible), or get some numbers from GitLab.com to tell us what rate and mix of updates we should be sending to our testbed primary and generate the load ourselves with, e.g., #3117 (comment 47093268)
Some important questions:
- How does postgresql replication lag change? What rate of sustained database updates can we maintain before we start falling behind? Can we add hardware to scale this?
- The Geo log cursor operates by adding and removing events to various tables on the primary. Does this generate substantial additional load on postgresql? How many events/second can we enqueue before we start to affect postgresql replication negatively?
- The Geo log cursor is a daemon on the secondary that processes those enqueued events. How many events/second can it handle without falling behind? What are its resource requirements while doing so? Can it keep up with ordinary and exceptional GitLab.com traffic?
- The secondary is notified of changes to repositories on the primary, and it enqueues an unbounded number of
git fetchoperations in response to log cursor events. Is this sustainable at GitLab.com scale? Should we apply concurrency limits?
- Once the updates to the primary have finished and the secondary claims to be synchronized again, is the secondary actually in a consistent state? Have unexpected race conditions removed or broken repositories or files? Did any events get missed? etc.
I reckon this could do with a GCP Migration label /cc @andrewn | https://gitlab.com/gitlab-org/gitlab-ee/issues/4030 | CC-MAIN-2018-34 | refinedweb | 781 | 52.39 |
return the current read/write position of a file
#include <stdio.h> long int ftell( FILE *fp );
The ftell() function returns the current read/write position of the file specified by fp. This position defines the character that will be read or written by the next I/O operation on the file. The value returned by ftell() can be used in a subsequent call to fseek() to set the file to the same position.
The current read/write position of the file specified by fp. When an error is detected, -1L is returned. When an error has occurred, errno contains a value that indicates the type of error that has been detected.
#include <stdio.h> long int filesize( FILE *fp ) { long int save_pos, size_of_file; save_pos = ftell( fp ); fseek( fp, 0L, SEEK_END ); size_of_file = ftell( fp ); fseek( fp, save_pos, SEEK_SET ); return( size_of_file ); } void main() { FILE *fp; fp = fopen( "file", "r" ); if( fp != NULL ) { printf( "File size=%ld\n", filesize( fp ) ); fclose( fp ); } }
ANSI
errno, fgetpos(), fopen(), fsetpos(), fseek() | https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/src/ftell.html | CC-MAIN-2022-33 | refinedweb | 166 | 70.53 |
Writing Extensions in IntelliJ
IntelliJ includes out of the box Maven support therefore it doesn’t require for you to download any extra plugins.
Importing the project
After you have generated your project with our Maven archetype, choose File → New Project and select Import project from an external resource
Click Next and choose Maven
Click Next and input the source folder where your project was generated. Leave everything else as default.
Click Next and pick your main artifact
Click Next
Now, click Finish.
You IDE should start importing your Maven dependencies.
Resolving the schema
Now, you need to instruct your IDE to help it find you newly generated schema so you get all the benefits from auto complete and validation.
Open the namespace handler xml that was generated for you by the archetype.
You should see something along the lines of this:
Select Manually Setup External Resource and pick the schema under target/generated-resources/mule.
That should be it. Now you’re done. | https://docs.mulesoft.com/anypoint-connector-devkit/v/3.2/writing-extensions-in-intellij | CC-MAIN-2017-30 | refinedweb | 164 | 63.8 |
This is related to.
I've tried searching via the SDK and also direct to REST endpoints, but my results only return the default indexed fields. For instance:
curl -sku admin:changeme -d search="search index=_internal | head"
returns these fields:
_bkt
_cd
_indextime
_raw
_serial
_si
_sourcetype
_subsecond
_time
host
index
linecount
source
sourcetype
splunk_server
But the same search index=_internal | head in the Splunk Web UI returns many more extracted fields.
index=_internal | head
It does not seem to matter if I use an export, oneshot, or "normal" search. Setting the namespace specifically does not seem to matter either.
Any insight appreciated.
Returned to this issue today with a couple ideas, and one of them seems to work. By adding |fields * to the search expression, my SDK searches are now returning full verbose field extractions. Kinda un-intuitive. Gonna play with it some more but looks promising. I guess the idea is to specify the fields you want in the output using the |fields command.
bw
We tried that. Check the performance of your searches. If there's a lot of fields, it will be geometrically WORSE. At least it was in our case.
What if you reference one of the fields you're expecting? Such as search index=_internal log_level=* | head. I know that saved searches have a behaviour where they will only run extractions on fields that are used in the search.
search index=_internal log_level=* | head
We have this same issue. Our product uses the Java SDK to execute a saved search. If we don't mention the name of the field in the search, then we get no data for that field despite it showing up in the Splunk UI. We tried doing things like "| field " at the end but that causes the performance to fall off a cliff. The only solution seems to be to put "field1= field2=*" etc at the end of the query, which of course means our customers MUST modify the saved searches they want to use with our product. This is a bit unfriendly. | https://community.splunk.com/t5/Splunk-Search/Why-do-I-only-get-default-indexed-fields-via-REST-or-SDK-call/m-p/148098 | CC-MAIN-2021-31 | refinedweb | 345 | 73.68 |
Reserving disk space to keep Windows 10 up to date
Windows Insiders: To enable this new feature now, please see the last section “Testing out Storage Reserve” and complete the quest.
How does it work?
When apps and system processes create temporary files, these files will automatically be placed into reserved storage. These temporary files won’t consume free user space when they are created and will be less likely to do so as temporary files increase in number, provided that the reserve isn’t full. Since disk space has been set aside for this purpose, your device will function more reliably. Storage sense will automatically remove unneeded temporary files, but if for some reason your reserve area fills up Windows will continue to operate as expected while temporarily consuming some disk space outside of the reserve if it is temporarily full.
Windows Updates made easy
Updates help keep your device and data safe and secure, along with introducing new features to help you work and play the way you want..
How much of my storage is reserved?
In the next major release of Windows (19H1),. More details below..
Follow these steps to check the reserved storage size: Click Start > Search for “Storage settings” > Click “Show more categories” > Click “System & reserved” > Look at the “Reserved storage” size.
Testing out reserved storage
This feature is available to Windows Insiders running Build 18298 or newer.
Step 1: Become a Windows Insider.
The Windows Insider Program brings millions of people around the world together to shape the next evolution of Windows 10. Become an Insider to gain exclusive access to upcoming Windows 10 features and the ability to submit feedback directly to Microsoft Engineers. Learn how to get started: Windows Insiders Quick Start
Step 2: Complete this quest to start using this feature.
Aaron Lower contributed to this post.
Follow Aaron Lower on LinkedIn
Follow Jesse Rajwan on LinkedIn
You have a “How Does It Work?” section and didn’t explain at all how it works. How is the storage actually reserved? NTFS quotas? VHDX? What?
Hi D.Pope, that’s an insightful question. Using a VHDX or even a separate partition were potential options that were debated. Those would provide guaranteed space for storing files needed during update. However those files would be in a different file system namespace entirely, which would overly complicate longstanding code, and would likely degrade update performance (for example some copying would have to be done in the end, since C: is ultimately the intended destination for the updated files).
Instead we designed an elegant solution that would require new support being added to NTFS. The idea is NTFS provides a mechanism for the servicing stack to specify how much space it needs reserved, say 7GB. Then NTFS reserves that 7GB for servicing usage only. What is the effect of that? Well the visible free space on C: drops by 7GB, which reduces how much space normal applications can use. Servicing can use those 7GB however. And as servicing eats into those 7GB, the visible free space on C: is not affected (unless servicing uses beyond the 7GB that was reserved). The way NTFS knows to use the reserved space as opposed to the general user space is that servicing marks its own files and directories in a special way.
You can see that this mechanism has similar free space characteristics as using a separate partition or a VHDX, yet the files seamlessly live in the same namespace which is a huge benefit. It’s not quotas. Whereas quotas define the maximum amount of space a user can use, this mechanism is guaranteeing a minimum amount of space. It’s sort of the opposite of quotas.
Thanks for the reply Craig, definitely appreciate it. I was also thinking something along the lines of the MFT zone before I read your other reply below. It sounds like it really is a new NTFS feature, and very akin to reservations in ZFS and other enterprise filesystems. That’s an awesome addition to NTFS. Does the new feature introduce any incompatibility or interoperability issues with previous versions of Windows if the disk is attached to them?
Yes there are some minor issues if you mount the volume down-level, but nothing too serious. Firstly, down-level NTFS does not understand the reserve at all, so you won’t get any of the functionality. For example applications can plainly see and use all the space on the volume, etc. Pretty expected. Secondly, if down-level NTFS happens to change the allocation size of a file that is in the reserve (examples: extending a file, deleting a file), then that change in allocation won’t get recorded in the reserve “database” (NTFS persistently tracks how much of a reserve’s space has been used). When you take the volume up-level again, NTFS will have the wrong idea of how much space needs to be reserved. In practical terms this means the user may see the wrong amount of free space, or servicing might not get the precise guarantee it was expecting. But we do have mechanisms in place to automatically correct this.
So this is more or less similar to the MTF reserve area on an NTFS volume, except that that reserve is not just for expansion of the MFT (divided in 1KB clusters), but for any file content or metadata (in the non-resident area outside the MFT, using standard clusters, usually 4KB)
This would mean that “fsutil fsinfo ntfsInfo” would provide additional metrics; that area would then be used by specific processes providing an access token to tune in which free area the space should be allocated first. I just hope that NTFS would also be tunable to offer application specific areas for their own local space management, e.g. for Visual Studio when compiling and creating many temporary files, or for media players or web browsers or games for correct operation without always mxing everything on disk, and fragmenting it a lot only for many very temporary small files that create holes and degrade the performance of other stable files.
The same would apply when working with log files that grow incrementally and allocate very fragmented space: these log files once closed, may be rewritten outside their initial reserve area using large fragment sizes (up to 64MB). But NTFS could also offer an automatic tuning for files allocated and written incrementally (when they are closed, and after some delay, these files would enter into a list of files to automatically reallocate elsewhere: each process starting to create files would automatically get a reserve area of about 64MB, preferably from a contiguous free area for these operations; if that space is exhausted, another 64MB reserve is allocated, and a background worker would rewrite defragment the first 64MB to collect the fragments and free up the reserve for further reuse by the same process or another process).
I also hope that the current bug in NTFS about the random fragments created for *::$ATTRIBUTE_LIST will be fixed: over time, they spread everywhere on disk, but they are unmoveable, and it becomes impossible to reduce a volume size (this is a problem for example during system update when it needs to create a new partition for another recovery image, or basic space maangement, even if the volume has a very large “free” space, unfortuenately very fragmented, including for critical areas like pagefiles/swapfiles, or for incremental VHDX used by containers).
NTFS should have these “reserved areas” managed by affinity, avoiding two concurrent processes (possibly threads in the same process) to put havoc on the volume and highly fragmenting it (this may be fast at first, but over time it becomes slower and this is a major risk for data integrity and file recovery after application crashes or in case of hardware faults)
It’s not actually that similar to the MFT Zone. The MFT Zone specifies a range of clusters on the volume that are preferred for extending the MFT, for the purpose of minimizing MFT fragmentation. I would say there are two major differences between the MFT Zone and storage reserved storage as described in this blog post. First, the MFT Zone is not really a reservation. MFT extensions take from the MFT Zone first, then take from areas outside the MFT Zone. Whereas other allocations take from areas outside the MFT Zone first, then take from the MFT Zone. The space is totally usable by applications if the rest of the volume fills up and the only space left is in the MFT Zone. That’s not true in reserved storage as described here; applications will get ERROR_DISK_FULL once they’ve used up all the free space visible to them. Second, the MFT Zone applies to a particular region of the volume, whereas reserved storage as described here does not. Think of reserved storage being simply a promise that there will be X amount of clusters somewhere on the volume, possibly fragmented and spread out around the volume, that can be tapped for a predefined need, in this case Windows update. It’s space in a mathematical sense, not any specific space that you can point to. So it not really relevant for the purpose of combatting fragmentation which is what most of your suggestions are referring to. However those are good suggestions for us to think about.
One thing you can do today to reduce fragmentation for something like a log file that grows slowly over time is pre-allocate the log file up front. I mean not just allocation size (which can be trimmed off at handle close), but file size. But that means you need your own way to keep track of how much of that file is currently being used.
If you want a closer analogy to an existing NTFS feature, probably the closest one would be the USN journal. If you create, say, a 1GB USN journal on a volume, NTFS reserves 1GB of space on the volume to allow the USN journal to eventually grow to a size of 1GB. Users immediately see 1GB less available free space. It’s initially empty, as there are no USN records in it yet. As USN records get appended, space will get allocated on the fly (from anywhere on the volume, just using standard allocator policies) and will eat into that 1GB without users observing any further loss in available free space. Reserved storage as described in this blog uses very similar concepts but generalized to arbitrary files/directories on the volume..
Correct me if wrong, but I don’t think “Manage optional features” in Settings tells the whole story. For me, the total of things there is a pittance, and I have no additional languages installed, either. Yet, I have over 7GB reserved.
My guess is that one should also consider “Turn Windows features on or off” in “Programs and Features” in Control Panel, right? The big-ticket item that I have there is Windows Subsystem for Linux.
Every time a language or optional feature is added to your PC, we will increase the amount of reserved space by [download size] + [install size] for the optional feature or language. This is to ensure there is enough disk space to acquire and install the new version of these optional features and languages during a feature update, while the previous version is temporarily maintained in Windows.old incase the update is uninstalled. When optional features or languages are uninstalled, the size of reserved storage is reduced by the same amount.
This resizing applies to optional features and languages added via Settings, it does not apply to those in the “Turn Windows features on or off” dialog. The features in the “Turn Windows features on or off” dialog are already on your PC, regardless of whether they are enabled or disabled—so there’s no need to reserve additional disk space for them.
Clean installs of recent Insider builds show the total amount of reserved space very close to 7GB, which includes sizing adjustments for the optional features and languages included in the product by default.
So anyone that bought a previous Microsoft Surface w/ 32GB or 64GB of RAM in the past (which is made BY Microsoft and meant to run Windows 10) would have 7GB shaved off? Seems like a pretty silly idea.
If a user has two drives (e.g. one small SSD for the OS and a larger HDD for data) would they AT LEAST be able to specify which drive to use?
Truth is that Microsoft always had problems coming up with an elegant way of updating Windows. As time goes by that dreadful WinSxS keeps munching up space and only an upgrade to a new build can solve the problem. *NIX systems on the other hand handle all of that beautifully — from kernel updates to entire build upgrades. Something Microsoft never managed to do.
I’ve got some concerns about this. How will this work on 32 and 16gb EMMC devices? Do you plan on updating the system requirements for windows to include this addition 7GB space requirement?
For people with 120gb SSDs that are soldered to the board, this is still going to hurt.
I really can’t understand the thought process that went into making this change. A mandatory 7gb loss of user storage. It’s going to be very unpopular and generate a lot of bad press about windows 10. I know this is bluster you probably hear a lot, but this change makes me wonder what other major changes in the way I can use my machines going forward. It’s very seriously enough to make me consider purchasing a mac as my next laptop.
I just love it so much that although my pc will be able to handle it, that my notebook will probably die as it have only 32Gb of memory and even though i have only about 8 GB of my files on it i already have only a little bit less than 4 GB left because windows already have almost 15 GB from those 27.8 GB of space taken from me… As if it wouldn’t be bad enough to not be able to disable system updates on Win 10 Home so that i would have even a little bit of memory for my personal use or wouldn’t have to worry about it dying because of some updates which i luckily can’t even install as there was update some months ago which caused my notebook to crash after about 4 minutes luckily i could uninstall it… A light in the dark that i don’t have memory to install that particular update so I’m safe for a moment… But i wonder what would happen if by some very bad luck my computer would somehow update itself as win 10 is famous for being able to so and lost that 7 GB of memory because it needs to reserve it so that people would be a little less angry about being a laboratory rabbits… Please remember that not every computer with win 10 have 1Tb of space so it can lose 7GB or that some computers have very weak specification as this notebook have 4 cores 1.33Hz and only 2 GB of RAM and this RAM is shared with integrated graphic card so im already amazed that it even starts up on this specification and i wonder who was so stupid to install win 10 OEM on it when they produced this computer… I don’t care about this comment so you can delete it but i just wanted you to know about that what i wrote above and that it’s not very good idea to force every win 10 to reserve some space just for “being safe” which probably will just destroy another part of system…
Yeah…. I’m not okay with this. It’s one thing letting you take control of my PC for the sake of ‘public good’ with forced updates to deal with security threats.
It’s another entirely to let you just steal disk space. Doesn’t windows already use enough disk space?
I’m not going to be going along with this. Over the last few years, you’ve been gradually giving me more and more reasons to make a more permanent switch to linux. This will be the last straw. With one exception.
The option to reduce reserved space to ‘zero’ is an absolute necessity. If I can’t opt-out of this, you can kiss this MS customer goodbye permanently. I know how to use WINE.
Hello,
1. It will apply to new install computer only? or both (old and new).
2. Reserve 7GB disk space also include Windows.old when doing In-place upgrade? or it’s separate.
At this time, reserved storage will only apply to newly manufactured PCs with the 19H1 version of Windows 10 or clean installs of the 19H1 version of Windows 10 on existing PCs. Those updating to 19H1 from a previous version will not see reserved storage.
Whether the size of reserved storage includes Windows.old is a great question. In short, the answer depends on how much free disk space a PC has and which path it follows when installing a feature update. More information about Windows’ multiple feature update paths can be found in the “Why does the amount of space required to update Windows vary so much?” section of this support article.
When a PC reboots to finish installing an update, we record the amount of free space available to the user. After the update, we take this value into account when deciding how to set the size of reserved storage. As best we can, we try to maintain the same amount of free space after the update. If there were 20Gb of free space just before rebooting to install an update, we want to leave as close to 20Gb as possible after the update.
How close we get to achieving this goal and how Windows.old affects these calculations varies due to several factors, including which feature update path the PC follows. For example, on devices with low disk space, the size of reserved storage will be reduced immediately after a feature update because of the additional disk space consumed by update after the device reboots. The size of Windows.old will be accounted for when defining reserved storage in these cases. When Windows.old is deleted later (either automatically or manually in Storage settings), the space occupied by Windows.old will first be reclaimed for reserved storage before becoming general free space.
While we try our best to maintain free disk space before and after the update, it is important we keep PCs up-to-date. To ensure devices continue to receive quality updates before Windows.old is deleted, we set reserved storage to a size that is large enough to accommodate them.
> .”
Definitely the attribute lists are shrink blockers. These is the case when there are compressed files on the volume. I get TONS of attribute lists spread everywhere on the disk that come from the fact they were allocated *outside the MFT* (because they were too large for the 1KB MFT record which includes the required filename and basic attributes), only to place the file allocation records somewhere else (these records are created by the NTFS WOF compressor). Even though these files are NOT open, you cannot move their attribute lists at all, thy remain highly fragmented everywhere.
I think this is a bug in NTFS defragmenter API or in the WOF compressor which permanently locks these records (not making them visible for something else, as if they were not there because the compression is sort of “virtualized” and the effective storage locations is somewhere else, but not in the basic allocation records in the MFT does not include the allocation records for the physically compressed data, which seems to use its own internal bitmap and different placement strategies).
So sometimes, it will becoime impossible to shrink the NTFS volume (not even offline: by booting a DVD or USB, and then not mount the volume with a drive letter; thjese fragments are unmoveable even in safeboot mode, they are locked internally inside the NTFS driver, or it refuses to move them elsewhere even if there are NO open file handle locking these files). So we can end up with a giant NTFS volume (several Terabytes) with only about 100GB used, and it’s still impossible to shrink it down with Windows.
The only solution I saw was to use an external tool running Linux with the NG NTFS driver (which successfully moves all these damend fragments).
On several systems I’ve seen also the MFT highly fragmented (by tens of thousands of these unmovable attribute list fragments spread in the MTF area or MFT zone), and the MFT zone placed near the end of volume and not placeable anywhere else (this meakes the volume also inshrinkable).
This is very problematic when we can’t reduce the system drive on an SSD, even if there should be large enough place to put a secondary system image (e.g. another guest OS in a VHDX drive stored on a secondary volume, outside the “C:” volume, and we don’t want the VHDX itself to be part of the regular system backup of “C” because we already have backups of VHDX stored elsewhere; but we want the “live” VHDX volume to be on the SSD adn not from the slow backup volume on external/network/RAID drives). Most people don’t have several SSD, but would like to partition their SSD (not suitable if it’s a 64GB SSD or lower, but there are now 128GB or higher SSD for which this is very desirable).
So please allow us to shrink these volumes and find a solution for these permanently locked, non-resident (outside the MFT) attribute lists, at least with an offline tool (or a special NTFS mount option usable in safemode, i.e. while booting from a DVD, or USB flash drive, or from the recovery environment in ramdisk).
Note that there’s NO problem for iunfragmenting the MFT itself, or the first clusters of the mirror. The only fragments that shbould be unmoveable are (of course) those of the “$BadCluster” special file (which is normally permanently unreadable/unwritable, except when using CHKDSK in order to try recovering them which generally fails and causes even worse damages).
Also is there a way to dismount the “PortableBaseLayer” container volume (at least temporarily) which is mounted from a VHXD whose content is also unmaintainable and permanently locked ?
On my system this VHXD takes 7.9971 GB. But Windows Settings>Storage shows that it only has 3511 MB reserved in reserveID=1 (but 0MB used), 536MB in reserveID=2 (163MB used), and 0MB reserved in reserveID=3 (0MB used). So I asume that it is actuall allocated as a dynamic VHXD (7.9971GB is just its maximum).
This VHXD is not properly setup.
The problem is also affecting other containers whose storage on the system volume “C:” is also permently locking fragments everywhere on the volume. They cause nightmares to manage the “C:” volume itself. I wonder why there’s no possibility to place these VHXD somewhere else (no on the C: volume itself, except possibly the reserve for system updates). An offline mouting option should also allow cleaning up the volumes and correctly manage the partitions: people want to have more freedom for other partitions indepenant of the system (e.g. for Linux parittions, or database storages, that Windows will never use for itself). It is important here also for managing backups, or for ease of deployment of images.
You seem to assume that everyone uses a single volume or has a giant JBOD array where placement of files don’t matter where there are plenty of drive interfaces/buses and lot of caches in RAID controlers. Generally many people have a small SSD and an external disk or small RAID array of hard or hybrid disks for their data and they want to manage the placement (they also want it for ease of deployments: copying small partitions or reformating them without loosing all the data that is actualyl stored in other partitions).
Even user profiles should not be automatically in the “C:” volume (which is generally small on a SSD: typically Windows, basic softwares, or office softwares, and UWX apps, system registry hives, plus pagefiles and swap files, plus local administrator profiles rarely need more than about 45GB, Windows Update needing about 16GB, not 7GB, and a 64GB SSD should work…). But user profiles tend to be much larger (notably users that manage database services, or users that have gmaing profiles and that will want to store their UWX or gaming library or collections of videos/movies/musics elsewhere without fearing their will exhaust the “C:” volume). Developpers want also to place their local repository elsewhere (and accessible possibly from several boot environments): the “C:” volume should remain under a reasonnable limit.
I really suggest that Windows proposes to create all user profiles on another partition if the volume is large enough (a second partition should be used on all disks that are 128GB or larger, allocating 64GB only for “C:” and placing everything else (except system profiles) on another. Unfortunately, Windows never asks where to create the user profiles, even if there are other partitions or disks, and it still wants to place the paging file and swapfile on the “C:” volume (and on systems with about 8GB of RAM or more, the swapfile, swap file and hibernation file can be decently placed on a hard disk, outside the SSD containing the “C:” volume).
The storage manager lacks many configurable options, and even if we instruct it to place user files elsewhere, eah new Windows image resets ALL these options to default without ever asking, this must be reconfigured each time !
Finally I wonder where you took the estimate of 7GB needed for upgrades: actually every upgrade requires MUCH more, it is at least 20GB (including the downloaded image, Windows.old, all journals and logs, and system snapshots). The “preboot” downloads, logs, temp files also should be placeable elsewhere (they only take place on the C: drive even if there’s ample enough space elsewhere), they are not needed on the system disk to perform its upgrade.
And the BCD store is frequently left garbled with invalid entries at each Windows image upgrade (look at “BCDEDIT /ENUM ALL” after the install) and generally leaves corrupted entries about the system recovery; sometimes this causes the recovery option not working at all or failing to locate the system, or failing to locate the user registry hives, or swapping drive letters incorrectly. Each time I need to perform several “BCDEDIT /DELETE {entry}” and fix a few ones (those showing “unknown” locations).
The ReAgent also alse troubles installing itself (I’ve seen cases where the major upgrade forgot to update the winRE, failed to locate its partition after deleting its content, leaving no “winre.wim” anywhere: you should provide a way to restore from Windows Update an image that can be reinstalled on the recovery partition; but there procedure is much too complicate: you need to download the latest ISO, then mount the full WIM on disk, which requires lot of storage space, only to extract the 400MB winpe.wim we need to reinstall on the recovery partition). It is documented but very difficult to find in MSDN, where basic settings are spread on lare unmaintained pages with lot of obsolete contents. And Microsoft does not provide any “troubleshooter” to help restoring a missing/corrupted recovery partition or an error reported by “Reagentc /info” (generally because the BCD entry is not up to date or links to phantom “unknown” partitions). These are really difficult to diagnose and fix (and requires very technical commands that most users can’t use themselves, with many complex GUID which are impossible to remember).
Another suggestion: fix the “BCDEDIT” tool to add a few missing “wellknown” GUIDs: notably “{resume}”, “{recovery}” and “{recoveryvolume}” (in addition to “{current}”): this would facilitate reparation and cleanup of the BCD store with simpler commands (these aliases should be linked to the “{current}” entry which reference them.
The PortableBaseLayer volume is what Windows Sandbox and the Windows Defender Application Guard for Microsoft Edge boot from in recent 19H1 preview builds. This volume has nothing to do with Windows Reserved Storage at all, and we’ll soon hide it from Disk Management, Defrag, and BitLocker. If no Windows Containers are running and you stop CmService, the volume will be dismounted and the virtual disk detached.
How’s this reserved storage work on systems with limited disk space? My linx tablet doesn’t have more than 4gb free disk space. Are you taking the 7gb from space already allocated by the OS for updates? Can this reserved storage by a non system drive?
I like the idea. Sort of a “son of swap file”, but for a higher level processes. Is the reserved storage directly accessible? Frequently, I’ve needed to access temporary files for troubleshooting purposes. Will this still be possible?
I wonder about devices with low storage (tablets with 32GB). What about these ones?
Will it be possible to change the location of this reserved storage? For example, in the case of a computer like my laptop with 2 disks: 1 x 128 GB “SSD” (for OS & programs) + 1x 1TB “HDD” (storage for data), to be able to set this storage space reserved on the “HDD” and not on the “SSD”? It would be really better.
Will this also apply to Windows systems that upgrade to 1903, as opposed to just new “devices that come with version 1903 pre-installed or those where 1903 was clean installed?” So for example, if I’m running Windows 1809 and upgrade to 1903 when it comes available (via Windows Update), will it set aside disk space at that point?
This will not apply to PCs that update to 19H1 from 1809 or previous versions of Windows 10 through Windows Update.
At this time, reserved storage will only apply to newly manufactured PCs with the 19H1 version of Windows 10 or clean installs of the 19H1 version of Windows 10 on existing PCs.
Hi Terry I would like to recommend the following:
Include a switch in settings app > storage to enable or disable reserved space instead of a registry.
For devices not controlled by domain / intune it should be enabled by default, if enough free space (eg. 20-30 GB) is available on C: via a next CU before 19H1 releases. It should not be disabled if the free space is below to prevent “low disk space” messages.
The feature is too helpful to make it enabled only for a handful of new installed devices. Since Windows 10 is reliable people usually never reinstall the Windows anymore.
Will admins be able to inspect contents of, manage, copy from/to, remove contents from, or totally reclaim/kill off the reserved space?
Not sure how this is useful. On my Windows 10 tablets with 32 or 64 gb storage having any apps loaded at all means major updates require a usb key to execute without running out of space and semi regular cleaning with the disk cleanup tool and cc cleaner.
Other better ways to ensure space would be stop pre loading office and crapware apps. Auto remove temp files and clean up after updates better.
I am all for reducing Windows size by reducing its footprint. Google’s latest backup and GSuite download has been a savior on space management and function. I do commend the latest changes to Onedrive for helping as well. This new “feature” seems like a major step back.
I have 3 questions and would love to get some insight on those:
1: Will this affect Windows-Server, that install that update, in the same way?
2: What happens on a client if before installing the new update there is not enough space available to reserve those like 7 GB?
3: Will i be able to deactivate this reserving storage in a company-domain via like GPO?
Thanks in advance for any answer or hint
I’d posted a comment before but it never got on here 🙁 Must have been moderated out? Not sure why.
My interest is in low-disk space systems. such as the Linx 1020 which will never have 7GB free for reserving. How will you “deal” with those systems going forward? I know you’re not enforcing reserved storage at this time but how the heck would it ever work for those systems like mine which struggle to have 1GB free.
According to blogpost, the feature starts on build 18298.
Current ISO insider build is 18290 on slow ring. Is the feature showing up in that build? If not, will in-place upgrade to fast channel enable the feature?
Thanks.
Is there a way to query the amount of the reserved storage and the amount / percentage of data residing in the reserved storage via the command prompt (fsutil) or wmic or PowerShell?
Hi Vadim, yes!, ‘fsutil storagereserve query C:’ will show you a summary of the reserved storage areas defined on C:, including how big they are defined to be and how much space is currently being used in each. I don’t believe there’s anything in wmic or PowerShell at the moment.
Hi Craig! Thanks a lot, I see three reserved areas on my system, and one of them is 0mb. What do these areas represent? Different types of data, like temp files, optional features, and languages or…?
Also, could you please clarify the meaning of the registry parameters for the ReserveManager key?
I suppose MinDiskSize being set to 20GB means the reserved storage is not enabled on smaller disks.
What about these?
BaseHardReserveSize
BaseSoftReserveSize
MinUserSpace
Hi Vadim,
Unfortunately we don’t have any specific details to share about how each reserved area is being used. Between reliability updates, feature updates, features on demand, language packs, offline servicing, servicing from media, etc., etc., you can imagine there are a lot of different servicing scenarios that each operate a bit differently. The short but perhaps unsatisfying answer is the OS uses these areas as needed according to the scenario. The specifics will evolve over time. The reason there are multiple areas instead of one is simply that they can be managed separately which has advantages. For instance an area might only be relevant during a brief period, and when not needed we resize it to zero (essentially disabling it), which explains what you are seeing.
Those registry settings are for internal use by Reserve Manager (a new servicing component whose job is to manage the reserved areas). They are not meant to be tweaked by hand. I would strongly discourage you from altering anything there. At best this could negatively impact the servicing reliability goals we are striving for, and at worst this could cause unforeseen issues.
I appreciate your excitement and interest around this area!
Is there any windows performance benefit to this, either through memory management, temp files, example like temp internet files? Are they still in the normal location under reserved space or has it all moved?
I’ve read this twice and every time I’ve gotten more angry about this. Between the fact that your update have historically nightmarish and poorly done (a well documented history back to Win 95) or the fact that corporately Microsoft seems to believe they should have more control and decision making power over my system then I do and thereby feels justified in taking storage with permission or agreement for windows user in order to force us to update in a manner and timeline decided by the corporation.. are you kidding? The unconscionable manner in which Microsoft bullies their customers, assumes their correctness and right to decide for what and when they make changes is beyond reprehensible. There is NO justifiable reason to approach the matter of updates and consumers choices in this way. I’m done with this company, I’m going to move back to win 7 until I can make a smooth transition to Linux and get rid of Windows. Corporately, you just dont get it…you dont produce a good enough product to be bullying customers and frankly…the competition makes windows less and less attractive. For once, trying asking or better yet….give your customers the option…you might be surprised and you certainly wont like like gargantuan jerks like this latest moves does.
Is this going to be an automatic forced download? As someone that lives in a rural area that relies on an ISP which imposes a download limit and charges if I go over that limit I want to know. Should it be forced then I’ll keep this abortion of an OS offline and rely on my Windows 7 PC for anything internet related. Oh, and in case you were wondering why I have a W10 computer it’s to play VR games and nothing else
How does this affect the SXS folder in the Windows subdirectory? One would logically think that since this folder is used for storing updates (among other things), that the new reserved space would reduce the space used by the SXS folder, and thus offset the impact of losing additional storage space for the reserved storage area.
WIP Build 18334.1 (19h1)- Posted feedback on WIP Feedback Hub on this issue, as follows:
Quest announcement gives instructions to enable the new ‘Reserved Storage’ feature. Following these instructions to the letter (see attached graphic), there is no ‘Reserved Storage’ option, available in setting,s to enable. Please let me know what I am doing wrong or if this is just another anomaly. (Recorded process attached).
While I can’t attach the referenced screenshots basically, these are the instructions as noted here as well.
Enable Reserved Storage
Note: To complete this Quest, your device must have Windows 10 Insider Preview Build 18298 or greater. Please update before proceeding further.
1. Right-click the Windows icon on the taskbar, search for Registry Editor, and Open it.
2. If prompted, select Yes to allow the app to make changes to your device.
3. Select HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\ReserveManager.
4. Right-click ShippedWithReserves to modify and update the value to “1.”
5. After you upgrade the device to the next available build, you will be using reserved storage!
6. Follow these steps to check the reserved storage size:
a. Click the Windows icon > Settings > System > Storage .
b. Click Show more categories > System & reserved.
c. Look at the Reserved storage size; the value should be non-zero.
As noted in my feedback, after completing the registry edit as instructed in 1-4, above, the settings for System & Reserved do not have ‘Reserved Storage’ option noted. Only System Files, Virtual Memory, and Hibernation FIle are listed. As the WIP Feedback Hub is pretty much one-sided, Insiders rarely get responses to reported build issues, but rather they are collected by the MS WIP Team, to improve the next build, which is why I came here, in hopes of getting a response to this problem. Any input on this issue will be appreciated. Thanks.
I see a slight problem with the reserved storage “feature” coming in the next version of Windows 10.
The problems is of devices that have limited storage (i.e. low end systems that ship with 32GB of storage).
I currently have a low end Windows 10 tablet and with light use, I normally able to get around 7.5GB free but if you’re reserving 7GB I see a problem with those devices of lacking even more space. I was wondering if it would be possible to lessen the space to be reserved or move some of it to a different drive instead of lumping it all on the C: Drive. | https://blogs.technet.microsoft.com/filecab/2019/01/07/windows-10-and-reserved-storage/?ranMID=24542&ranEAID=je6NUbpObpQ&ranSiteID=je6NUbpObpQ-Bm0R4hDqE.77O37wmaQ1Vg&epi=je6NUbpObpQ-Bm0R4hDqE.77O37wmaQ1Vg&irgwc=1&OCID=AID681541_aff_7593_1243925&tduid=(ir__jtkw3requwcydmhwe9odmoh1e32xhnqp0tiprdsb00)(7593)(1243925)(je6NUbpObpQ-Bm0R4hDqE.77O37wmaQ1Vg)()&irclickid=_jtkw3requwcydmhwe9odmoh1e32xhnqp0tiprdsb00 | CC-MAIN-2019-30 | refinedweb | 6,705 | 58.11 |
There are following types of JavaScript Properties in Lightning Web Components:
- Private and Reactive Properties
- Public Properties
- Getter
1. Private and Reactive Properties
Private properties are only accessible in the component it is declared. It is declared using only an identifier name. We don’t have to provide any keyword for the datatype. We can also assign a default value to Private property.
Syntax for declaring Private Properties:
import { LightningElement } from 'lwc'; export default class JSProperties extends LightningElement { // This is Private property. strName = 'Nikhil'; }
In the above example, we have declared strName as Private Property. We can access any property in the component as {strName}.
My name is {strName}
Previously, Private properties were not Reactive by nature. In the Spring ’20 release, all Primitive Private properties are made Reactive. but what are Reactive properties?
Reactive Properties
When the value of the property is updated in JS Controller, it should be reflected on UI as well. Only values of the Reactive properties are reflected on UI once it is updated in JS Controller. Primitive Private properties are Reactive by nature but in order to make Non-primitive properties Reactive, we have to use a track decorator. So it creates a one-way binding from JS Controller to Component.
First import the track decorator in our JS Controller and use @track while declaring property to make it Reactive just like below:
import { LightningElement, track } from 'lwc'; export default class JSProperties extends LightningElement { // Non-primitive property. @track lstOfNames = ['Nikhil', 'LWC']; }
So whenever the lstOfName property is updated in JS Controller, it will be reflected on UI.
2. Public Properties
The value for the Public properties is passed from the parent component. Hence we can only initialize the Public property in component with a default value where it is declared. After that, it is Read-only. Its values should be passed from the Parent component only. We can’t change its value in the component where it is declared.
To declare the Public property, we need to import the api decorator and then use @api while declaring the property to make it Public. Public properties are Reactive by default.
In below example, we have created Public property and assigned default value Friend to it.
import { LightningElement, api } from 'lwc'; export default class ChildCmp extends LightningElement { // Public proerty with default value. @api strPublicName = 'Friend'; }
But in the below example, we are passing the value Nikhil from the Parent component, and it will be considered as the value for strPublicName.
<template> <lightning-card> <c-child-cmp</c-child-cmp> </lightning-card> </template>
If you notice the name of attribute str-public-name for the Child component, it is in the Kebab case. It is recommended to have the property name in JS Controller as Camel case because the attribute name must be in Kebab case when used as an attribute on the Child component. Even the Component names should be in the Kebab case when using in another component just like c-child-cmp.
Public Boolean Properties
Boolean Public properties work as normal HTML 5 Boolean attributes. When passing value to the Boolean property of the Child component, we just need to mention the name of a Boolean property in the Child component tag as an attribute like below:
<c-child-cmp str-public-name='Nikhil' bool-show-name></c-child-cmp>
Here, the bool-show-name is Public Boolean property. There is no need to assign a value to it. If the Boolean property is mentioned, its value will be True even though we assign its value as False explicitly. Hence if you want to pass True, just mention the property name as attribute name in the Kebab case. If you want to pass False, don’t mention the property name at all. Hence it is also important to assign the default value as false in JS Controller like below:
@api boolShowName = false;
3. Getter
A Getter is used to compute the static values. Getter is defined in JS Controller just like methods but prefixed with get keyword like below:
get accountName(){ return 'My name is ' +this.strPublicName; }
To access this Getter on UI, we just need to use {accountName} and it will display My name is Nikhil on UI considering the value of strPublicName is Nikhil.
If you want to check more posts about Lightning Web Components, you can check it here.
This is all for the JavaScript Properties in Lightning Web Components. In case you don’t want to miss a new post, please Subscribe here.
You can follow me on below social media. Thanks !
3 thoughts on “JavaScript Properties in LWC (Lightning Web Components)”
Hi Nik,
You have mentioned for public properties that “We can’t change its value in the component where it is declared.”
I think this is not correct! Can you please check once.
Hi Rohan, Yes we can update but it is not a good practise. In general, it is a bad practice to override a public property from within a component as it goes against the unidirectional dataflow model. That said, the LWC framework prevents mutation to objects passed as public property. If a component receives an object via public property, any property addition, update or delete on the object will result in an Invalid mutation error. | https://niksdeveloper.com/salesforce/javascript-properties-in-lightning-web-components/ | CC-MAIN-2022-27 | refinedweb | 880 | 54.63 |
Chapter 18. Drawing with Shapes
What You’ll Learn in This Hour:
Using Shapes
Drawing with the Path element
Understanding stroke and fill
Working with complex shapes
Just like the rest of WPF, the API for 2D drawing is both extensive and powerful. Our goal in this hour is to equip you with the basics you need to start working with WPF quickly, as well as providing a foundation for learning if your projects have a need to go deeper.
Drawing Basic Shapes
The WPF has a number of basic shapes built in. The shapes are
Line
Polyline
Polygon
Rectangle
Ellipse
Path
All these classes live in the namespace
System.Windows.Shapes.
Lines and Strokes
The best way to understand how these shapes work is to see them in action. Let’s create a simple project for the purpose of understanding ...
Get Sams Teach Yourself WPF in 24 Hours now with O’Reilly online learning.
O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers. | https://www.oreilly.com/library/view/sams-teach-yourself/9780768678796/ch19.html | CC-MAIN-2020-29 | refinedweb | 171 | 67.49 |
ENIGMA compiler
From ENIGMA
ENIGMA's compiler takes EDL and compiles it to C++.
Process
Before the process begins, the compiler is already aware of the target platform, the necessary make calls, and the variables, functions, and other important definitions from the C++ engine. The process begins with this information tucked away in global memory.
First and foremost, the compiler tosses around resource names and makes declarations for them, adding them to a new virtual namespace allocated for this compile. This includes minor code generation for instances. From there, it begins lexing all the code and does some parse operations such as adding semicolons (see page on Parser for details on lexer and preliminary parsing), and then takes note of all the types that are declared locally in each object and script. At this point, the compiler has a structure for each event in each object and for each script, containing the code, the lex string, and a list of the variables it declares for each scope: globally, instance-locally, and via dot-access.
From there, it looks at which objects make what calls to what scripts, and which scripts call what scripts, in a complex resolution pass that results in a list of every script that could possibly be invoked by an object. Using that resolved list, the compiler scopes the scripts into the appropriate objects and then starts at the bottom and works its way up, gathering variables used by any script or event. The results of this pass are a comprehensive list of both scripts invoked by and variables used in each object.
Using the list of used variables and scripts for each object, the compiler can make choices on where to scope scripts and objects, be it solely at the global scope (using with() where necessary), at the parent-object scope (where all objects will inherit it), or at the individual object scope.
From there, the compiler conducts a second pass, using its newly gathered information to resolve access routines and other heavily context-dependent mechanisms of EDL, many of which involve heavy code generation. Dot-based access of form
a.b, where 'a' is an integer, resolves to either
enigma::glaccess(a)->b in the case of shared or "global" locals, or
enigma::varaccess_b(a) for strict locals. It is up to the compiler to generate these functions.
- First, the compiler must isolate a type that will represent 'b.' It does this by crawling objects to find any that declare it explicitly.
- If all objects agree on one type, it writes an access routine for it and allocates a dummy to be returned to prevent segfault.
- If they do not agree, it bitches and does so anyway
- The accessor function switch()es the object index. It then makes a case for each index that contains the correct definition of someVariable.
- The default case returns the anti-segfault dummy for that type; it is declared exactly once in form
static someType dummy_someType;
After that, the hard work is basically done on ENIGMA's part; it uses the lex buffer to dump the code buffer into the specific files under ENIGMAsystem/SHELL/Preprocessor_Environment_Editable/ so it looks nice, meanwhile adding the strings and other collapsed sections back in. At this point, it is compiled to C++, and it is just a matter of invoking the GCC on the produced code. Native compiler invocation is done through Make; when that process finishes, the game is officially natively compiled.
From there, the compiler simply tacs resource data onto the end of the executable, or where requested by Compilers/*/compiler.ey. If requested, the compiler will then invoke the game.
Code Generation
Getting GML to bode well with a C++ compiler is obviously impossible without generating some additional code to be compiled with the game. The code often fills large gaps in the ENIGMA engine. Among the various code pieces generated to get a variety of games to compile are the following:
- A switch statement is generated for use by instance_create(). Since class ids cannot be enumerated in an array in C++, the switch statement pairs each object index with its own
newstatement.
- A framework of structures is generated for each object. Locals and scripts are then scoped into each structure as appropriate.
- An accessor function is generated for each local variable accessed as
object.local_variable.
- A common-class cast is generated to allow instance_change to be implemented; each object has a method to cast to the common class and a constructor from it.
What needs done to the compiler
- Template type tracking: The C Parser needs to keep track of all template instantiations. This may involve creating an instantiation scope in each template, or creating an instantiation parameters list in each object.
- Default flag: All searchable objects need to have a flag set so a special case doesn't need made for 0xFFFFFFFF flag search in the C Parser.
- Constants and enums need flagged as such: For future items on this list to work, the "const" keyword needs acknowledged.
- Flag pair "local const" needs special treatment: Local constants should be initialized in the constructor instead of set inline to avoid errors.
- Local array bounds need coerced: To permit having a local array of variable-sized dimension, array subscripts should be determined to be constant or variable. Constant subscripts should remain in the declaration, variable subscripts should be replaced with * and allocated in the constructor.
- Switch statements need coerced: To allow for a more efficient switch statement, the types of the switch value and of each case label should be coerced. There is only one switch value type, the key type. Since there are typically multiple case labels, the worst type used in any of the switch()'s case labels will represent them all. The "best" type is the smallest integer type, then largest integer type, then any floating point type is bad, and the "worst" is any string or variant type. The case type is considered
constif and only if all of the case label types are constant. Scenarios for (key,case) type pairs are as follows (??? indicates that the type is irrelevant, all const types are denoted as such):
- (int:const int): The statement is left alone completely.
- (???:const ???): The statement is replaced with a hash function and integral keys as the case labels. An if() is placed in each case to make sure the hash was accurate.
- (???:???): Regardless of switch value type, if the case types are not all constant, the
switch()must be replaced with consecutive
if()s.
- Locally- and globally-declared array subscripts need special treatment. Variables marked "const" need to be declared first; of those, local consts need initialized via () in the constructor. It'd be a good idea to allow = for in-place construction and () for in-constructor construction.
- eYAML files of locals need acted upon: Ism presently has a mechanism by which she can look up alarms in separate sources. Files like the one she created manually need generated automatically by ENIGMA in accordance to the eYAML files under Extensions/.
- Variable tracking mechanism needs implemented: In accordance with the eYAML files mentioned above, a system needs implemented that can execute certain code at the end of events in which it is possible that a value may have changed. This is useful for establishing spacial containers for speeding up the collision system.
- The options in the LGM ENIGMA settings pane (and the ones that were requested but aren't there) need implemented. This is actually relatively trivial and not worth naming, but a couple not listed are as follows:
- Scripts should have two modes for max efficiency; either being placed in the global scope and var accessed via a with(), or being scoped into each object that uses them (this is the current behavior)
- Global array types should have two type options: pointer or var (many people use view_xview without an array subscript, which will error for int* but not for var).
- Switch() should have an option to use strictly GML or strictly C methods.
- There needs to be an option for = vs == treatment in conditionals and parameters.
Toolchain Calls
To allow compilation of games for all platforms, and to allow cross-compilation, a system needed incorporated for compiler management. Though the About.ey files allow for some specification of system dependencies, compilers need to be delimited in a manner in which they can be looked up by the name of one of the three operating systems on which the IDE can run. In other words, a directory called Compilers/ must be kept containing a folder for each of Windows, Linux, and MacOSX. In each of those child folders, an eYAML file must be kept specifying fundamental information needed to call the toolchain executables. | https://enigma-dev.org/docs/wiki/index.php?title=ENIGMA_compiler&redirect=no | CC-MAIN-2019-51 | refinedweb | 1,460 | 58.01 |
Issues
ZF-8669: Zend_File_Transfer: Replace constructor with factory method
Description
Instead of the proposed patch in ZF-8668, I figured that ZFT would be better suited to have a factory() method for instantiating and returning a ZFT adapter (similar to Zend_Db::factory()). This proposed patch replaces the current constructor implementation with a Zend_Db::factory()-like factory method that accepts two parameters:
- $adapter - the name of the adapter, relative to the Zend_File_Transfer_Adapter namespace (i.e., 'http')
- $config - Either a PHP array, or Zend_Config instance, of configuration options.
The $config supports all of the options that the ZFT adapters support, and also two ZFT-specific options:
- adapter: the name of the adapter to use
- adapterNamespace: a user-defined namespace to use in place of the default 'Zend_File_Transfer_Adapter' namespace.
Side Note: I have read the proposal at… and feel that this proposed patch still fits within the guidelines. The only difference is how the adapter is gotten: the proposal incorrectly implies that a direct instantiation would return an adapter, when it cannot. Thus, the factory method. The use cases would need to be updated to reflect this change. If there is more that I need to do to have this patch accpeted, then please advise.
Posted by Thomas Weidner (thomas) on 2009-12-31T05:34:52.000+0000
Closing as won't fix.
A factory method does not allow to have up- and download adapters attached. This would negotate the futural benefit of this component.
Posted by Ken Stanley (dohpaz) on 2009-12-31T06:44:39.000+0000
How do you figure? The whole point of the factory is to instantiate adapters. The entire concept was borrowed from Zend_Db, which has many adapters. For example, currently with the Zend_File_Transfer_Adapter_Http, you simply call Zend_File_Tranfer::factory('http', $http_adapter_options_array) to get the instance of the adapter. Similarly, when there are other adapters (such as the upload and download that you mention), you would call Zend_File_Transfer::factory('upload', $upload_adapter_options_array) and Zend_File_Transfer::factory('download', $upload_adapter_options_array).
How does this negate the "futural" benefit of this component? As it stands, the current implementation of Zend_File_Transfer::__construct() does not work. So really, this factory makes any "futural" benefit possible. :)
Posted by Ken Stanley (dohpaz) on 2009-12-31T06:49:35.000+0000
Additionally, with the extra 'adapterNamespace' parameter that may be passed to the options array, it allows extending existing adapters for full OO customization.
I really would appreciate you reconsidering your decision to deep-six this patch. Thank you.
Posted by Ken Stanley (dohpaz) on 2010-01-15T06:54:36.000+0000
I would appreciate a response, thank you. :)
Posted by Thomas Weidner (thomas) on 2010-03-19T13:21:31.000+0000
Closing as won't fix
No benefit for a switch from "new Object" to "factory()". It would even add a BC when the constructor is made protected and its no benefit to have 2 methods to initiate a object the same way.
Just because another component uses this pattern it does not mean that it's required for this component.
Posted by Ken Stanley (dohpaz) on 2010-03-21T15:12:26.000+0000
The problem is, you absolutely, positively CAN NOT return ANYTHING from a constructor. Therefore, the current implementation DOES NOT WORK in its current state. Therefore, the most logical solution is to convert the constructor into a factory method. So yes, because another component uses this pattern does mean it is required for this component.
Posted by Thomas Weidner (thomas) on 2010-03-21T15:40:31.000+0000
Following your conclusion means that Object Oriented Programming itself is completly useless as you think that the constructor of an object does not return anything.
In my understanding a constructor returns at last a new instance of the object which it has to create. Otherwise "new Object" would not work.
Posted by Ken Stanley (dohpaz) on 2010-03-21T18:12:34.000+0000
Have you not looked at your code? In the constructor you are attempting to return the adapter (not the constructor's class), which cannot be done. The changes that I propose would allow you to be able to instantiate one or more file transfer adapters, where as your code currently cannot, and does not. How does changing the constructor to a factory not fit the original intent of this class? It seems to me that you haven't even looked at the patch, and have simply made up your mind not to even give it any consideration.
Posted by Thomas Weidner (thomas) on 2010-03-22T12:04:56.000+0000
Closing as won't fix.
As mentioned before, your code breaks existing functionality. And planned addons are made impossible by your change.
I don't know from where you have the opinion that the constructur returns an adapter. It returns a new object of itself.
Posted by Ken Stanley (dohpaz) on 2010-03-22T12:22:28.000+0000
I have seen the error of my argument. Apparently, there were changes in 1.10.0 that were not noted in this ticket that changed the behavior from when this ticket was originally posted against 1.9.6. The constructor fiasco that I was speaking about can be seen here:…
My apologies. Seeing the updated 1.10.x version, I can see now what the solution that you spoke of is. My apologies for bothering you with this. :) | http://framework.zend.com/issues/browse/ZF-8669 | CC-MAIN-2016-22 | refinedweb | 891 | 57.87 |
Java provide support for GUI in its core packages. AWT is one of the basic packages. It gives all you need to create a graphical software using frame windows. It provide buttons, scrollbars, text areas, text fields and many more components
Introduction to AWT
AWT stands for Abstract Window Toolkit. This relates to the branch of Java which is related mostly to the Graphics processing. You can create the GUI interfaces and create event based action on that interface.
Note: by interface here we don't mean Java Interface.
There are several classes in AWT package which will help you to do all the necessary steps before we start coding let us give you an overview of AWT important and commonly used classes.
Note: AWT is the base for the Java swings framework too. All the classes in swings extends from awt classes directly or indirectly.
Component: is the root class or say parent class of all the other classes in the window environment. The Component is made a parent because as its name suggests it satisfies the is-a relationship. That means Frame is a Component or say Container is a component. In this way.
Container: The container is sub class of the Component. Every graphical object will be hold in the container.
Panel: This is direct child of Container with no much difference. The applet is drawn on a panel. The panel serves tha base for the graphics drawing.
Window: this is nothing more than a subclass of container but it acts as a base for the GUI application just like the panel does.
Frame: This is going to be most important class for you as this helps to identify a graphical interface differently. It provide the boundaries and borders for the window based application.
Note: by window we don't mean MS windows operating system. You must remember the key feature of Java is platform Independence. The window is a graphical applet which displays the components on it.
Beside these main components there are several other classes Menu, MenuItem, Button, TextArea, TextField, Image, Label, ScrollPane, Font, Graphics, Robot, Event, Dimention, List, RadioButton, CheckBox etc. These are the components we see in our day to day life with GUIs.
Working With Window
The window we are talking about is the Framed Screen of any Application which helps us to do some operation to that software application. The general window contains the frame which provide it the close, minimize and restore buttons at the top. The re-sizable border which also separates it from other framed windows. You can also bind the event with the frame or window, using the event listener interface WindowListener.
In the first program we are only going to develop a framed window which will display a message. for this purpose we will use the Frame class as the superclass. Further code is as in following program.
package GUI; import java.awt.*; import java.awt.event.*; import java.awt.event.WindowAdapter; public class FirstFrame extends Frame{ FirstFrame(){ super("First GUI Frame"); setSize(500, 500); setVisible(true); setBackground(Color.black); setForeground(Color.white); addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent we){ System.exit(0); } }); } public void paint(Graphics g){ g.drawString("something", 50,50); } public static void main(String[] args){ new FirstFrame(); } }
So here we are with our first ever hello world program in graphical environment. So lets us see the result and then we can worry about the working of the code.
So as you see this is a framed window of dimension 500x500. the title as we have set is First GUI Frame. the frame class poses two constructor, Frame() and Frame(String s). The second one takes the string type of parameter and set it as the title of frame. As in this case we have extended the Frame class in FirstFrame class we used super() constructor to do the work for us. after that we have set the size of the frame window. By default the frame is re-sizable but you can make it non re-sizable by using setResizable(false);.
Note: if your frame look different on the title bar and buttons, don't panic there is nothing wrong probably you are using some other operating system(we are using ubuntu). The look of frame depends on the OS usually.
There is a whole lot of methods provided with the frame class which you can use to customize the look and feel of your graphical application. For instance we have colored the frame background black and the foreground as white.
What you need to understand is that, the frame does not work when you close it using the close button because there is no action bind to that button just yet, so we bind a window listener to the frame and we set its closing event to exit the execution. And yet again the paint method is used to draw on the frame.
This is the first ever application that we used to create a frame next time we will be creating a frame in an applet and we will try and put some buttons and text field on that frame. But do remember you can not add the frame to applet because it is a panel which goes in a different hierarchy, when we said frame in an applet we meant we will initialize the from an applet. If you have practiced this tutorial you can move on to Working With Frames in Java. | http://www.examsmyantra.com/article/68/java/basics-of-awt-and-frame-based-applications-in-java | CC-MAIN-2018-51 | refinedweb | 915 | 63.9 |
Ticket #17316 (closed defect: fixed)
Wrong instruction after single-step exception with 'rdtsc' -> fixed in 6.0
Description (last modified by janitor) (diff)
There was bug 5 years ago (#10947) and was fixed, but in current release still appears. Here slightly modified code with looping 1000000 times around RDTSC call with charged TF. If at least one call does not work correctly, a corresponding message is displayed:
.586 .model flat, stdcall option casemap :none ; case sensitive include \masm32\include\windows.inc include \masm32\include\kernel32.inc include \masm32\include\user32.inc includelib \masm32\lib\kernel32.lib includelib \masm32\lib\user32.lib .data Flag dd 0 Address dd 0 Counter dd 0 szRight db 'Flag Value is right!, address = 0x%lx, counter = %ld',0 szWrong db 'Flag Value is wrong!, address = 0x%lx, counter = %ld',0 szMessage db 256 dup(0) szInfo db 'Info:' .code start: assume fs: nothing test_loop: call @MyCode mov ecx, dword ptr [esp+0Ch] mov ecx, dword ptr [ecx+0B8h] ;;Ecx = Seh.eip mov Address, ecx .if ecx == offset @WrongExceptionEip mov Flag,0 .else mov Flag,1 .endif xor eax, eax retn @MyCode: push dword ptr fs:[0] mov dword ptr fs:[0], esp push 397h ;;Set Eflags popfd rdtsc @RightExceptionEip: ;;Normally,Seh.eip should be pointed here nop @WrongExceptionEip: ;;In Guest system,('Without' VT-X/AMD-V),Seh.eip is pointed here.But 'With' VT-X/AMD-V,Seh.eip is right. cmp Flag, 1 jnz flag_wrong pop eax pop fs:[0] inc Counter cmp Counter, 1000000 jnz test_loop invoke wsprintf,offset szMessage, offset szRight, Address, Counter jmp exit flag_wrong: invoke wsprintf,offset szMessage, offset szWrong, Address, Counter exit: invoke MessageBoxA,0,offset szMessage,offset szInfo,MB_OK invoke ExitProcess,0 end start
(compiled sample attached rdtsc.exe)
For example, in the real world, this misbehavior is used by the vmprotect to detect a virtual machine. I hope there is no good program crashing because of this misbehavior...
Attachments
Change History
comment:3 Changed 2 years ago by michaln
Please provide a VBox.log from a VM showing the problem. It would not hurt to specify what "any" Windows OS is either. Windows 3.1? Windows 95? Windows 10 64-bit?
Changed 2 years ago by gim
- attachment VirtualBox_IE11 - Win7_1_14_03_2018_09_53_38.png
added
comment:4 Changed 2 years ago by gim
I've attached VBox.log and proof screenshot. But I believe that you could not find any usefully info inside VBox.log without enabling R0-logging or at least some VBOX RELEASE LOGGING flags. The problem probably lies somewhere deeply in VMM.
About OSes. We can confirm for Linux/Windows hosts with Windows XP, Windows 7 and Windows 10 guests for sure with latest VirtualBox 5.2.8. For others OSes we can't confirm, but you can check by self, I think it will reproduce.
Changed 21 months ago by gim
- attachment pending_fix.patch
added
comment:9 Changed 21 months ago by gim
Well, we have been able to understand a little essence of the problem. The problem happens only when between "rdtsc vmexit handler" and "single step exception handler" on this instruction one more another interrupt was received (pended). We think this is because BS flag in VMCS clears during processing these last interrupt (probably because of some "27.3.4 Intel 64 and IA-32 Architectures Software Developers Manual Volume 3C: Part3" reasons) and single step exception will occur only on next instruction (so one instruction was skipped).
We’ve added single step interruption event pending and this solve the problem. Accordingly 32.2.1 "Intel 64 and IA-32 Architectures Software Developers Manual Volume 3C: Part3" it is possible to do so.
pending_fix.patch with this code attached
comment:10 Changed 20 months ago by ramshankar
Thanks for the patch and the testcases.
Unfortunately, your patch does not take into account the case where we single-step using EFLAGS.TF using the hypervisor debugger.
Also, the problem was not about an interrupt pending with RDTSC, it was simply RDTSC interception (which is dynamic) combined with single-stepping in the guest. When RDTSC was not intercepted by VirtualBox, it should work 100% of the time. Without Hyper-V/KVM paravirtualization interface in effect, we would be intercepting RDTSCs occasionally and only in those instances would it hit this issue.
I've implemented a more comprehensive fix which also makes your testcase work. The fix will be included in the upcoming release of VirtualBox 6.0.
I will try provide a test build shortly for you early next week.
In the mean time if you'd like the patch (probably only applies cleanly to the very latest VirtualBox OSE trunk and not 5.2.x branch), I've attached it to this ticket (vmx_singlestep_001.diff).
Changed 20 months ago by ramshankar
- attachment vmx_singlestep_001.diff
added
Single-step patch to VirtualBox r127059
comment:11 Changed 20 months ago by michael
- Status changed from new to closed
- Resolution set to fixed
- Summary changed from Wrong instruction after single-step exception with 'rdtsc' to Wrong instruction after single-step exception with 'rdtsc' -> fixed in 6.0
compiled asm code | https://www.virtualbox.org/ticket/17316 | CC-MAIN-2020-34 | refinedweb | 851 | 66.44 |
Sentiment analysis using Keras in Python
Hey folks! In this blog let us learn about “Sentiment analysis using Keras” along with little of NLP. We will learn how to build a sentiment analysis model that can classify a given review into positive or negative or neutral.
To start with, let us import the necessary Python libraries and the data. We can download the amazon review data from
import numpy as np import pandas as pd from keras.models import Sequential from keras.layers import Dense import matplotlib.pyplot as plt import seaborn as sns df = pd.read_csv("C:/Users/username/Downloads/sentiment labelled sentences/amazon_cells_labelled.csv") df.head(2)
Let us see how the data looks like.
Review Sentiment Sentiment1 Unnamed:3 Unnamed:4 Unnamed: 5 0 So there is no way for me to plug it in here i... 0 NaN NaN NaN NaN 1 Good case Excellent value.1 NaN NaN NaN NaN
Here we can observe that the data is irregularly distributed across the columns. Now our motive is to clean the data and separate the reviews and sentiments into two columns. Let us see how to do it!
Data preparation
Now let us combine the various sentiment values that are distributed across the unnamed columns. Let us use the “combine_first” function because it will combine the numbers and leaves the NaN values. Also, let us drop the unnamed columns because the useful data is already transferred to the “Sentiment 1” column.
df['Sentiment1'].combine_first(df['Unnamed: 3']) df['Sentiment1'].combine_first(df['Unnamed: 4']) df['Sentiment1'].combine_first(df['Unnamed: 5']) df = df.drop(columns = ["Unnamed: 3", "Unnamed: 4" ,"Unnamed: 5"])
Now let us concatenate the reviews in other columns to the “Review” column. Later let us put all the sentiment values in “Sentiment1” column. Let us use combine_first() because it leaves the unwanted strings and NaN.
df["Review"] = df['Review'] + df['Sentiment'] df["Sentiment 1"] = df['Sentiment 1'].combine_first(df['Sentiment']) df.head(2)
The output will be like:
Review Sentiment Sentiment 1 0 So there is no way for me to plug it in here i... 0 0 1 Good case Excellent value. Excellent value. 1
Now that we have classified the sentiment labels in “Sentiment 1” column and the corresponding reviews in “Review” column. So let’s drop the remaining unwanted columns.
df.drop(columns = "Sentiment", inplace = True) df.rename(columns={"Sentiment 1": "Sentiment"},inplace = True) df = df.dropna()
There might be some strings in the “Sentiment” column and there might be some numbers in the “Review” column. Let us write two functions to make our data suitable for processing.
Creating bag of words
Let us write the first function to eliminate the strings in the “Sentiment” column.
def Sentiment_process(sent): noalpha = [] char = [] for char in sent: if char!="0" and char!="1": noalpha.append(np.NaN) continue else: noalpha.append(char) continue return(noalpha)
Explanation:
If the character in the review is not a number (either 0 or 1), it is replaced with NaN, so that it will be easy for us to eliminate them. If it is 0 or 1, the number is appended as such.
df["Sentiment"] = Sentiment_process(list(df["Sentiment"])) df = df.dropna()
Now we only have numbers in the “Sentiment” column.
Let us write the second function to eliminate the special characters, stopwords and numbers in the “Review” column and put them into a bag of words. We will eliminate the numbers first, and then we will remove the stopwords like “the”, “a” which won’t affect the sentiment.
import nltk from nltk.corpus import stopwords import string def text_processing(text): nopunc = [] for char in text: if char not in string.punctuation: if char!=str("0") and char!=str("1"): nopunc.append(char) nopunc = ''.join(nopunc) return [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
Let us call the above function.We will first remove the numbers and then apply the text processing.
df["Review"] = df['Review'].str.replace('\d+', '') df["BagOfWords"] = df["Review"].apply(text_processing)
Now let us see how the data looks like:
df.loc[51:53]
Output:
Review Sentiment BagOfWords 51 good protection and does not make phone too bu... 1 [good, protection, make, phone, bulky] 52 A usable keyboard actually turns a PDA into a ... 1 [usable, keyboard, actually, turns, PDA, realw...
Building the model
Let us define x and y to fit into the model and do the train and test split.
x = df["BagOfWords"] df["Sentiment"] = df["Sentiment"].astype(str).astype(int) y = df["Sentiment"] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=1)
Now let us tokenize the words. That is, we are going to change the words into numbers so that it will be compatible to feed into the model.
We will consider only the top 5000 words after tokenization. Let us convert the X_train values into tokens to convert the words into corresponding indices and store back to X_train. Similarly, we will tokenize X_test values.
from keras.preprocessing.text import Tokenizer from keras.preprocessing.text import text_to_word_sequence tokenizer = Tokenizer(num_words=5000) tokenizer.fit_on_texts(X_train) X_train = tokenizer.texts_to_sequences(X_train) X_test = tokenizer.texts_to_sequences(X_test)
Let us truncate the reviews to make all the reviews to be equal in length. If the reviews are less than the length, it will be padded with empty values. But if the reviews are longer than the desired length, it will be cut short.
from keras.preprocessing import sequence maxlen = 50 # Making the train and test statements to be of size 50 by truncating or padding accordingly X_train = sequence.pad_sequences(X_train, padding='post', maxlen=maxlen) X_test = sequence.pad_sequences(X_test, padding='post', maxlen=maxlen)
Now let us build the keras model.
from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import Dense, Embedding, GlobalAveragePooling1D model = Sequential([Embedding(10000, 17), GlobalAveragePooling1D(), Dense(17,activation = "relu"), Dense(12,activation = "relu"), Dense(1,activation = "sigmoid")]) model.compile( loss = "binary_crossentropy", optimizer = "adam", metrics = ["accuracy"]) model.summary()
Training and evaluation
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, verbose = 1)
loss, accuracy = model.evaluate(X_test, y_test) print("Accuracy is : ",accuracy*100)
Output:
Accuracy is : 85.77847814559937
We see that we have achieved a good accuracy.
Now let us test it with a review.
sample = "The product was very good and satisfying." sample = text_processing(sample) sample
Output:
['product', 'good', 'satisfying']
Let us perform all the preprocessing required.
sample = tokenizer.texts_to_sequences(sample) sample simple_list = [] for sublist in sample: for item in sublist: simple_list.append(item) simple_list = [simple_list] sample_review = sequence.pad_sequences(simple_list, padding='post', maxlen=maxlen)
Each and every word in the review will be a separate list and there will be sublists. We have made it into a single simple list so as to predict the sentiment properly.
ans = model.predict(sample_review) ans
Output:
array([[0.8325547]], dtype=float32)
Let us see if this is positive or negative.
if (0.4 <= ans <= 0.6): print("The review is not too good nor too bad") if(ans>0.6): print("The review is positive") elif(ans<0.4): print("The review is negative")
Output:
The review is positive
Hurray! We have predicted the sentiment of any given review. That is all about “Sentiment analysis using Keras”. We have learnt how to properly process the data and feed it into the model to predict the sentiment and get good results.
THANK YOU | https://valueml.com/sentiment-analysis-using-keras/ | CC-MAIN-2021-25 | refinedweb | 1,232 | 52.15 |
Implements parts of:
Depends on D92214
Fixes a GCC shadow warning.
You should mark tests that use format_error with // XFAIL: with_system_cxx_lib=macosx10.9|...|15.
Couldn't we add at least a debug check to verify "Preconditions: end() is reachable from it."?
Why is_constant_evaluated? Does it correspond to the phrase "Remarks: Call expressions where id >= num_args_ are not core constant expressions ([expr.const])."?
Could you explain how it should be understood?
Why not enum class?
Please add a test for the ctor being explicit (using test_convertible for instance).
I think you should guard it on __cpp_char8_t.
Thanks for the review!
I'll add the XFAILs to the next version of the patch.
I had a look at the debug iterators and adding this test seems not trivial. So I think it's not worth the effort to add it here.
Since doesn't specify this exception to be thrown I read this that the test only should be done when std::is_constant_evaluated is true. In that case the function should no longer be a constexpr function. So I use the throw to achieve this. I'm not sure why the exception just shouldn't be thrown unconditionally, I hope to figure that out when I'm a bit further with the implementation.
I'll add some comment in the next revision of the patch.
I didn't add it since the enum is only used internally in the class so I feel the class doesn't add to much benefits and doesn't require it.
If you feel strongly about it I don't mind changing it.
I thought it wouldn't be required, but I'll add _LIBCPP_NO_HAS_CHAR8_T and _LIBCPP_HAS_NO_UNICODE_CHARS guards.
Addresses review comments.
I haven't finished reviewing. Will do soon.
Don't bother. No strong feelings about this :).
I see that it isn't checked that __next_arg_id_ < __num_args.
fmtlib has the following comment ():
// Don't check if the argument id is valid to avoid overhead and because it
// will be checked during formatting anyway.
I don't know what's the libc++'s policy to use assertions, but for me it's a good place to put a _LIBCPP_ASSERT.
Same as above, I'd expect at least an assertion in debug mode (in non-constexpr context).
I don't see why you can't test the first part of test_exception in constexpr context. Changing return type to bool and returning false/true where appropriate should be enough to make it work.
Obviously, the part with test_arg can't be tested there.
As above, but here you can test everything in constexpr context.
FYI, your implementation seems to be exactly what the author meant (std branch is missing in the main fmt repo):
I didn't add a _LIBCPP_ASSERT since it indeed can be tested during formatting. If the index is out of bounds basic_format_args::get will return a default created basic_format_arg () This means the object has a std::monostate as value. In my WIP code I have a formatter for a std::monostate which will throw an exception.
I think it's better to throw an exception to inform the user about errors and I think it's not good to change that behaviour with debug macros. Note for other parts in my WIP code I use _LIBCPP_ASSERT to validate the expected state, but if they trigger it means the library code isn't robust enough.
@ldionne What do you think about the usage of _LIBCPP_ASSERT ?
I'm not entirely sure what you mean. The line context.next_arg_id(); will call __format::__throw_error which is not a constexpr function, turning it into one will fail with the following diagnostic error: constexpr function never produces a constant expression [-Winvalid-constexpr]. Can you explain what change you think should work.
I made the separate function test_exception for the parts which can't be tested as constexpr.
Apart from the question whether adding assertion fits here, this LGTM.
You're completely right. I must have been confused when writing this:).
Please ignore the above too.
Replace a std:: with _VSTD::
Update the synopsis after the changes in D92214.
Ran clang-format.
Rebased.
Fix build breakage due to clang-format. In C++03 mode it breaks string prefixes like u8"foo".
This LGTM, but I would prefer if we stuck to the current convention and used std::__throw_format_error instead. Not a blocking comment.
In D93166#2528368, @ldionne wrote:
This LGTM, but I would prefer if we stuck to the current convention and used std::__throw_format_error instead. Not a blocking comment.
Thanks for the review! My diversion of the convention was up for discussion ;-) So I'll switch back to the convention and use std::__throw_format_error. I'll commit after it passes CI.
Use __throw_format_error instead of __format::__throw__error.
Rebase to trigger CI.
Reverted in 68f66f37d7d7 because of the build break mentioned inline, let me know if you need help to reproduce!
We have a build breakage on bootstrapping clang here:
In file included from /var/lib/buildkite-agent/builds/buildkite-69fdf6c495-wt2bd-1/mlir/mlir-core/libcxx/src/format.cpp:9:
/tmp/ci-nGNyLRM9V3/include/c++/v1/format:153:16: error: no member named 'is_constant_evaluated' in namespace 'std::__1'
if (_VSTD::is_constant_evaluated() && __id >= __num_args_)
~~~~~~~^
1 error generated.
See
Some additional information that might help: looks like std:: is_constant_evaluated is supported since Clang 9:
In D93166#2534776, @antiagainst wrote:
Some additional information that might help: looks like std:: is_constant_evaluated is supported since Clang 9:
Thanks there's already a library macro to test whether it's available _LIBCPP_HAS_NO_BUILTIN_IS_CONSTANT_EVALUATED. But the we don't test with old compilers in C++20 mode.
This should fix building with clang-8 which doesn't support
std::is_constant_evaluated().
@mehdi_amini Can you test whether this fixes the MLIR build?
Use concept support to disable <format> on older compilers. Note this is intended to be a temporary solution until libc++ requires a modern compiler.
Rebased.
I still see an error unfortunately:
In file included from /var/lib/buildkite-agent/builds/buildkite-69fdf6c495-8wxf7-1/mlir/mlir-core/libcxx/src/format.cpp:9:
/tmp/ci-6kGTAaqbDL/include/c++/v1/format:159:16: error: no member named 'is_constant_evaluated' in namespace 'std::__1'
if (_VSTD::is_constant_evaluated() && __id >= __num_args_)
~~~~~~~^
Actually I see that you pushed a follow up fix? Kicked another build here:
Fails differently with the forward fix:
libcxx/src/format.cpp:15:1: error: use of undeclared identifier 'format_error'; did you mean 'domain_error'?
format_error::~format_error() noexcept = default;
^~~~~~~~~~~~
domain_error
/tmp/ci-yIcukneKik/include/c++/v1/stdexcept:122:29: note: 'domain_error' declared here
class _LIBCPP_EXCEPTION_ABI domain_error
^
/var/lib/buildkite-agent/builds/buildkite-69fdf6c495-8wxf7-1/mlir/mlir-core/libcxx/src/format.cpp:15:16: error: expected the class name after '~' to name a destructor
format_error::~format_error() noexcept = default;
^~~~~~~~~~~~
domain_error
And one more build with the most recent fix:
It seems modifying the header and building libc++ doesn't rebuild the library. So it seemed to work for me. After I touched format.cpp I could reproduce the issue. Pushed another fix that works for me locally, but I'll keep an eye on the MLIR build server.
It passed the host phase :)
The MLIR build is now green :-)
Thanks for checking on it! :) | https://reviews.llvm.org/D93166?id=323041 | CC-MAIN-2021-17 | refinedweb | 1,199 | 57.98 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.