A MOSFET is a voltage-driven switch that controls the flow of current in an electronic circuit. The devices are made from a doped semiconductor material. Unlike electromagnetic power-control devices such as relays, MOSFETs have a very small form factor and no moving parts, which means they can switch much faster. As shown in Figure 4, the device has three basic external connections: the source, the drain, and the gate. In a typical low-side switching arrangement, the source is connected to ground, the drain is connected to the load, and the MOSFET switches ON when a positive voltage is applied to the gate.
Flash Memory Cell
The charge on the floating gate determines the flow of current from the source to the drain. The floating gate can be neutral, positively charged, or negatively charged. If the floating gate is neutral, the storage transistor behaves like a normal MOSFET: a positive voltage on the control gate creates a conducting channel in the p-substrate and current flows from the source to the drain. A negative charge on the floating gate, however, prevents the formation of a channel in the p-substrate.
Another important parameter is the threshold voltage. This is the minimum voltage at the control gate which can make the channel conductive.
Operations which can be performed on the flash memory cell include programming the cell and erasing the cell. When we program a Flash memory cell, what we are physically doing is placing electrons into the floating gate. On the other hand, when we remove the charge from the floating gate, we are essentially erasing the memory cell. However, the detailed process of trapping or removing electrons from the floating gate is beyond the scope of this article.
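The program, erase, and read behavior can be pictured with a toy software model. This is a simplification for illustration only, not real device physics, and the voltage values below are invented:

```python
# Toy model of a single flash cell (illustration only, not real device physics).
# Programming traps electrons on the floating gate, which raises the voltage
# the control gate needs to form a channel (the threshold voltage).

ERASED_VT = 1.0      # threshold when the floating gate holds no charge (stores "1")
PROGRAMMED_VT = 5.0  # threshold when electrons are trapped (stores "0")
READ_VOLTAGE = 3.0   # reference voltage applied between the two thresholds

class FlashCell:
    def __init__(self):
        self.threshold = ERASED_VT   # cells start in the erased state

    def program(self):
        # trap electrons on the floating gate
        self.threshold = PROGRAMMED_VT

    def erase(self):
        # remove the trapped charge
        self.threshold = ERASED_VT

    def read(self):
        # the channel conducts only if the read voltage exceeds the threshold;
        # a conducting cell is read as 1, a non-conducting one as 0
        return 1 if READ_VOLTAGE > self.threshold else 0

cell = FlashCell()
print(cell.read())   # erased cell reads 1
cell.program()
print(cell.read())   # programmed cell reads 0
```

The point of the model is that the stored bit is never read directly; it is inferred from whether the cell conducts at the reference voltage between the two thresholds.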
Arduino Flash Memory
Flash memory, also known as program memory, is where the Arduino stores and runs the sketch. Since flash memory is non-volatile, the sketch is retained when the microcontroller is power cycled. However, once the sketch starts running, the data in flash memory can no longer be changed; modification is only possible on data that has been copied into SRAM.
The table below shows the amount of flash memory available on some different Arduino boards:
Arduino EEPROM
In some instances, we may need to store the states of certain input and output devices on the Arduino for long periods. For that, we save the data to EEPROM memory with the help of Arduino libraries or third-party EEPROM libraries. This helps us to remember the information when we power up the Arduino again. Most of the Arduino boards have built-in EEPROM memory, but in some cases, certain programs may require the use of an external EEPROM. The functions below help us to interact with the Arduino EEPROM.
#include <EEPROM.h>

EEPROM.write(address, value);
EEPROM.read(address);
EEPROM.update(address, value);
EEPROM.get(address, data);
EEPROM.put(address, value);
To update or write to EEPROM, we need the address to write to and the value to write. The read function accepts the address to read from and returns the value stored at that address. The get() and put() functions operate just like read() and write() respectively, except that they let us store and retrieve other data types such as floats, structs, or multi-byte integers.
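One practical note on update(): it only performs a physical write when the stored value actually differs, which matters because EEPROM cells endure a limited number of write cycles. A rough Python sketch of that behavior (a hypothetical FakeEEPROM class for illustration, not the Arduino library):

```python
# Sketch of EEPROM write vs. update semantics (hypothetical, not the Arduino API).
# update() skips the physical write when the byte is unchanged, saving wear,
# since EEPROM cells survive only a limited number of write cycles.

class FakeEEPROM:
    def __init__(self, size=1024):
        self.cells = [0] * size
        self.write_count = 0     # tracks physical writes, i.e. wear

    def write(self, address, value):
        self.cells[address] = value
        self.write_count += 1    # always costs a write cycle

    def update(self, address, value):
        if self.cells[address] != value:
            self.write(address, value)   # only write when the value changed

    def read(self, address):
        return self.cells[address]

rom = FakeEEPROM()
rom.write(0, 42)
rom.update(0, 42)   # same value: no physical write
rom.update(0, 7)    # changed value: one physical write
print(rom.read(0), rom.write_count)   # 7 2
```

This is why update() is usually preferred over write() in loops that may repeatedly store the same value.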
Control Your DSLR Camera with an Arduino
September 13, 2018 | https://www.circuitbasics.com/types-of-memory-on-the-arduino/
Opened 7 years ago
Closed 7 years ago
#2825 closed enhancement (fixed)
Add ability to add extra test suites
Description
I have objects outside 'models.py' that have document strings that I want to use in doctests.
The default TEST_RUNNER has no way of adding them to the tests.
One way to fix this is to have a function in each module's 'tests.py' file that returns a test suite that is added to the other tests.
The test runner has to call this function (if it exists) and add the returned suite to the other tests. This would be done in the 'build_suite' function, right after adding doctests in 'tests.py'.
Example of function:
def test_suite():
    import extra_module
    suite = unittest.TestSuite()
    suite.addTest(doctest.DocTestSuite(extra_module))
    return suite
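A sketch of how the proposed hook might look inside a runner's suite-building step. The function and module names here are assumptions for illustration, not actual Django test-runner code:

```python
# Sketch of the proposed hook: after collecting the usual tests, the runner
# asks each app's tests module for an optional test_suite() and, if present,
# adds the returned suite. Names are assumptions, not actual Django code.
import types
import unittest

def build_suite(tests_module):
    suite = unittest.TestSuite()
    # ... doctests from models.py and tests.py would be added here ...
    extra = getattr(tests_module, 'test_suite', None)
    if callable(extra):
        suite.addTest(extra())      # merge the app-provided suite
    return suite

# A stand-in for an app's tests.py defining the optional hook:
fake_tests = types.ModuleType('tests')

class ExtraTests(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)

def _test_suite():
    return unittest.TestLoader().loadTestsFromTestCase(ExtraTests)

fake_tests.test_suite = _test_suite

suite = build_suite(fake_tests)
print(suite.countTestCases())   # 1
```

The key design point is that the hook is optional: modules without a test_suite() function are unaffected.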
Attachments (0)
Change History (4)
comment:1 Changed 7 years ago by adrian
- Summary changed from Ability to add extra test suites wanted. to Add ability to add extra test suites
comment:2 Changed 7 years ago by Simon G. <dev@…>
- Triage Stage changed from Unreviewed to Design decision needed
comment:3 Changed 7 years ago by Gary Wilson <gary.wilson@…>
- Cc gary.wilson@… added
comment:4 Changed 7 years ago by russellm
- Resolution set to fixed
- Status changed from new to closed
Changed summary to match our style. | https://code.djangoproject.com/ticket/2825 | CC-MAIN-2014-10 | refinedweb | 219 | 72.46 |
pipeline
Easily transform any stream into a queue of middleware that processes each object in the stream.
Usage
First, let's look at the middleware.
class PrependLineNumber implements Middleware<String> {
  int _number = 0;

  Future<String> pipe(String line) async {
    _number++;
    return '$_number: $line';
  }

  Future close() async {
    // Tear down method
    // Silly example
    _number = -1;
  }
}
The Middleware interface contains Future<T> pipe(T item) and Future close(). The return value of the pipe method is sent to the next middleware in the pipeline. The return value can be either a value or a Future; the pipeline will wait for it before sending it through to the next middleware.

If the return value is null or Future<null>, the item is dropped from the pipeline and will never reach the next middleware. This is useful for buffering data up to a specific point and then releasing it through to the next middleware.
This is a middleware that accepts data from a file stream, but only passes forward every line as it is processed.
class ReadLine implements Middleware<int> {
  String buffer = '';

  Future<String> pipe(int unit) async {
    String character = new String.fromCharCode(unit);

    // If the character isn't a newline, remove this item from the pipeline
    if (character != '\n') {
      buffer += character;
      return null;
    }

    String line = buffer;
    buffer = '';
    return line;
  }

  Future close() async {}
}
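The drop-on-null behavior is the key idea. As a language-neutral illustration, here is a rough Python analogue of the ReadLine middleware, where returning None stands in for Dart's null and removes the item from the pipeline (a sketch for comparison, not part of this package):

```python
# Python analogue of the ReadLine middleware (a sketch, not the Dart package):
# each middleware's return value feeds the next one, and returning None drops
# the item from the pipeline entirely.

class ReadLine:
    def __init__(self):
        self.buffer = ''

    def pipe(self, unit):
        char = chr(unit)
        if char != '\n':
            self.buffer += char
            return None          # swallow the item; no line is ready yet
        line, self.buffer = self.buffer, ''
        return line              # release the buffered line downstream

def run_pipeline(stream, middleware):
    out = []
    for item in stream:
        for mw in middleware:
            item = mw.pipe(item)
            if item is None:     # dropped: skip the remaining middleware
                break
        else:
            out.append(item)
    return out

data = [ord(c) for c in 'ham\neggs\n']
print(run_pipeline(data, [ReadLine()]))   # ['ham', 'eggs']
```

Notice that individual characters never reach the output: the middleware buffers them and only complete lines flow downstream.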
Pipeline
To actually use these middleware, we need a stream of char codes. In this case, we fake it a bit to prove a point.
Anyway, we can either create a
Pipeline object with the char stream, or we can pipe the stream to a pipeline object.
The pipeline itself is a stream, so we can return the pipeline and allow other parts of the program to listen to it.
Future<Pipeline<String>> everyLineNumbered(File file) async {
  Stream<int> stream = new Stream.fromIterable(await file.readAsBytes());

  Pipeline<String> pipeline = new Pipeline(middleware: [
    new ReadLine(),
    new PrependLineNumber(),
  ]);

  stream.pipe(pipeline);

  return pipeline;
}
In this case, it might be nice to refactor into the
Pipeline.fromStream constructor, like so:
Future<Pipeline<String>> everyLineNumbered(File file) async =>
    new Pipeline.fromStream(
      new Stream.fromIterable(await file.readAsBytes()),
      middleware: [
        new ReadLine(),
        new PrependLineNumber(),
      ],
    );
Use with HttpServer
A good use case for the pipeline is when you're setting up an
HttpServer. That could look something like this:
import 'dart:io';
import 'package:pipeline/pipeline.dart';

main() async {
  HttpServer server = await HttpServer.bind('localhost', 1337);

  Pipeline<HttpRequest> pipeline = new Pipeline.fromStream(server, middleware: [
    new CsrfVerifier(), // Middleware<HttpRequest> that protects against CSRF by comparing some tokens.
    new HttpHandler(), // A handler that writes to the response object
  ]);

  await for (HttpRequest request in pipeline) {
    // Every response should be closed in the end
    request.response.close();
  }
}
TODO
- Write tests | https://www.dartdocs.org/documentation/pipeline/1.0.4/index.html | CC-MAIN-2017-26 | refinedweb | 446 | 56.45 |
by Justice Mba
How to test a Socket.io-client app using Jest and the react-testing-library
Testing the quality of real-time Socket.io-client integration seems to have sunk into oblivion, maybe because the UIs had a long history of testability issues. Let’s fix this!
Quickly google “testing socket.io app”.
The first two result pages (just don’t bother opening the rest of the pages) are all examples and tutorials focusing on testing the server-side socket.io integration. No one is talking about the quality of socket.io-client integration on the front-end, how the User Interface will look when it receives certain events, and if the front-end code is actually emitting the right events.
But why? Does this just mean that people don’t really care about the quality of their real-time apps on the front-end — the meat of the software? I don’t think so. My guess is: Testing UIs was just too hard!
User Interfaces have had a long history of testability issues. UIs are never stable. The testing tools we have had available to us easily lead to writing very brittle UI tests. Thus, people tend to focus their time and energy on testing their socket.io apps only on the server-side.
But that doesn’t feel right. It is only the UI that makes our user confident that they’re actually accomplishing the purpose of using our app. But then, unto us a UI testing tool has been born!
react-testing-library
It was a few months ago that my friend and mentor Kent C. Dodds released this beautiful tool for testing react apps. Ever since then, I no longer just love the idea of testing UIs, but actually love testing them. I have literally dug out and tested all the UI code I gave up on testing because of its complexity :).
In my experience-based opinion, the react-testing-library is the panacea for all UI test issues. It is not just a testing tool, it is a testing approach.
Note: If your’re not a React person, there is vue-testing-library, ng-testing-library and others, all built on top of the dom-testing-library.
The best feature of the react-testing-library is probably its support of UI TDD. According to the docs, its primary guiding principle is:
The more your tests resemble the way your software is used, the more confidence they can give you.
This is the “approach” I’m talking about. Test your UIs just as your non-techie friend would. Your user probably neither knows nor cares what your code looks like. And nor should your test. That gives us the power to use TDD on our UIs.
This is how we’re going to write our socket.io-client test — test everything without thinking about the code. Now let’s do it!
Testing out the Telegram app
From our very talented Telegram UI designer, below are the designs of the Telegram app we'll be testing.
Looking at the design, I see a couple of real-time features our user would want to make sure the app performs, otherwise they’ll close the tab. Here are some of them:
- App should get messages
- App should tell when/if a message is sent or not
- App should tell when/if a message is delivered or not
- App should tell when a friend comes online/goes offline
- App should tell when a friend is typing
Okay, the list goes on…but let’s work on these first.
Receiving messages
Let’s look at how a user would know if they received a message as an example. First, create a test file, then import the chat.js file and its mocked dependencies. If you’re new to mocking or stuff like that, then Kent C. Dodds should really be your friend. He’s got everything covered on JavaScript testing, so just follow him on here, Twitter, and everywhere else.
Now as I was writing this line, I was thinking he should just write a book on JS testing, so I tweeted as much. And hopefully, he will eventually :)
Back to our test file:
// chat.test.js
import React from 'react';
import io from 'socket.io-client';

import Chat from './chat';
Because, we’re only doing integration testing here, we don’t really want to emit socket.io events to the server. So we need to mock out socket.io-client. For more information on mocking, see Kent’s article “But really, what is a JavaScript mock?” as well as this section from the Jest docs on Jest’s Mock Functions.
Once you understand how to mock, the next thing is understanding what your module is doing, and then fake the implementation.
// socket.io-client.js

let EVENTS = {};

function emit(event, ...args) {
  EVENTS[event].forEach(func => func(...args));
}

const socket = {
  on(event, func) {
    if (EVENTS[event]) {
      return EVENTS[event].push(func);
    }
    EVENTS[event] = [func];
  },
  emit
};

export const io = {
  connect() {
    return socket;
  }
};

// Additional helpers, not included in the real socket.io-client, just for our test.

// to emulate server emit.
export const serverSocket = { emit };

// cleanup helper
export function cleanup() {
  EVENTS = {};
}

export default io;
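At its core, the mock is just an event registry: on() stores handlers per event name and emit() invokes them. For comparison, here is the same shape in a few lines of Python (a sketch, not part of socket.io or this article's code):

```python
# The mock above is just an event registry. The same shape in Python
# (a sketch for comparison, not socket.io itself).
EVENTS = {}

def on(event, func):
    EVENTS.setdefault(event, []).append(func)   # register a handler

def emit(event, *args):
    for func in EVENTS.get(event, []):          # fire every registered handler
        func(*args)

received = []
on('message', lambda text: received.append(text))
emit('message', 'Hey Wizy!')       # "server" pushes a message
print(received)                    # ['Hey Wizy!']
```

Because client and "server" share one registry, emitting from the test stands in for the server emitting over the network.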
With that, we have a good-enough socket.io-client mock for our test. Let’s use it.
// chat.test.js
import React from 'react';
import io, { serverSocket, cleanup } from 'socket.io-client';

import Chat from './chat';
Now let’s write our first test. The traditional TDD approach says we’ll write a test for a feature, see it fail, then go implement the feature to satisfy our test. For brevity, we’re not going to do exactly that, as this article focuses on testing.
Following the react-testing-library approach, the first thing you do before you write any test is to ask yourself: “How will a user test this feature?” For the first test in our list above, you ask yourself, “how will a user know that they’re getting the messages their friend is sending?”. To test it, they’ll probably tell the person next to them to send them a message.
Usually, how that works is that the user's friend sends a message to the server, with the user's address, and the server then emits the message to the user. Now, since we're not testing whether the user can send a message at this point, but whether the user can receive one, let's have the socket.io server directly send the user a message.
// chat.test.js
import React from 'react';
import io, { serverSocket, cleanup } from 'socket.io-client';
import { render } from 'react-testing-library';

import Chat from './chat';

test('App should get messages', () => {
  // first render the app
  const utils = render(<Chat />);
  // then send a message
  serverSocket.emit('message', 'Hey Wizy!');
});
Above we imported the render method from the react-testing-library, which is just a wrapper around ReactDOM.render. In our test, we use it to render our Chat app. The render method returns a test utility object containing query methods we can use to query the container of our app (the DOM node that render rendered our app into) for the DOM nodes our test is interested in. Next in the test, we use our mock socket.io server to send a message to the user.
Now that we’ve sent a message to the user, think again: how will the user know they’ve gotten the message? From the design above, they’ll definitely have to look at the screen to see the message appear. So to test that, we have to query the container of our app to see if it has any node that contains the message we sent, ‘Hey Wizy!’ To do that, the utility object returned from
render has a query method called
getByText, so we could simply do:
expect(utils.getByText('Hey Wizy!')).toBeTruthy();
While that might work, unfortunately, we can’t do that. Here’s why: All query methods returned from
render will search the entire container for the specified query. That means that
getByText, as used above, will search the entire container for the text ‘Hey Wizy!’, then returns the first node that has that text.
But that’s not how our user will look for the text. Instead, our user will only look within the ‘messages-section’, the section that contains all the messages. Only if messages appear in that section will they know they’ve got a message. So to make sure our test resembles how the user is using our app, we’ll need to search for the text ‘Hey Wizy!’ only within the messages-section, just as the user would do.
For that, the react-testing-library provides us with a unique query method called within, which helps us focus our query within a particular section of the rendered document. Let's use it!
Note:
within is a new API that was inspired by this article, so make sure you have the very latest version of the react-testing-library.
// chat.test.js
import React from 'react';
import io, { serverSocket, cleanup } from 'socket.io-client';
import { render, within } from 'react-testing-library';

import Chat from './chat';

test('App should get messages', () => {
  // first render the app
  const utils = render(<Chat />);
  // then send a message
  serverSocket.emit('message', 'Hey Wizy!');
  // the message must appear in the message-section
  const messageSection = utils.getByTestId('message-section');
  // check within messageSection to find the received message
  const message = within(messageSection).getByText('Hey Wizy!');
});
First, we grabbed the message section with a query method
getByTestId. To use
getByTestId in your test, you have to hard-code it in the DOM. Like so:
<div data-testid=”message-section” />
Because getByTestId does not closely resemble how users locate sections of your app, you should use it only in special cases and only when you're certain there is no better alternative.
Still, our test is not relying on the DOM structure. Even if someone changes the
div to a
section tag or wraps it 10 levels deep in the DOM, our test doesn’t just care about the code — it just cares about the test-id.
Lastly, we use the
within method as described earlier to get the received message. If the text is not found,
getByText will throw and fail our test.
And that’s how we assert that the App can get messages.
Writing more tests
Let’s see some more query methods that the react-test-library gives us. We’ll see how we can further combine the APIs we’ve already learned to perform more complex queries without relying on the UI code.
So now, let’s write the second test: the App should tell the user when/if a message has been sent or not. Also, I think this test is basically doing the same thing as the next one in the list, so let’s merge both into one example.
Again, the first question we ask is…? I know you got it: “how will our user test this feature?” Okay, how you phrase your question might be different, but you get the idea :). So to test the sending message feature, the steps will look like this:
- The user locates the input to enter their message. Then they enter their message. Finally, they click the send button.
- The message should appear on the message-section
- The server will tell if the message got to the server, which means sent
- The UI should mark the message as sent
- The server then tells when the message is delivered
- The UI should, in turn, update the message as delivered
How does the user locate the input to enter their message? From the UI design we’re working with, they’ve gotta look and find the input with the placeholder ‘message’. (Well, that’s actually the only input on the screen, but even if there are more, the user will identify the input to enter their message by the placeholder or label.)
The react-testing-library has us covered again with a query method called
getByPlaceholderText
test('App should tell when a message is sent and delivered', () => {
  // first render the app
  const utils = renderIntoDocument(<Chat />);
  // enter and send a message
  utils.getByPlaceholderText('message').value = 'Hello';
  utils.getByTestId('send-btn').click();
});
So we introduced a couple of new APIs here. The first one is the
renderIntoDocument method. We should fire real DOM events, not simulate them, in our test, as that more closely resembles how users use our app.
The drawback is that the
render method creates and renders our app to an arbitrary DOM node, called
container, on the fly. But React handles events via event delegation — attaching a single event for all event types on the
document, and then delegating the event to the appropriate DOM node that triggered the event.
So, to fire real DOM events, we need to actually render our app into
document.body. That’s what
renderIntoDocument does for us.
Because we render into the document, we want to always make sure that the document is cleaned up after each test. You guessed right, the cleanup helper function does that for us.
In the test, after we enter the value, we click the send button to send our message. If you noticed, looking at the design, there is no send button. But if you pull out your Telegram or WhatsApp right now, you’ll notice that the send button only appears when you’ve actually entered some text in the message input. Our test has just accidentally covered that feature. :)
Now that we’ve clicked the send button, let’s make some assertions.
test('App should tell when a message is sent and delivered', () => {
  // first render the app
  const utils = renderIntoDocument(<Chat />);
  // enter and send a message
  utils.getByPlaceholderText('message').value = 'Hello';
  utils.getByTestId('send-btn').click();

  // the message should appear on the message section
  const messageSection = utils.getByTestId('message-section');
  expect(within(messageSection).getByText('Hello')).toBeTruthy();

  // server tells us the message is sent
  serverSocket.emit('message-sent');

  // Now the UI should mark the message as sent
  const message = within(messageSection).getByText('Hello');
  expect(within(message).getByTestId('sentIcon')).toBeTruthy();

  // server tells us it's delivered
  serverSocket.emit('message-delivered');

  // UI should mark the message as delivered
  expect(within(message).getByTestId('deliveredIcon')).toBeTruthy();
});
And that’s it. Just exactly as the user would expect, our test expects to see the sent/delivered icon appear next to the message when it’s sent/delivered.
So far, we’ve seen how easy testing a real-time socket.io-client app can be with the react-testing-library. No matter what you are testing, when you follow this approach, you gain more confidence that your app is working as it should. And what is more, we still have zero idea what the implementation of the app will look like. Just as the user, our test just doesn’t care about the implementation!
Finishing up
Lastly, I’ll leave it to you to think about how to write the last two remaining tests on our list:
- App should tell when a friend comes online/goes offline
- App should tell when a friend is typing
Tip: You should have the server socket.io emit the event, then you assert what the UI will look like. Think about how exactly the user will know when a friend is typing, online, offline.
If you feel like I’ve done a nice job, and that others deserve a chance to see this, kindly applaud this article to help spread a better approach of testing real-time socket.io-client apps.
If you have a question that hasn’t been answered or feel differently about some of the points here, feel free to drop in some comments here or via Twitter.
You might also want to follow me here and/or on Twitter for more awesome articles coming up. And you might like to check out my previous articles.
Binary search trees
Goals
- Learn how write code using binary search trees (a dynamic structure)
- Practice writing code that uses recursion
Overview
In this assignment, you will create a program to index files by the words that they contain, and then print all of the files containing a certain word. We provide all of the code you need for opening and reading the files.
Your job is to create a binary search tree (BST) of strings, where each node contains a word, and a linked list of the filenames it appeared in (and of course the left and right node addresses).
The basic structure is below. For example, the word “write” appears in the files a.txt and c.txt.
To be clear, you will not be writing any code to access files yourself. That is simply the premise of this assignment. We do provide some test code (indexer.c) that does access files, but you don't have to modify it in any way. indexer.c is described below.
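The assignment must be written in C, but the shape of the structure is easy to prototype first. Here is a rough Python sketch of the word-to-filenames BST described above (an illustration only, not a substitute for the required C code):

```python
# Python prototype of the index described above (the assignment itself is in C):
# a BST keyed on the word, where each node carries the list of filenames the
# word appeared in.

class Node:
    def __init__(self, word, filename):
        self.word = word
        self.filenames = [filename]
        self.left = None
        self.right = None

def put(root, word, filename):
    if root is None:
        return Node(word, filename)
    if word < root.word:
        root.left = put(root.left, word, filename)
    elif word > root.word:
        root.right = put(root.right, word, filename)
    elif filename not in root.filenames:   # same word: record the file once
        root.filenames.append(filename)
    return root

def get(root, word):
    if root is None:
        return []
    if word < root.word:
        return get(root.left, word)
    if word > root.word:
        return get(root.right, word)
    return root.filenames

root = None
for fname, words in [('a.txt', ['write', 'read']), ('c.txt', ['write'])]:
    for w in words:
        root = put(root, w, fname)
print(get(root, 'write'))   # ['a.txt', 'c.txt']
```

In the C version, the filename list becomes a linked list of StringListNode, and recursion replaces the Python helper returns with pointer updates.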
Getting started
To get the starter files, type this:
264get hw11
Warm-up
This assignment includes a warm-up exercise to help you get ready. This accounts for 15% of your score for HW11. Scoring will be relatively light, but the usual base requirements apply.
- String compare function
Write a program that takes two words on the command line and prints either "<", "=", or ">". For example, entering ./warmup ham sandwich on the command line would print "<".
- BST of integers
Write a program to do the following:
- create a binary search tree of integers: 9 7 0 3 1 6 6 4
- print all of the nodes in order, one number per line
- search the BST for the following numbers: 1 2 3 4 5. Print only the numbers that are found, one per line.
- free the BST
- BST of strings
Write a program to do the following:
- create a binary search tree of words: jam ham egg oat nut tea
- print all of the nodes in lexicographic order (i.e., the same order as strcmp(…)), one word per line
- search the BST for the following words: jam bun pie tea. Print only the strings that are found, one per line.
- free the BST
The structure of the warmup.c file is described in the Requirements table below. A starter warmup.c is provided, including suggestions for helper functions you may want to use, in addition to the required functions. You do not have to use those, and you may add your own. The warm-up accounts for 15% of your HW11 score.
Test code: indexer.c
Included in the starter files is indexer.c, a program that you can use to test your
code. It reads one or more files in the current directory and uses your
create_index() and
put(…)
to create an index of the words in each of the files. Then, it searches for a given word
and tells you which files contain it.
The usage is as follows:
./indexer WORD FILE1 FILE2 …
For example, in the current homework directory, if you compile indexer.c with your code (gcc index.c indexer.c -o indexer) and then run ./indexer stdio index.c indexer.c warmup.c, it should print the following (or something similar):
3 files contain "stdio"
- index.c
- indexer.c
- warmup.c
Requirements
Unlike previous assignments, you will declare all struct types using typedef so that you don't need to include "struct" before the name throughout your code. Your reference sheet contains examples (labelled as "Concise syntax" or "PREFERRED"), which you may use.
- Your submission must contain each of the following files, as specified:
- Only the following externally defined functions and constants are allowed in your .c files. (You may put the corresponding #include <…> statements in the .c file or in your index.h, at your option.)
- All data referenced by your Index (directly or indirectly) should be on the heap. Your Index itself may be on the stack.
- Make no assumptions about the number of distinct words or filenames.
- IndexBSTNode and StringListNode may not have any additional fields, other than those shown above.
- get(…) may not have any side effects. In other words, it may not modify the state of the Index that was passed to it.
- Submissions must meet the code quality standards and the policies on homework and academic integrity.
As usual… Type declarations into your code manually. Do not copy-paste. (You learn more this way.)
Bonus #1: No redundant allocations for filenames (2 points)
Do everything above, but allocate space for each filename only once.
To indicate that you are attempting bonus #1, you must include the following (exactly) at the top of your index.h file.
#define HW11_BONUS_1
Bonus #2: Handle OR queries in your get(…) (2 points)

Do everything above, but your get(…) is a variadic function that can take multiple words. The result is the array of all filenames containing those words, with no duplicate filenames.
For bonus #2, you will use the following function signature:
char** get(Index* index, int* count, char* word, ...)
The last argument must be NULL, just like in HW10.
To indicate that you are attempting bonus #2, you must include the following (exactly) at the top of your index.h file.
#define HW11_BONUS_2
For bonus #2, you may #include <stdarg.h> and use va_arg(…), va_copy, va_start(…), va_end(…), and va_list.
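The interesting part of bonus #2 is merging the per-word filename lists without duplicates. A rough Python sketch of just that merge step (illustration only; the assignment itself requires a C variadic function):

```python
# Sketch of the OR-query merge for bonus #2 (illustration only; the assignment
# requires a C variadic function). Given one filename list per queried word,
# build the union while preserving first-seen order and dropping duplicates.

def get_any(*filename_lists):
    seen = set()
    result = []
    for names in filename_lists:
        for name in names:
            if name not in seen:     # skip filenames already collected
                seen.add(name)
                result.append(name)
    return result

print(get_any(['a.txt', 'c.txt'], ['c.txt', 'd.txt']))   # ['a.txt', 'c.txt', 'd.txt']
```

In C there is no set type, so the analogous check is a linear scan of the result array (or a scan of each word's linked list) before appending a filename.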
How much work is this?
Those who finished the linked list assignments (HW08, HW09, HW10) and understood them well will most likely find HW11 easier. As always, your mileage may vary. In particular, programming with dynamic memory usually entails some time debugging. If dynamic structures still feel difficult to you, don't worry. It usually takes time to get comfortable with it.
Aim to start and finish this assignment early. Then, you can do one or both of the bonuses, if you like.
Q&A
- Will put(…) be called for every word in every file?
Yes.
- So what is the Index, really? … and what does create_index() do?
In the simplest case, it can be a struct with one field, the root of a BST.
Here is the simplest possible Index.
typedef struct {
    IndexBSTNode* root;
} Index;

Here is the simplest possible create_index(…).

Index create_index() {
    Index index = { .root = NULL };
    return index;
}

You may use/copy these, if you like.
- How does strcmp(…) work?
From bash, type man strcmp for details. Type q to get out.
- Can we use helper functions?
Yes. Just don't include them in your index.h. That is a general principle. Your .h file is like a guide for others who might want to use your code in their program. Since helper functions are, by definition, only for internal use within your code, they should not be declared in your .h file. Instead, you can (and probably should) declare them at the top of your index.c file.
- What if the caller calls put(…) with the same word and filename twice?
From the requirements table: “Do not add the same filename multiple times to the same word's node.” | https://engineering.purdue.edu/ece264/16au/hw/HW11 | CC-MAIN-2019-09 | refinedweb | 1,158 | 75.81 |
The Zip file contains two projects. One is VB.Net and the other is C#. Each project is an example of the same use of a delegate. A base class is derived from, and the delegate calls a method on several classes derived from that base class. This example demonstrates several things: using inheritance so that a single delegate-invoking method accepts any type derived from the base class (strict type checking against the base type), and calling class-level methods through a single delegate. Here is sample code from the Delegates.cs file:

namespace DelegatesCS
{
    using System;

    /// <summary>
    /// Author   TVanover@Quilogy.Com
    /// Date     02/02/2001
    /// Purpose  Example of Delegate usage
    /// </summary>
    public class Wisdom // class containing the Delegate
    {
        public delegate string GiveAdvice();

        public string OfferAdvice(GiveAdvice Words)
        {
            return Words();
        }
    }

    public class Parent // base class
    {
        public virtual string Advice()
        {
            return ("Listen to reason");
        }

        ~Parent() {}
    }

    public class Dad : Parent // derive from the parent
    {
        public Dad() {}

        public override string Advice()
        {
            return ("Listen to your Mom");
        }

        ~Dad() {}
    }

    public class Mom : Parent // derive from the parent
    {
        public Mom() {}

        public override string Advice()
        {
            return ("Listen to your Dad");
        }

        ~Mom() {}
    }

    public class Daughter // don't derive from the parent
    {
        public Daughter() {}

        public string Advice()
        {
            return ("I know all there is to life");
        }

        ~Daughter() {}
    }

    public class Test
    {
        public static string CallAdvice(Parent p) // use the base type of derived class
        {
            Wisdom parents = new Wisdom();
            Wisdom.GiveAdvice TeenageGirls = new Wisdom.GiveAdvice(p.Advice);
            return (parents.OfferAdvice(TeenageGirls));
        }

        public static void Main()
        {
            Dad d = new Dad();
            Mom m = new Mom();
            Daughter g = new Daughter();

            // these both derived from the base class
            Console.WriteLine(CallAdvice(d));
            Console.WriteLine(CallAdvice(m));

            // cannot do this as it did not derive from the base
            // Console.WriteLine(CallAdvice(g));
        }
    }
}
I'm playing a sound for health pickups, and from the moment they appear the pickup sound starts. The audio source doesn't have the "loop" option ticked, so what else could it be?
Please edit your question and provide the code you currently have; 99% chance you have a problem with your scripts.
Do their AudioSource components have Play On Awake checked?
Answer by DragonFang17 · Oct 18, 2018 at 05:21 PM
Is it supposed to play before or after the player picks it up?
It's supposed to play as the player picks it up.
Think Sonic picking up the rings as an example.
But in my case the rings make the noise from the moment they appear, in a loop.
You should really provide your code; how are we supposed to help without it?
I didn't really need help with the code; I've got that. I'm just asking for suggestions as to why the sound would loop. My guess is it's a settings issue, but I don't know which one.
But if you need it anyway ...
public class HealthupNoise : MonoBehaviour {
public AudioSource HealthAudio; //addition
// Use this for initialization
void Start ()
{
HealthAudio = GetComponent<AudioSource> (); // addition
}
	 // Called when a collision with another collider begins
void OnCollisionEnter (Collision col){
if (col.gameObject.tag == "HealthPickup") {
HealthAudio.Play (); // addition
Debug.Log ("Bing!"); // addition
//Destroy (col.gameObject);
//NB: Audio is looping and "loop" option is not ticked
}
}
}
Answer by Lost_Syndicate · Oct 23, 2018 at 03:17 PM
So, your code is called every frame, so it's going to loop and sound hideous. There are two ways of solving this.
public AudioSource audioSource;
private bool ticked;
private void Start()
{
audioSource = GetComponent<AudioSource>();
// Play it in here
audioSource.Play();
}
private void Update()
{
	 // OR, make a bool (as you can see, "ticked")
if (!ticked)
{
audioSource.Play();
ticked = true;
}
}
As you can see, those are the two ways I would prevent it from looping. The first one I highly recommend if you want to play it on start; or just tick Play On Awake on the AudioSource and that should fix it. The other method is just using a bool to act as a toggle. Hope this helped you.
I was aiming for an OnTriggerEnter function, but I'll try your suggestion.
Thank you.
It's only working when it appears, not when I move onto it.
Based on these two replies, it sounds like it's a health pickup, so use OnCollisionEnter/OnTriggerEnter in conjunction with AudioSource.PlayClipAtPoint. PlayClipAtPoint will create a one-shot audio source, which will prevent the audio from cutting out when the pickup is destroyed, and will destroy itself when the audio clip finishes.
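For reference, the PlayClipAtPoint pattern described above looks roughly like this. This is a sketch rather than tested project code; the `pickupClip` field and the "Player" tag are assumed names, not taken from the poster's project:

```csharp
using UnityEngine;

public class HealthPickupSound : MonoBehaviour
{
    public AudioClip pickupClip; // assign in the Inspector (placeholder name)

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) // assumed tag on the player object
        {
            // PlayClipAtPoint spawns a temporary AudioSource at the given
            // position, plays the clip once, and cleans itself up, so
            // destroying the pickup no longer cuts the sound off.
            AudioSource.PlayClipAtPoint(pickupClip, transform.position);
            Destroy(gameObject);
        }
    }
}
```

The design point is that the one-shot source lives on its own temporary GameObject, so its lifetime is independent of the pickup's.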
Well, right now I've just got the code down as
void OnCollisionEnter (Collision col)
{
if (col.gameObject.tag == "HealthPickup")
{
Debug.Log ("Bing!"); // addition
}
}
}
And "Bing!" isn't appearing in the console. The pickup is tagged as HealthPickup, the box collider is set to trigger, and the script is attached to the pickup, so I've no clue as to why it's not working.
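One detail worth noting here: Unity only calls OnCollisionEnter for non-trigger colliders (with a Rigidbody on at least one of the two objects); a collider marked as a trigger fires OnTriggerEnter instead. A minimal sketch of the trigger variant follows. Since the script sits on the pickup itself, the thing entering the trigger is the player, so the "Player" tag check is an assumption rather than the poster's actual tag:

```csharp
using UnityEngine;

public class HealthupNoise : MonoBehaviour
{
    // Trigger colliders never produce OnCollisionEnter calls,
    // so the callback below is OnTriggerEnter instead.
    void OnTriggerEnter(Collider other)
    {
        // This script is on the pickup, so the incoming collider is
        // the player, not another object tagged "HealthPickup".
        if (other.CompareTag("Player")) // assumed tag name
        {
            Debug.Log("Bing!");
        }
    }
}
```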
Any.
Created on 2008-03-23 14:41 by weijie90, last changed 2008-12-02 20:47 by jjlee. This issue is now closed.
Try the following code:
import urllib2
gmail = urllib2.urlopen("").read()
wikispaces = urllib2.urlopen("").read()
Getting the html over HTTPS from gmail.com works, but not over HTTP from
wikispaces. Here's the traceback:
>>> wikispaces = urllib2.urlopen("").read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.5/urllib2.py", line 121, in urlopen
return _opener.open(url, data) (104, 'Connection reset by peer')>
Note the two 302 redirects.
I tried accessing wikispaces.com with SSL turned off in Firefox
2.0.0.12, which didn't work because SSL was required, perhaps in between
the redirects that wikispaces uses.
Why doesn't urllib2 handle the "hidden" SSL properly? (Not to be rude,
but httplib2 works.)
Thanks!
WJ
The problem does not appear to have anything to do with SSL. The
problem is that the chain of HTTP requests goes:
GET -> 302 -> 302 -> 301
On the final 301 urllib2's internal state is messed up such that by the
time its in the handle_error_302 method it believes the Location header
contains the following:
'\x00/?responseToken=481aec3249f429316459e01c00b7e522'
The \x00 and everything after it should not be there and is not there if
you look at what is sent over the socket itself. The ?responseToken=xxx
value is leftover from the previous 302 response. No idea where the
\x00 came from yet. I'm debugging...
Please take your time, because this bug isn't critical. Thanks!.
patch to implement either behavior of dealing with nulls where they
shouldn't be:
Index: Lib/httplib.py
===================================================================
--- Lib/httplib.py (revision 62033)
+++ Lib/httplib.py (working copy)
@@ -291,9 +291,18 @@
break
headerseen = self.isheader(line)
if headerseen:
+ # Some bad web servers reply with headers with a \x00 null
+ # embedded in the value. Other http clients deal with
+ # this by treating it as a value terminator, ignoring the
+ # rest so we will too..
+ if '\x00' in line:
+ line = line[:line.find('\x00')]
+ # if you want to just remove nulls instead use this:
+ #line = line.replace('\x00', '')
# It's a legal header line, save it.
hlist.append(line)
- self.addheader(headerseen,
line[len(headerseen)+1:].strip())
+ value = line[len(headerseen)+1:].strip()
+ self.addheader(headerseen, value)
continue
else:
# It's not a header line; throw it back and stop here.
The issue is not just with null character. If you observe now the
diretion is 302-302-200 and there is no null character.
However, still urllib2 is unable to handle multiple redirection properly
(IIRC, there is a portion of code to handle multiple redirection and
exit on infinite loop)
>>>>> opened = urllib.urlopen(url)
>>> print opened.geturl()
>>> print opened.read()
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/0.6.30</center>
</body>
</html>
Needs a relook, IMO.
Senthil:
Look at that URL that the server returned in the second redirect:
See that the "?" appears without a path between the host and it.
Check the item 3.2.2 in the RFC 2616, it says that a HTTP URL should be:
http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]
So, we should fix that URL that the server returned. Guess what: if we
put a "/" (as obligates the RFC), everything works ok.
The patch I attach here does that. All tests pass ok.
What do you think?
looks good to me.
Ah, I that was a simple fix. :) I very much overlooked the problem after
being so much given the hints at the web-sig.
I have some comments on the patch, Facundo.
1) I don't think is a good idea to include that portion in the
http_error_302 method. That makes the fix "very" specific to "this"
issue only.
Another point is, fixing broken url's should not be under urllib2,
urlparse would be a better place.
So, I came up with the approach wherein urllib2 does unparse(parse) of
the url and parse methods will fix the url if it is broken. ( See
attached issue2464-PATCH1.diff)
But if we handle it in the urlparse methods, then we are much
susceptible to breaking RFC conformance, breaking a lot of tests, Which
is not a good idea.
So,I introduced fix_broken() method in urlparse and called it to solve
the issue, using the same logic as yours (issue2464-py26-FINAL.diff)
With fix_broken() method in urlparse, we will have better control
whenever we want to implement a behavior which is RFC non-confirming but
implemented widely by browsers and clients.
All tests pass with issue2464-py26-FINAL.diff
Patch for py3k, but please test this before applying.
Senthil: I don't like that.
Creating a public method called "fix_broken", introducing new behaviours
now in beta, and actually not fixing the url in any broken possibility
(just the path if it's not there), it's way too much for this fix.
I commited the change I proposed. Maybe in the future will have a
"generic url fixing" function, now is not the moment.
i was pondering if it should go in urlparse instead. if it did, i think
it should be part of urlparse.urlunparse to ensure that there is always
a trailing slash after the host:port regardless of what the inputs are.
anyways, agreed, this fixes this specific bug. should it be backported
to release25-maint?
Maybe we can put it in urlunparse... do you all agree with this test cases?
def test_alwayspath(self):
u = urlparse.urlparse(";params?query#fragment")
self.assertEqual(urlparse.urlunparse(u),
";params?query#fragment")
u = urlparse.urlparse("")
self.assertEqual(urlparse.urlunparse(u),
"")
u = urlparse.urlparse("")
self.assertEqual(urlparse.urlunparse(u), "")
Maybe we could backport this more general fix...
That test case looks good to me for 2.6 and 3.0. Also add a note to the
documentation with a versionchanged 2.6 about urlunparse always ensuring
there is a / between the netloc and the rest of the url.
I would not back port the more general urlunparse behavior change to 2.5.
Gregory... I tried to fill the path in urlunparse, and other functions
that use this started to fail.
As we're so close to final releases, I'll leave this as it's right now,
that actually fixed the bug...
That was reason in making fix_broken in the urlparse in my patch, Facundo.
I had thought, it should be handled in urlparse module and if we make
changes in the urlparse.urlunparse/urlparse.urlparse, then we are
stepping into area which will break a lot of tests.
I am kind of +0 with the current fix in urllib2. Should we think/plan
for something in urlparse, akin to fix_broken?
This fix was applied in the wrong place.
URI path components, and HTTP URI path components in particular, *can*
be empty. See RFC 3986. So the comment in the code that was inserted
with the fix for this bug that says "possibly malformed" is incorrect,
and should instead just refer to section 3.2.2 of RFC 2616. Also,
because 3.2.2 says "If the abs_path is not present in the URL, it MUST
be given as "/" when used as a Request-URI for a resource (section
5.1.2)", it seems clear that this transformation (add the slash if
there's no path component) should always happen when retrieving URIs,
regardless of where the URI came from -- not only for redirects.
Note that RFC 2616 incorrectly claims to refer to the definition of
abs_path from RFC 2396. The fact that it's incorrect is made clear in
2616 itself, in section 3.2.3, when it says that abs_path can be empty.
In any case, RFC 2396 is obsoleted by RFC 3986, which is clear on this
issue, and reflects actual usage of URIs. URIs like
and have been in widespread use for a long
time, and typing the latter URL into firefox (3.0.2) confirms that
what's actually sent is "/?spam", whereas urllib2 still sends "?spam".
No test was added with this fix, which makes it unnecessarily hard to
work out what exactly the fix was supposed to fix. For the record, this
is the sequence of redirections before the fix was applied (showing base
URI + redirect URI reference --> redirect URI):
'' +
'' -->
''
'' +
'' -->
''
and after the fix was applied:
'' +
'' -->
''
'' +
'' -->
''
'' +
'' --> ''
I've raised #4493 about the issue I raised in my previous comment. | http://bugs.python.org/issue2464 | CC-MAIN-2015-18 | refinedweb | 1,417 | 76.22 |
Node to Node communication without gatway
Hello,
I have two sensor nodes and I want to send data from one node to the other.
But I want to use them in an environment where is no gateway available.
I got the communication to work with the gateway.
So I can send data from the first node to the second (which is configured as a repeater).
But if I disconnect the gatway nothing works.
How do I have to configure the two nodes so that they can talk directly with each other without a gateway?
node one (sender):
#define MY_NODE_ID 150
#define CHILD_ID 0 // Id of the sensor child
MyMessage msg(CHILD_ID, V_TEXT);
void presentation()
{
sendSketchInfo("Liedanzeige_Sender", "1.0");
present(CHILD_ID, S_INFO,"Lied",true);
}
send Data with : "send(msg.setSensor(CHILD_ID).setDestination(151).set("Test"),true);"
node two (receiver):
#define MY_REPEATER_FEATURE
#define MY_NODE_ID 151
#include <MySensors.h>
#define CHILD_ID 0 // Id of the sensor child
MyMessage msg(CHILD_ID, V_TEXT);
void presentation()
{
sendSketchInfo("Liedanzeige_Empfang", "1.0");
present(CHILD_ID, S_INFO, "Empfänger", true);
}
@btmerz If the controller is not present, the nodes won't enter loop(). To avoid this, use (3000 may be changed)
#define MY_TRANSPORT_WAIT_READY_MS 3000
Hi, meanwhile I was able to solve the problem.
#define MY_PASSIVE_NODE did the trick. | https://forum.mysensors.org/topic/9333/node-to-node-communication-without-gatway | CC-MAIN-2018-26 | refinedweb | 207 | 65.32 |
Quoting Stephen Smalley (sds@tycho.nsa.gov):> > On Wed, 2008-03-12 at 09:18 -0400, Stephen Smalley wrote:> > On Wed, 2008-03-12 at 08:09 -0500, Serge E. Hallyn wrote:> > > Quoting Pavel Emelyanov (xemul@openvz.org):> > > > Greg KH wrote:> > > > > On Tue, Mar 11, 2008 at 12:57:55PM +0300, Pavel Emelyanov wrote:> > > > >> Besides, I've measured some things - the lat_syscall test for open from > > > > >> lmbench test suite and the nptl perf test. Here are the results:> > > > >>> > > > >> sec nosec> > > > >> open 3.0980s 3.0709s> > > > >> nptl 2.7746s 2.7710s> > > > >>> > > > >> So we have 0.88% loss in open and ~0.15% with nptl. I know, this is not that> > > > >> much, but it is noticeable. Besides, this is only two tests, digging deeper> > > > >> may reveal more.> > > > > > > > > > I think that is in the noise of sampling if you run that test many more> > > > > times.> > > > > > > > These numbers are average values of 20 runs of each test. I didn't> > > > provide the measurement accuracy, but the abs(open.sec - open.nosec)> > > > is greater than it.> > > > > > > > >> Let alone the fact that simply turning the CONFIG_SECURITY to 'y' puts +8Kb > > > > >> to the vmlinux...> > > > >>> > > > >> I think, I finally agree with you and Al Viro, that the kobj mapper is > > > > >> not the right place to put the filtering, but taking the above numbers > > > > >> into account, can we put the "hooks" into the #else /* CONFIG_SECURITY */> > > > >> versions of security_inode_permission/security_file_permission/etc?> > > > > > > > > > Ask the security module interface maintainers about this, not me :)> > > > > > > > OK :) Thanks for your time, Greg.> > > > > > > > So, Serge, since you already have a LSM-based version, maybe you can> > > > change it with the proposed "fix" and send it to LSM maintainers for> > > > review?> > > > > > To take the point of view of someone who neither wants containers nor> > > LSM but just a fast box,> > > > > > you're asking me to introduce LSM hooks for 
the !SECURITY case? :)> > > > > > I can give it a shot, but I expect some complaints. Now at least the> > > _mknod hook shouldn't be a hotpath, and I suppose I can add yet> > > an #ifdef inside the !SECURITY version of security_inode_permission().> > > I still expect some complaints though. I'll send something soon.> > > > Not sure I'm following the plot here, but please don't do anything that> > will prohibit the use of containers/namespaces with security modules> > like SELinux/Smack. Yes, that's a legitimate use case, and there will> > be people who will want to do that - they serve different but> > complementary purposes (containers are _not_ a substitute for MAC). We> > don't want them to be exclusive of one another.> > Also, note that ) is already> called out on our kernel todo list for SELinux, and contributions there> would be welcome. I'll take a look at the todo list (James' I assume). The dev_cgroup lsmwill have to come first, I'll see about doing the SELinux version aswell.thanks,-serge | https://lkml.org/lkml/2008/3/12/173 | CC-MAIN-2017-43 | refinedweb | 473 | 74.19 |
peer network device is created with one side assigned to the container and the other side is attached to a bridge specified by the lxc.network.link. If the bridge is not specified, then the veth pair device will be created but not attached to any bridge. Otherwise, the bridge has to be setup before on the system, lxc won't handle any configuration outside of the container. By default lxc choose a name for the network device belonging to the outside of the container, this name is handled by lxc, but if you wish to handle this name yourself, you can tell lxc to set a specific name with the lxc.network.veth.pair option., the device never communicates with any other device on the same upper_dev (default), vepa,, or bridge, Specify a path to a file where the console output will be written. 100 1 to have LXC mount and populate a minimal /dev when starting the container. ENABLE KMSG SYMLINK Enable creating /dev/kmsg as symlink to /dev/console. This defaults to 1. lxc.kmsg Set this to 0 to disable . lxc.mount specify a file location in the fstab format, containing the mount information. If the rootfs is an image file or a block device and the fstab is used to mount a point somewhere in this rootfs, the path of the rootfs mount point should be prefixed with the /usr/lib/i386-linux-gnu/lxc default path or the value of lxc.rootfs.mount if specified.:ro (or sys): mount /sys as read-only for security / container isolation purposes. · sys:rw: mount /sys as read-write · cgroup:mixed (or cgroup):-full:mixed (or cgroup-full):.pivotdir where to pivot the original root file system under lxc.rootfs, specified relatively to that. The default is mnt. It is created if necessary, and also removed after unmounting everything from it during container setup.), lxc.cap.keep Specify the capability to be kept in the container. All other capabilities will be dropped. 
APPARMOR PROFILE If lxc was compiled and installed with apparmor support, and the host system has apparmor enabled, then the apparmor profile under which the container should be run can be specified in the container configuration. The default is lxc-container-default. lxc.aa_profile Specify the apparmor profile under which the container should be run. To specify that the container should be unconfined, use lxc.aa_profile = unconfined. lxc.se_context Specify the SELinux context under which the container should be run or unconfined_t. For example lxc.se_context = unconfined_u:unconfined_r:lxc_t:s0-s0:c0.c10.post-stop A hook to be run in the host's namespace after the container has been shut down. lxc.hook.clone A hook to be run when the container is cloned to a new one. See lxc-clone(1) for more information..group A multi-value key (can be used multiple times) to put the container in a container group. Those groups can then be used (amongst other things) to start a series of related containers.> Mon Apr 14 15:52:20 UTC 2014 lxc.container.conf(5) | http://manpages.ubuntu.com/manpages/trusty/man5/lxc.container.conf.5.html | CC-MAIN-2017-47 | refinedweb | 516 | 56.86 |
Re: Deployment under Vista
- From: "Sally" <me@xxxxxxxx>
- Date: Sun, 29 Jul 2007 06:28:51 +1000
Makes sense, although interestingly in aus.electronics (I'm in Aus) if
anyone dares to top post (although it is a technical group) heaps of
regulars - and abusive ones at that -- duly flame.
"BeastFish" <no@xxxxxxxx> wrote in message news:f8g7qn$fgs$1@xxxxxxxxxxx
Usually, top posting is appreciated in technical groups because one does
not
need to scroll down to see the reply. But when replying to a reply that
was
bottom posted (or vice versa), the appropriate courtesy is to "go with the
flow" as to not be confusing.
"Sally" <me@xxxxxxxx> wrote in message news:46ab94f6@xxxxxxxxxxxxxxxxxxxx
Thanks. That's made CDSIL clearer to me. BTW, your top post actuallylogged
confused me! Was it your way of teaching me that it's best not to?
"Randy Birch" <rgb_removethis@xxxxxxxx> wrote in message
news:0D5687C7-622C-4AB0-BC3E-BC14C1B5796C@xxxxxxxxxxxxxxxx
Under the user security model of Vista, applications can not write to
folders outside the folders assigned to the user's profile. This is a
combination of a special path designated by Windows and the current
aaon user's name. For example, my app-friendly folder is rooted at
c:\users\birchr\application data\<program name>.
Now, note I use drive C for my Windows drive, as do a good number of
people
but not all. So while in my case it would be safe to presume the
initial
path is c:\users\, this is not always the case so must be determined on
extremelyextremelyper-machine basis. Similarly the user's name must also be determined in
some
fashion.
Thankfully Windows provides a mechanism - which I warn will look
TechnicallyTechnicallycomplicated to a newbie - to determine these types of things.
andandthe roots used to be called namespaces in Windows parlance; we (users
therethereespecially developers) typically call them "special folders". In
Windows
95
through XP, each special is represented by and accessed using a special
constant and a Windows API or two, and these constants are prefaced
with
the
letters CSIDL, which stands for "constant special item ID list". Under
Vista the name was changed (somewhat gratuitously I might add) to
become
"known folders" - I guess "special folders" was too technical for
users.
Consequently, under Vista CSIDLs were changed into KNOWNFOLDERIDs,
although
a CSIDL is backwards compatible. IOW, your app can call CSIDL values
and
get
the expected results under Vista, which is a Good Thing since I've not
updated my site with the KNOWNFOLDERID constants.
Anyhow, once you get your head around this you have to realize that
defineddefinedare two types of "special" (or "known") folders - those that return
disk
paths (such as CSIDL_MYDOCUMENTS), and those that return non-physical
locations that can not be accessed by simply using standard path/file
syntax
(such as CSIDL_BITBUCKET - aka the recycle bin).
So ... this is a long introduction to the point: Under Vista it is your
app's responsibility to only create working files in the folder(s)
assigned
to the user and marked as accessible for read/write by applications.
The
Program Files, while it can be used as an installation path by someone
with
admin rights (under Vista), is off-limits for the day-to-day record
keeping
a file might do. So is the Windows folder, which was traditionally
obtainobtainby Microsoft as *the* place for saving INI files (application settings
files - ini standing for initialization).
In order to ascertain the correct path to write to under any Windows
version, and especially Vista, you have to use** the Windows API to
folderfolderthe correct paths - a demo of this is presented at. (Check out the parent
codecodetoo for some other cool things that CSIDLs can be used for. And for
withwiththat just get's the name of the currently logged on user - not needed
oftenoftenthe above since the user name is included in the data returned, but).).useful nonetheless, see
toto
** Qualification: you don't *have to* -- one could potentially consent
thatthata
level of flagellation that would astound even Silas, by blurting out
path.path.another possibility might be considering the use of the shell object to
get
this info. But I am not one to enjoy suffering, especially given the
torrent
of adverse, severe (and possibly fatal but definitely deserved)
ridicule
and
abuse that would come forth from the more knowledgeable participants of
this
group for making such a sacrilegious suggestion. So please note that I
have
not advocated nor supported that you undertake this self-destructive
efforteffort
--
Randy Birch
MS MVP, Visual Basic
Please respond to the newsgroups so all can benefit.
"Sally" <me@xxxxxxxx> wrote in message
news:46aa4934$1@xxxxxxxxxxxxxxxxxxxx
Firstly, many thanks to those who replied in previous threads in an
(inevitably)(inevitably)to guide this idiot along.
Unfortunately the sheer volume of information and some of the
memediffering opinions made me even more confused. People who have pointed
disagreeddisagreedat
what to them are obvious things understandably get annoyed when I
appear
not
to have noted them, but in some cases their assertions have been
threadthreadwith by others. Added to that is my habit of replying on the fly, when
there
are more posts to come. Also, I haven't figured out a way of going back
through a thread, so once I've replied to the last post I lose the
justjust(literally). I am using CTRL-U to advance through unread messages and
my
Outlook (NOT Outlook Express) apparently does not easily allow me step
back.
As for VB, I've learned a few things, I hope, and I'll summarise here
WizardWizardhow far I've got and see if there's a way forward that's uncomplicated.
Let's call my app Fred.exe and it uses say two random access files,
Ledger.Dat and Transact.Dat. (not difficult to fathom out what the app
does!).
Anyway, this app works perfectly under XP Home. I have used the PDW
waswasto deploy it, and the result is
1) The Install suggests that the app is placed in C:\Programs. I accept
that
default.
2) As for the data files, I have hardcoded within my app to create a
directory C:\FredsData, and then lines like this
Open "C:\FredsData\Ledger.Dat" For Random As #1 Len=255
As I said, it all works under XP Home.
Now, a colleague is running Vista and I want to post him a CD so that
he
can
install my app and use it. BUT, I don't want to risk messing up his
system,
which runs many critical applications. It is not so serious if he has
trouble running mine. But it would certainly be serious if in
attempting
to
install mine some files or settings on his Vista were messed up. That
obviousobviousmy
anxiety.
So far, I think I have gleaned that
A) Any files apart from my app that PDW bundles on the deployment disc
wail
not overwrite any of his that might be later (that cures one worry...
IF
I've got that right.
B) The app should not go in root (meaning C: direct). In that respect
perhaps Vista makes a similarly appropriate default suggestion to the
Programs directory in XP Home?
C) The data files should not go in root. In that respect, perhaps my
hardcoded C:\FredsData\... will do the trick?
A, B and C really encapsulate my concerns.
If anyone has the time and patience to reply in as uncomplicated way as
possible, I would be grateful. For those who are fed up with this
stupid
bird please do not reply! I guess if no-one replies the message is
PerhapsPerhapsand maybe I'll go back to my crochet (sniff)... no knitting here.
downdownthat challenge will encourage the experts who have difficulty coming
tototo
a newbie's level not to reply. At times I felt like a 6 year old trying
resonantresonantunderstand how subtraction works being given the formula for the
aboutaboutfrequency of an LC combination by a learned professor who knows all
electronics and nothing about 6 year olds.
.
- Follow-Ups:
- Re: Deployment under Vista
- From: Ralph
- References:
- Deployment under Vista
- From: Sally
- Re: Deployment under Vista
- From: Randy Birch
- Re: Deployment under Vista
- From: Sally
- Re: Deployment under Vista
- From: BeastFish
- Prev by Date: Re: Deployment under Vista
- Next by Date: Re: Deployment under Vista
- Previous by thread: Re: Deployment under Vista
- Next by thread: Re: Deployment under Vista
- Index(es): | http://www.tech-archive.net/Archive/VB/microsoft.public.vb.general.discussion/2007-07/msg02789.html | crawl-002 | refinedweb | 1,412 | 59.74 |
I have these entities;
Person.class
@Entity @Table(name = "person") @Inheritance(strategy=InheritanceType.JOINED) public abstract class Person implements Serializable { @Id @GeneratedValue(strategy = GenerationType.SEQUENCE) @Column(name = "person_id") private long personID;
StudentBean.class
@Entity @Table(name = "student_information") public class StudentBean extends Person { @Id @GeneratedValue(strategy = GenerationType.SEQUENCE) private long studentID;
My goal is to save a student record in 1 table together with the details in the Person class and StudentBean class, but I have encountered these exception
org.hibernate.mapping.JoinedSubclass cannot be cast to
org.hibernate.mapping.RootClass
on my research I stumbled upon this thread [Spring 3.1 Hibernate 4 exception for Inheritance cannot be cast to org.hibernate.mapping.RootClass
Based from it, the reason for the exception is due to the @Id in the Parent and Child class. But if I removed the @Id in either of my class, it would not allow me to save the object since I don’t have an identifier with my entity.
I want a Person ID to identify a person outside the school and a Student ID to identify a person inside the school, is this possible? if not, how can I remove the PK of the Person class but still I can extend it and save it into another table.
lastly, what is the best way to implement this design, Person can be extended for Student Class, Teacher Class, SchoolPrincipal Class
Your mapping is wrong. You cannot have
@Id in a superclass and then add another
@Id in a subclass. The “id” should uniquely identify any object in an inheritance tree (and “Student” is also a “Person”).
If you want to have different types of “person” you could have an extra field in Person about whether they are in education. But don’t see how that should be part of the “id”
The error message is stupid and tells you nothing about the real cause; take that part up with Hibernate developers.
###
Since the inheritance type is joined, it means you have a strategy that will create a table for each of the classes referencing one another through the use of foreign keys.
The superclass must therefore be a @MappedSuperClass and not an Entity
Tags: class, hibernate, join | https://exceptionshub.com/org-hibernate-mapping-joinedsubclass-cannot-be-cast-to-org-hibernate-mapping-rootclass.html | CC-MAIN-2022-05 | refinedweb | 371 | 53.81 |
Tracking Undo/Redo-History for the custom attributes
- RafaŁ Buchner last edited by gferreira
Hey Guys,
I have a class, with a list as a custom attribute:
from mojo.events import EditingTool, installTool class MyTool(EditingTool): def __init__(self): super().__init__() self.specialPoints = [] # My Custom Attribute
This attribute will contain set of points that should be selected by a user.
At some point this class adds selected points to this list
def additionContextualMenuItems(self): return [("Create Special Point", self.createSpecialPoint)] def createSpecialPoint(self, sender): g = CurrentGlyph() for p in g.selection: self.specialPoints.append(p)
Is there any way to track this action, so If the user will use Undo/Redo, the points will be deleted form the list or re-added to the list in the right order? Also this Undo/Redo should still track other actions of course.
I think I should use
didUndo(), but how?
I was thinking of tracking changes by adding the labels, but changes in the labels are not tracked by history related events.
Thanks in advance for the help!
you have to wrap your actions inside:
glyph.prepareUndo("Moved By 100, 100") glyph.moveBy((100, 100) glyph.performUndo()
in the next release (RF3.2) you can use:
with glyph.Undo("Moved By 100, 100") glyph.moveBy((100, 100)
- RafaŁ Buchner last edited by
a new version of Undo looks really nice. Unfortunately, It doesn't work in my case.
This action is not connected with a glyph or a font, so I cannot track it with prepare/performUndo.
I will maybe try to use a font.lib dictionary, although I don't know if it can be tracked. I will keep you posted if it worked
store your data in the
glyph.libif you want undo powers
the
font.libis also possible but then its hard to have the global undo manager focus. The current undo manager is connected to the current glyph.
good luck! | https://forum.robofont.com/topic/553/tracking-undo-redo-history-for-the-custom-attributes | CC-MAIN-2020-16 | refinedweb | 322 | 68.16 |
NAME
chdir - change working directory
SYNOPSIS
#include <unistd.h>

int chdir(const char *path);
DESCRIPTION
The chdir() function causes the directory named by the pathname pointed to by the path argument to become the current working directory; that is, the starting point for path searches for pathnames not beginning with /.
RETURN VALUE
Upon successful completion, 0 is returned. Otherwise, -1 is returned, the current working directory remains unchanged and errno is set to indicate the error.
ERRORS
The chdir() function will fail if:
- [EACCES]
- Search permission is denied for any component of the pathname.
- [ELOOP]
- Too many symbolic links were encountered in resolving path.
- [ENAMETOOLONG]
- The path argument exceeds {PATH_MAX} in length, or a pathname component is longer than {NAME_MAX}.
- [ENOENT]
- A component of path does not name an existing directory, or path is an empty string.
- [ENOTDIR]
- A component of the pathname is not a directory.
The chdir() function may fail if:
- [ENAMETOOLONG]
- Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.
EXAMPLES
None.

APPLICATION USAGE
None.

FUTURE DIRECTIONS
None.
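As an illustration of these semantics (not part of the original specification page), Python's os.chdir is a thin wrapper over chdir(); a failing call surfaces the errno values listed above as an OSError:

```python
import os
import errno

start = os.getcwd()

os.chdir("/")                      # C equivalent: chdir("/")
assert os.getcwd() == "/"          # the working directory changed
os.chdir(start)                    # restore it

try:
    os.chdir("/no/such/directory")
except OSError as e:               # chdir() returned -1 and set errno
    assert e.errno == errno.ENOENT
```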
SEE ALSO
getcwd(), <unistd.h>.
CHANGE HISTORY
Derived from Issue 1 of the SVID.
#include <VideoInputGst.h>
Class used to hold enumerated information about usable video formats.
Constructor for the WebcamVidFormat class.
Constructor for the WebcamVidFormat class. This constructor prepares the data structure for data that will come in later. All gint values are initialized to -1 to show that these values have never been set.
References numFramerates.
Pointer to a FramerateFraction class which simply holds a temporary framerate variable while trying to determine the highest possible supported framerate for the format described in the mimetype var.
Contains a gint value describing the height of the selected format.
Holds the highest framerate supported by the format described in the mimetype var.
Contains a gchar* which describes the raw video input stream from the camera, formatted as a Gstreamer video format type (e.g. video/x-raw-rgb or video/x-raw-yuv).
Contains a gint value representing the number of framerate values supported by the format described in the mimetype var.
Referenced by WebcamVidFormat().
Contains a gint value describing the width of the selected format. | https://www.gnu.org/software/gnash/manual/doxygen/classgnash_1_1media_1_1gst_1_1WebcamVidFormat.html | CC-MAIN-2021-25 | refinedweb | 170 | 50.33 |
Ok so I finally figured it out. Apparently, even if SELinux is disabled, it still reads the policy rules, so
reading the audit.log I saw the following
type=AVC msg=audit(1144271107.899:2774): avc: denied { name_connect } for pid=31413 comm="httpd"
dest=25 scontext=root:system_r:httpd_t tcontext=system_u:object_r:smtp_port_t tclass=tcp_socket
type=SYSCALL msg=audit(1144271107.899:2774): arch=40000003 syscall=102 success=no exit=-13 a0=3
a1=bf975610 a2=591114 a3=b6895f38 items=0 pid=31413 auid=0 uid=48 gid=48 euid=48 suid=48 fsuid=48
egid=48 sgid=48 fsgid=48 comm="httpd" exe="/usr/sbin/httpd"
So by running the following command
[root]# audit2allow -i /var/log/audit/audit.log -l
This will print out a list of things that were denied, I found that the following was the most relevant one
"allow httpd_t smtp_port_t:tcp_socket name_connect;"
I then downloaded the SELinux source by doing
[root]# yum install selinux-policy-targeted-sources
Then
[root]# vi /etc/selinux/targeted/src/policy/domains/misc/local.te
And added the "allow httpd_t smtp_port_t:tcp_socket name_connect;"
Then you simply reload SELinux
cd /etc/selinux/targeted/src/policy/
make load
That changed the error to Authentication Error. So Trac really should be more specific about this (like telling you which fields are required): you DO need the following fields even if they are blank.
Smtp_user =
Smtp_password =
And simply leave them blank.
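For reference, the relevant section of trac.ini would look roughly like this (the option names follow Trac's [notification] settings; the server address and from-address here are placeholders):

```ini
[notification]
smtp_enabled = true
smtp_server = 127.0.0.1
smtp_from = trac@example.com
; required even when left blank:
smtp_user =
smtp_password =
```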
Another issue was the fact that I only had '127.0.0.1 localhost' in my /etc/hosts file and apparently
Python requires you to have an extra alias, so I added '127.0.0.1 MachineNameHere localhost'. After
all that email notifications finally work, now I only gotta get my mailserver to stop marking them as spam :).
Anyways thanks a lot everyone.
James
________________________________________
From: James Molina
John, I followed your suggestion and got 2 different .py scripts for
testing, I ran the following two and they both worked (after a bit of
tweaking to my HOSTS file) but Trac still throws same error. Also I have
SELinux disabled until I figure this issue out, so I do not believe it's
an issue with that. Got any other ideas?
***********************************************************
server = smtplib.SMTP('127.0.0.1')
server.set_debuglevel(1)
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
*************************************************************
Also tried the following script
import smtplib
message = 'blah blah blah'
SENDER = 'me at mydomain'
RECIPIENT = 'my email addr here'
server = smtplib.SMTP('localhost')
response = server.sendmail(SENDER, RECIPIENT, message)
server.close()
print str(response)
***************************************************************
I used the user Apache to test (had to modify the passwd file since by
default it has a shell disabled) and they both work fine.
James
James Molina wrote:
>
> self.server = smtplib.SMTP(self.smtp_server, self.smtp_port)
> File "/usr/lib/python2.4/smtplib.py", line 241, in __init__
> (code, msg) = self.connect(host, port)
> File "/usr/lib/python2.4/smtplib.py", line 303, in connect
> raise socket.error, msg
> error: (13, 'Permission denied')
This is a socket error. It's saying that you aren't allowed to connect
using the socket. That means that it's an operating system issue. I
would suspect that it's some selinux funness. Try sending an email as
the user that trac runs under (probably apache or www-data). Also, as
said user, fire up a python shell and import smtplib and use that to
send a test message.
-John | http://article.gmane.org/gmane.comp.version-control.subversion.trac.general/7518 | crawl-002 | refinedweb | 564 | 57.47 |
Creating windows with AIR is an extremely simple process. There are two different types of AIR application windows. The NativeWindow is a lightweight window class that falls under the flash class path and can only add children that are under the flash class path. The mx:Window component is a full window component that falls under the mx namespace and therefore can include any component under the mx namespace.
Since NativeWindow falls under the flash.display package, it can be used in any Flex, Flash, or HTML AIR project. NativeWindows have many properties that can alter their functionality and look. The following example will create a basic NativeWindow and build on the same file, creating different versions of the NativeWindow.
Start by creating a new AIR project named Chapter9_NW, which will create a new application file named Chapter9_NW.mxml. This will look like Listing 9-1.
Now add a new script block by adding the code from Listing 9-2. This function will create a default new NativeWindow object by passing a new NativeWindowInitOptions() object into the constructor. Next the title, width, and height are set. A new TextField is created and added to the NativeWindow by calling the stage.addChild() method. The contents of the NativeWindow ...
No credit card required | https://www.oreilly.com/library/view/beginning-adobe-airtm/9780470229040/9780470229040_creating_windows.html | CC-MAIN-2018-51 | refinedweb | 211 | 66.94 |
If you've ever had to display some sort of Icon on a webpage, more than likely you've used or seen Font Awesome. Font Awesome is an awesome (heh) toolkit that provides a rich set of icons and logos.
The amazing team over at Font Awesome provides a nice React component that makes adding these icons to your React application super simple.
Prerequisites
In order to follow along in this tutorial, you'll need to set up a React application. The quickest and easiest way to do this would be to use
create-react-app.
In case you aren't familiar with
create-react-app, I have a tutorial here that walks you through the steps to get a basic project set up and running
Installing Font Awesome
Once you've got your React application started, we'll need to install the libraries Font Awesome provides:
# SVG Rendering Library
npm i --save @fortawesome/fontawesome-svg-core

# The set of icons Font Awesome provides
npm install --save @fortawesome/free-solid-svg-icons

# The actual React component we will be using
npm install --save @fortawesome/react-fontawesome
This will install all the pieces necessary to load up and render the icons you want to use.
There are a bunch of other sets of icons in different styles you can install, including the Pro icons. For our purposes here, we'll stick to the solid-style free icons and logos.
NOTE: To use Pro icons, you will need a paid Pro account.
Using an Icon
Now that we've got all of our packages installed, it's time for the fun part! Let's throw some icons on the page!
For now, let's open up our App.js file. This file should just contain the boilerplate JSX create-react-app provides. Let's go ahead and get rid of everything in the main header tag so we have a clean slate but keep some styling.
In order to throw an icon on this page, we're going to need to import the FontAwesomeIcon component we installed and an SVG icon to render. Let's use the fa-rocket icon. Then we can render out that component and give it the icon we want to use.
import './App.css';

// Font Awesome Imports
import { faRocket } from '@fortawesome/free-solid-svg-icons';
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome';

function App() {
  return (
    <div className="App">
      <FontAwesomeIcon icon={faRocket} />
    </div>
  );
}

export default App;
NOTE: The icons exported from the icon libraries are in Camel Case
The output of that should look something like this:
Of course, Font Awesome has a ton of different styles and sets of icons to choose from, however because a lot of those require a Pro account I will leave those out of this tutorial.
If you're interested in using these, take a look at their docs
Setting up an Icon Library
What happens if we have a TON of icons we want to use? Will we have to re-import them everywhere we want to use them?
Great question! The answer is no. Font Awesome provides a way to create a library of icons that become globally available to the application after being imported.

To set that up, let's first create a new file called fontawesome.js.
We'll add the library setup into this file:
// Import the library
import { library } from '@fortawesome/fontawesome-svg-core'

// Import whichever icons you want to use
import { faRocket, faHome } from '@fortawesome/free-solid-svg-icons'

// Add the icons to your library
library.add(
  faRocket,
  faHome
)
Here we are picking out the icons we want and adding them to out "library" that will become globally available after we put this file to use.
NOTE: You can also import * as Icons from '@fortawesome/free-solid-svg-icons'; and map those into your library to get all of the icons, but the bundle size is huge! Best to just pick the ones you know you'll need.
So, we've got a library. Let's use it. Over in your index.js file we're going to import that fontawesome.js file so that it runs when the application starts.
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import reportWebVitals from './reportWebVitals';
import './fontawesome';

ReactDOM.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
  document.getElementById('root')
);

reportWebVitals();
And that's all the setup for the library! The icons you put into your library should now be globally available. The only thing that's changed is how we specify our icons when rendering a <FontAwesomeIcon> component. Let's take a look back in the App.js file:
import './App.css';

// NOTICE we don't need to import the individual icons!
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <FontAwesomeIcon icon={['fa', 'rocket']} />
        <br/>
        <FontAwesomeIcon icon={['fa', 'home']} />
      </header>
    </div>
  );
}

export default App;
We no longer need to import each individual icon into our component! Also, in the <FontAwesomeIcon> itself, rather than passing it an icon, we will pass it an array. This array should have:

- The icon prefix (e.g. 'fa' for the solid style)
- The icon name (e.g. 'rocket')
Conclusion
And there you have it! You can now use icons as you please throughout your application.
There are other configuration options and attributes you can apply to these icons that are described in Font Awesome's docs that I highly recommend checking out!
Thanks for reading, and have fun throwing all the icons you can onto your next React webpage 😎
P.S. If you liked this article, be sure to follow me on Twitter to get updates on new articles I write
Discussion (2)
Nice article, but better not/avoid using Font Awesome with React apps. It adds a lot of weight to the final build. And I guess at the end you gonna use 10 icons max. It's better to build your own system that generate font icon from svgs.
Good point! The Font Awesome packages comes with the functionality to build out a "library" of only the icons you plan to use to keep the bundle size down when you build for production. I mention that and show the basic setup of a library in this article, but there are more details here if you're interested 👇
fontawesome.com/how-to-use/on-the-... | https://dev.to/sabinthedev/using-font-awesome-icons-in-a-react-application-lpe | CC-MAIN-2021-31 | refinedweb | 1,039 | 69.92 |
NAME
daemon - run in the background
SYNOPSIS
#include <unistd.h>

int daemon(int nochdir, int noclose);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

daemon(): _BSD_SOURCE || (_XOPEN_SOURCE && _XOPEN_SOURCE < 500)
DESCRIPTION
The daemon() function is for programs wishing to detach themselves from the controlling terminal and run in the background as system daemons. If nochdir is zero, daemon() changes the calling process's current working directory to the root directory ("/"); otherwise, the current working directory is left unchanged. If noclose is zero, daemon() redirects standard input, standard output and standard error to /dev/null; otherwise, no changes are made to these file descriptors.
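The behavior described above can be sketched in Python (a rough approximation of daemon(3) built from the raw fork/setsid primitives; it assumes a POSIX system and glosses over details such as the double fork some daemons perform):

```python
import os

def daemonize(nochdir=0, noclose=0):
    """Rough Python sketch of the daemon(3) semantics."""
    if os.fork() > 0:            # parent exits; the child runs on,
        os._exit(0)              # detached from the invoking process
    os.setsid()                  # new session: no controlling terminal
    if not nochdir:
        os.chdir("/")            # nochdir == 0: move to the root directory
    if not noclose:
        devnull = os.open(os.devnull, os.O_RDWR)
        for fd in (0, 1, 2):     # noclose == 0: redirect stdin/stdout/stderr
            os.dup2(devnull, fd)
        os.close(devnull)
    return 0
```

Calling daemonize() detaches the current process, so it is normally invoked once, early in a long-running program.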
RETURN VALUE
(This function forks, and if the fork(2) succeeds, the parent calls _exit(2), so that further errors are seen by the child only.) On success daemon() returns zero. If an error occurs, daemon() returns -1 and sets errno to any of the errors specified for the fork(2) and setsid(2).
CONFORMING TO
Not in POSIX.1-2001. A similar function appears on the BSDs. The daemon() function first appeared in 4.4BSD.
NOTES
The glibc implementation can also return -1 when /dev/null exists but is not a character device with the expected major and minor numbers. In this case errno need not be set.
SEE ALSO
fork(2), setsid(2)
COLOPHON
This page is part of release 3.27 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/oneiric/man3/daemon.3.html | CC-MAIN-2015-35 | refinedweb | 247 | 55.95 |
Beekeepers and scientist alike
Honeybees have an effect on our lives from the start of the day to the end, yet they have been a mystery to beekeepers and scientists alike. From their supposedly impossible flight to their method of communication, bees have been studied in almost every aspect. Whenever a mystery is solved or studied extensively, though, another question arises that becomes the new focus. This time the mystery is not something that mankind can take its time to solve. Honey bee populations have been in a steady decline for the past five decades, yet in recent years populations have dropped dramatically. Beekeepers around the world, through a survey, reported losses that call for some concern during the winter of 2006 and the spring of 2007 (vanEngelsdorp et al. 2007, 2008). In the past, this phenomenon has been called spring/fall dwindle disease, autumn collapse, and disappearing disease. It is believed that the current event of bee disappearance is similar to those of the past, but it is impossible to tell if they are the same. Scientists now term this occurrence of mass honeybee deaths colony collapse disorder (CCD). The change in name was due to the fact that the previous names were misleading: the word dwindle implies a steady decline, which is not the case, and the word disease is often associated with bacteria. The cause of CCD is currently unknown, so the name will likely change when new evidence is found. As of now, no single cause can be attributed to CCD; instead, it is believed that many factors play a role in the collapse, ranging from natural diseases to the way the bees are managed. Our high dependence on honeybees to pollinate our crops means that if bees disappear, then our diet will also change dramatically. The death of honeybees will also likely result in a cascading effect, causing irreversible damage to ecosystems where natural pollinators have either died off or left.
Colony collapse disorder has affected honeybee populations across the globe, but it is hard to accurately estimate the losses. The difficulty in determining the losses in the United States alone came from the fact that only fifteen states responded to the survey done in 2007, and two of those states were unable to complete the survey because of bad weather conditions. When asked in the survey, respondents considered their losses to be abnormal when losses were over 40% of their colonies. In this survey the total loss across all reporting beekeepers was 31.8%, with an average loss of 37.6% (vanEngelsdorp et al. 2007).
The survey was done again in 2008, with twenty states responding. Again, some states did not participate because they lacked inspection programs or the necessary resources, or felt it was bad timing to obtain appropriate information. The survey covered 19.4% of the 2.44 million honey-producing colonies in the United States. Of those studied, it was found that the average loss was 31.3%, with a total loss of 35.8% (vanEngelsdorp et al. 2008). vanEngelsdorp (2008) wrote that should 'these surveys be representative of the losses across all operations, this suggests that between 0.75 and 1.00 million colonies died in the United States over the winter of 2007-2008.'
It was found that respondents reported normal losses of 21.7%, compared to 15.9% in the 2007 survey. This shows that beekeepers are expecting higher loss rates than in previous years. Losses also differed in every state and at the same time showed no pattern, as seen in Figure 1. Also, when responding to the 2008 survey, the top reasons thought to be the cause of the deaths were poor-quality queens and starvation, which are both manageable problems.
The effects of colony collapse disorder have a shroud of mystery surrounding them, as they are not seen in the other diseases that honeybees face. In a collapsing colony the number of adult worker bees decreases rapidly. The death of adult workers causes the workforce to become mostly comprised of younger adult bees. The strange part about this is that the adult population disappears without any accumulation of dead bodies in or around the hive. The younger workers, without the adults, are unable to maintain the amount of brood in the colony, as there are simply not enough of them. Stranger still, the adult bees disappear leaving behind their brood (Johnson 2008). This behavior is uncharacteristic, as bees are social insects and colony based. When bees swarm they also suddenly disappear, but in that case the original queen bee is left behind. In a collapsing colony, though, the queen is still healthy and laying eggs, and the few workers left behind do not eat even when fed by beekeepers (Ellis 2007). Often, if a queen bee is old or diseased, the workers will replace her, but this is not the case in colonies with CCD.
Comprehensive
Writing Services
Plagiarism-free
Always on Time
Marked to Standard
Each colony though suffers from different diseases, so it is hard to pinpoint the exact symptoms of CCD (Cox and vanEngelsdorp 2009). A colony of honeybees with colony collapse disorder is often characterized with a complete absence of adult workers and the presence of capped brood and food stores (Ellis 2007). Adult worker bees have disappeared, but there are no dead bodies around or near the hive. The capped off food stores shows that there were still ample amounts of food left. Collapsed colonies are not taken over by neighboring bees. Common pests of honeybees often do not raid or have a delayed raid for colonies that have collapsed (Ellis 2007). The wax moth, which often causes damage to weak or dead colonies, would normally raid a colony that had absent adult workers. The small hive beetle is another pest that normally will attack even healthy hives, but will have delayed attacks on colonies with CCD. Since raids are delayed, it shows that either the colonies either collapse very quickly or the pest knows that something is different in a colony with CCD.
Honeybees are constantly under some kind of stress, whether it be from nature or mankind. Colony collapse disorder is believed not to come from a single source; according to Cox-Foster and vanEngelsdorp (2009), 'the bees were all sick, but each colony seemed to suffer from a different combination of diseases.' Many of these stresses are new to the bees, while others have affected bees for many generations. The suspected causes of CCD range from diseases and parasites to chemicals and bee management itself.
It is evident that wild honeybees' environmental stresses are minimal, as bees create their hives using their instincts. These instincts have gone through the test of natural selection, so they should be the 'best' when it comes to finding and making hives. Individual hives must face their own problems (predators, parasites, etc.), so they develop their own natural defenses independently of other colonies. If a colony happens to die, pests are able to destroy the hive and allow a new hive to take its place in the same location.
Honeybees are affected by a number of diseases and parasites, most of which do not relate to CCD but may still be factors. American foulbrood (AFB) and European foulbrood (EFB) are the two major bacterial diseases that affect honeybees (Oldroyd 2007). These bacteria only affect brood of up to three days old, so they are unlikely to be a cause, as it is the adult workers that disappear in colonies with CCD.
Varroa mites were at first a prime suspect for CCD, as they 'were responsible for a 45% drop in the number of managed colonies worldwide between 1987 and 2006' (Cox-Foster and vanEngelsdorp 2009). The varroa mite infests brood cells and lives on adult bees, using them for movement from one place to another (Oldroyd 2007). Widespread mite infections, though, are easy to spot for a trained beekeeper. They may still be a factor in CCD, since they carry viruses and inhibit the immune responses of honeybees.
There is a great variety of viruses affecting adult bees believed to play a role in CCD. Nosema apis is a protozoan that infects the gut of adult bees and causes dysentery. Already, many adult bees carry symptomless viral infections (Oldroyd 2007). These viral infections stay symptomless unless the bees are subjected to stresses, which include lack of food, weather conditions, improper habitat, and parasites. Also a likely culprit of CCD is a new and unidentified disease. In 2006 a new Nosema species (Nosema ceranae) was found that displayed one of the symptoms of CCD: when N. ceranae is found at elevated levels, the honeybees leave their colonies and never return (Ellis 2007).
Also unnatural to honeybees is the new array of chemicals they are subjected to each day. Ever since the introduction of the varroa parasite, beekeepers have had to control the mite population chemically. As the mites develop resistance to the pesticides, beekeepers are either increasing doses or trying cocktails of chemicals. These chemicals may also remain in a hive for long periods of time; according to Oldroyd (2007), fluvalinate and other chemicals can build up and persist in comb wax. A new type of pesticide, the neonicotinoids, enters the pollen and nectar of plants and not just the leaves. This chemical class is believed to have a link to CCD, as research has shown that it decreases honeybees' ability to remember how to get back to the hive (Cox-Foster and vanEngelsdorp 2009). Another study, though, found that the pesticides may have no effect on the bees' ability to survive: colonies were fed imidacloprid (a neonicotinoid) in the form of syrup or pollen according to the doses found in the field (Oldroyd 2007).
Honeybees are subjected to stresses they would otherwise not experience when under human care. Beekeepers are essentially putting hives into uninsulated, oversized boxes with pre-determined cell sizes. The spaced-out removable frames and large entrance result in increased air flow inside. This in turn causes the bees to use more energy to maintain the temperature and humidity. Beekeepers often alter the diet of a colony and breed for selected traits. In this environment the honeybees must face stresses they would not in the wild, and bees with a 'weak' genetic base have an increased chance of surviving. Also, the variety of flowers has decreased with humans' need for things to look 'neat': humans plant large areas of a single crop without weeds or flowers. A pollinator that feeds on only one type of crop may lack important nutrients it would get from feeding on multiple crops, and this deficiency in nutrients leads to weakened natural defenses.
Hello,
I've experienced a compilation error and some huge performance problems setting up an ODBC connection via Openlink ODBC drivers.
I've configured my Php:
./configure --with-openlink=/usr/src/openlink
When compiling PHP, it complains about the missing iodbc.h and udbcext.h header files, which are not included in the SDK software package of OpenLink.
When I remove the two include files above from ./ext/odbc/php_odbc.h (lines 125 and 128), the compilation works fine without errors or warnings, and I am able to establish a connection to my OpenLink drivers.
When I query my DB from PHP via OpenLink, a simple query takes a huge amount of time, while the same query is very fast using the included odbctest utility from the OpenLink SDK package.
I ran a query via PHP and via the odbctest utility and compared the two debug files, and saw that PHP uses the SQLExtendedFetch C function, while the odbctest utility uses a 'normal' SQLFetch function.
So I have changed my ./ext/odbc/php_odbc.h file line 124 from:
#elif defined(HAVE_OPENLINK) /* OpenLink ODBC drivers */
#define ODBC_TYPE "Openlink"
#include <iodbc.h>
#include <isql.h>
#include <isqlext.h>
#include <udbcext.h>
#define HAVE_SQL_EXTENDED_FETCH 1
#define SQLSMALLINT SWORD
#define SQLUSMALLINT UWORD
to:
#elif defined(HAVE_OPENLINK) /* OpenLink ODBC drivers */
#define ODBC_TYPE "Openlink"
// #include <iodbc.h>
#include <isql.h>
#include <isqlext.h>
// #include <udbcext.h>
// #define HAVE_SQL_EXTENDED_FETCH 1
#undef HAVE_SQL_EXTENDED_FETCH
#define SQLSMALLINT SWORD
#define SQLUSMALLINT UWORD
With this small change I was able to compile my Php succesfully and query my Database via the Openlink Package very fast!
Regards,
Anne van der Velden
Correct Express
The Netherlands
before i commit such a patch, what version of OpenLink are you using?
I'm using the Openlink Data Access Driver Suite (Multi Tier Edition) Version 4.0, Connecting to a Progress 8.3B Database running on an AIX Unix Box.
this is a workaround for getting around the overhead required when server-side cursor libraries are in use.
OpenLink 4.0 implements a mixed cursor library in the server-side components.
Server-side cursors should be used sparingly for this reason
-Stephen Schadt
This really isn't so much a bug, as a feature request. Marking it as so...
After speaking (briefly) with Stephen Schadt, it is believed that moving from the current cursor method (server-side) to something of a more mixed case is a GoodThing(TM).
We'll see if my free time allows this to make it in for 4.1 or not.... | https://bugs.php.net/bug.php?id=10549 | CC-MAIN-2021-17 | refinedweb | 431 | 66.64 |
Example of Synchronous vs Asynchronous APIs in SAP S/4HANA Cloud
In this blog, we’ll take a look at the difference between synchronous and asynchronous APIs as they relate to SAP S/4HANA Cloud with an example.
First, we should define what the terms synchronous and asynchronous mean, since they are often a source of confusion. In general, with a synchronous API the calling system waits for a response from the target system. That is to say, the source system waits for the outcome of the requested operation and any associated payload — for example, the completed payload with any success or error codes. With an asynchronous call, on the other hand, the calling (source) system does not wait for a response from the target system; it continues processing the application or interface logic. It's more of a send-and-forget type of scenario: the calling system doesn't know if there is an error, it simply delivers the payload.
As an analogy, think of video conferencing and instant messaging as synchronous (i.e. real-time interaction) and text messaging and email as asynchronous (the message is sent, and at some point in time the information will be picked up and processed by the receiver).
Each has several advantages and we’ll take a look at one of the biggest differences in terms of how they are monitored in this blog.
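The control-flow difference can be sketched in a few lines of Python (purely illustrative — this is not the SAP API, and post_journal_entry here is a hypothetical stand-in for the remote service call):

```python
import concurrent.futures
import time

def post_journal_entry(payload):
    """Hypothetical stand-in for a remote journal-entry service."""
    time.sleep(0.05)  # simulate network latency
    if payload.get("material") == "INVALID":
        return {"status": "error", "message": "material does not exist"}
    return {"status": "ok"}

# Synchronous style: the caller blocks and must inspect the outcome itself.
response = post_journal_entry({"material": "INVALID"})
assert response["status"] == "error"  # the caller sees the error directly

# Asynchronous style: the caller submits the payload and moves on; the
# outcome is checked later, in the target system's monitor (in the
# S/4HC case, the AIF Message Dashboard plays that role).
with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(post_journal_entry, {"material": "INVALID"})
    # ... the caller continues immediately and never inspects `future` here
```

The point of the sketch is only where error handling lives: next to the caller in the synchronous case, and in the receiving system's monitoring in the asynchronous case.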
For this blog, we'll explore an API that was released in 1808 alongside the existing Journal Entry API that is commonly used by customers to post journal entries into SAP S/4HC. The Post Journal Entry page on the SAP API Hub now has two SOAP-based services for posting journal entries: one asynchronous (new) and the other synchronous (existing).
I set up the SAP_COM_0002 Communication Arrangement in my SAP S/4HC system, and you can see both endpoints shown under Inbound Services.
In order to demo the differences, I created 2 iFlows on SAP Cloud Platform Integration (CPI) that post a single journal entry to S/4HC from a JSON payload submitted using Postman. Both have the very simple flow shown in the screenshot below. A JSON payload is submitted from Postman and converted to XML; a message-mapping step then occurs, followed by the web service call. The Groovy scripts only log the payloads for demo purposes in this blog. The two iFlows are identical except for the service they call in SAP S/4HC.
One of the biggest differences between the two is how these interfaces are monitored. In a synchronous scenario, CPI needs to analyze the response and determine whether the journal entry post was successful. In an asynchronous scenario, the monitoring and error handling are carried out in SAP S/4HC using the Application Interface Framework (AIF). There are many benefits to using the asynchronous approach for system-to-system communication where S/4HC users may not have access to CPI and/or the source system sending journal entries. For example, in the asynchronous scenario business users would use SAP S/4HC to check on the status of all of the interfaces, take actions to reprocess payload(s), subscribe to events, etc. In a synchronous scenario, in order to monitor the interface for an error, someone needs to log into CPI to determine if there were errors and decide on the course of action to reprocess them (or receive an email that an interface failed and then log into CPI to troubleshoot). Of course, if the service itself was down, CPI would log an error in both cases.
Regarding the AIF and Message Monitor capabilities (help documentation), you can find this documented on the business documentation page of the asynchronous journal entry service. Here is the section at the bottom of the page:
Configure Message Monitoring for AIF
In order to use the Message Monitoring capabilities in S/4HC, I assigned my user the business role SAP_BR_CONF_EXPERT_BUS_NET_INT.
Then the FINAC_RET_JOURNALENTRY_IN recipient can be set up for the namespace /FINAC.
Finally, I assign my user to the recipient.
And I can now see the Namespace/Interface in my message dashboard.
Synchronous API Call
First, we’ll execute the iFlow which calls the synchronous API (journalentrycreaterequestconfi) 2 times–one with an error and the other successful. In both cases, the iFlow status shows as “Completed successfully”. The developer would need to take special action to trigger the Exception handling on the iFlow so the iFlow either shows status of Failed or sends an email for a person(s) to take follow up actions.
In this case, I submitted an invalid material number in my payload, and you can see the response from SAP S/4HC in the CPI monitoring web UI:
Removing the material and resubmitting to the iFlow from Postman results in a successful posting of the Journal Entry.
It’s also worth noting that these are the same payloads received in Postman as well.
Asynchronous API Call
Now, we’ll execute the iFlow that calls the asynchronous API (journalentrybulkcreationreques) 2 times–one with an error and the other successful.
In both cases, the logged response in CPI and Postman is simply the submitted payload:
However, looking in the Message Dashboard in S/4HC, a user can analyze the status of the interface very easily.
Filtered for today’s date, I can see 2 messages were submitted today and 1 was successfully processed while the other did not.
Drilling further into the details, I can see the posting with the invalid material code. From here I could take further action as appropriate.
And of course I can see the successful journal entry posting:
That’s all for the blog. There are other pros/cons to synchronous vs asynchronous depending on the scenario but hope you found this blog helpful.
Best Regards,
Marty
Great explanation and examples. Thanks Marty!
Hello Marty,
That’s a great blog to explore when someone wants to design custom APIs for S4HANA Cloud. However I am working on a scenario (2EJ) where I am stuck in a situation where my invoice is not getting posted in S4Cloud with below error. And most frustrating thing is AIF does not allow me to edit and reprocess (all roles assigned). I am not able to close the loop of document flow. No idea if the std. integration content has issues. Please have a look at the error and share your opinion.
Also I would suggest one correction. The Business Role for AIF is BR_CONF_EXPERT_BUS_NET_INT.
Many Thanks
Amitabh
Hello Marty,
I just wondered about the two iFlows which, as you mentioned, are identical except for the service they call in SAP S/4HC.
Shouldn't the reply in the synchronous iFlow be sent back to the sender? Because the synchronous scenario is supposed to send the reply back to the sender.
thanks,
Yang
Hello Marty,
Maybe you can answer this question; I cannot figure it out: how is an outbound API triggered?
I want to implement an outbound delivery notification, to send the delivery note to the transportation agent. In S/4HC I can process the sales order and create the delivery. Now I do not understand: when would the API be triggered? I would have thought that, upon creating the delivery note, the API is triggered and will send the data to CPI, and this will do the job.
Can you please help me here to understand how outbound iFlows can be triggered.
Thank you,
Johannes
Hi @marty.mccormick,
Can we call OData POST calls in async mode?
I know that SOAP calls are async in nature and will obviously be called in async mode.
thanks
Santhosh
Hi Santhosh
It's not possible to call an OData API in "async mode" as the response is returned to the calling client as part of the request/response.
Thanks,
Marty
Thank you, Marty McCormick, for your quick response.
thanks
Santhosh
Linux Sound Woes
- UniquelyNamed
Hi, I'm new to using Qt and I've been having a lot of success in getting it to work and do what I want... then I decided to try adding sounds to my program, and now, a day and a half later, I'm still unable to load and play a simple wav or mp3 file.
I've tried using QSound ("Error decoding source") and I've tried Phonon (Qt 4.8) and QMediaPlayer (Qt 5.5), and I've waded through a world of backend issues, missing services, and failure to load the installed GStreamer (yes, I have the dev package installed too). It seems like it shouldn't have to come to rebuilding everything from source - why have a Linux installer if critical features like sound are broken as a result of using it? Regardless of following every forum post and hint I could find online to try and solve this, I am still unable to play any sounds whatsoever.
I find this massively disappointing. Why shouldn't sound work out of the box? I mean, come on... pygame can play sounds right off, and a platform as major as Linux should have support for playing sound. So now I've come to the point of frustration where I'm actually taking the time to write this out. So my question: "What exactly do I have to do to get a simple .wav or .mp3 file to play in either a Qt 4 or 5 application on Linux?" For how many people seem to be having this exact same issue, there doesn't seem to be much in the way of a solution: there are a lot of people asking, very few answers, and a lot of "fixes" that fix nothing.
SO FRUSTRATED!
- Chris Kawa Moderators
Hi, welcome to the forum.
Maybe a silly question, but are you sure the path to the file is correct? One of the reasons the "error decoding source" message shows up is that the source file is not found.
Anyway, I'm not a Linux guy, but I have a Ubuntu VM so I just tried this (Qt 5.5.1 from the online installer):
#include <QApplication>
#include <QPushButton>
#include <QSound>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QPushButton button;
    button.show();
    QObject::connect(&button, &QPushButton::clicked, []{
        QSound::play("./test.wav");
    });
    return a.exec();
}
I didn't install anything extra, no additional setup. I just compiled and ran it, and it played the sound "out of the box".
I have my code so far.
I forgot to add that the game must be two to four players, so if anyone can help me with that, I'd appreciate it; I've tried to handle it with the deck array. There is no betting, checking, etc., only checking to see which hand wins, and I need something that allows the game to be 2 to 4 players.
Would it be better to put a multidimensional array for the two to four players and the cards in the base class (Card) or the derived class (Poker)?
Would I need a toString method, an equals method, accessor and mutator methods, and constructors?
Note: this is the base class.
The code doesn't exactly work; if you have any suggestions on how to make it work I would greatly appreciate it.
Also, if you have any suggestions for the derived class (the Poker class), I would really appreciate it.
Thank you for any help.
import java.util.Random;

public class Card_Class
{
   private int rank;
   private int suit;

   public void makeCards()
   {
      // Class invariant: deck of cards, shuffles deck, names ranks and suits

      int[] deck = new int[52];   // deck of cards

      for (int i = 0; i < 52; i++)   // make a new deck
      {
         deck[i] = i;
      }

      Random generating = new Random();   // create object of Random

      for (int i = 0; i < 100; i++)   // shuffle the deck by swapping two random cards 100 times
      {
         int randPos1 = generating.nextInt(52);   // generate two positions
         int randPos2 = generating.nextInt(52);

         int temp = deck[randPos1];   // swap the two cards to shuffle the deck
         deck[randPos1] = deck[randPos2];
         deck[randPos2] = temp;
      }

      int rank;
      int suit;
      String output;

      for (int i = 0; i < 5; i++)   // deal and print the first five cards
      {
         rank = (deck[i] % 13) + 1;
         suit = deck[i] / 13;

         if (rank == 1)
            output = "Ace of ";
         else if (rank == 11)
            output = "Jack of ";
         else if (rank == 12)
            output = "Queen of ";
         else if (rank == 13)
            output = "King of ";
         else
            output = rank + " of ";

         if (suit == 0)
            output += "Spades";
         else if (suit == 1)
            output += "Clubs";
         else if (suit == 2)
            output += "Hearts";
         else
            output += "Diamonds";

         System.out.println(output);
      }
   }
}
This post has been edited by java23: 25 May 2009 - 06:26 AM
Carsten Ziegeler wrote:
> While cforms is excellent for developing form based applications, it has
> some (minor) problems if Cocoon is used in a portal environment, be
> it the Cocoon portal or Cocoon used as a JSR 168 portlet in a different
> portal container.
> One of the problems is the uniqueness of ids in the resulting html: each
> form and input field gets an id based on its definition in the model.
> While this works perfectly in a standalone environment this might lead
> to problems with portals: the portlet may be displayed more than once or
> the generated ids might interfere with ids generated by other portlets.
> Therefore a portal usually provides kind of a namespace for each
> portlet. Fortunately, CForms based applications can make use of such
> namespaces if you set the id of the form to this namespace. This will
> then lead to unique ids as the id of the form is prefixed to each input
> field (the mechanism is in fact more general).
> The only drawback is that you have to specify this id in the model. And
> this means that the model is dynamic as each portlet instance must have
> its own model as each instance must have a different id for the form.
> This works but of course the form model is never cached.
>
> To make cforms more portal friendly I suggest extending the form
> creation a little bit by:
> - adding a createInstance(String formId) method to the FormDefinition
> - adding createForm(source, String formId) methods to the FormManager
> - adding a Form(formDefinition, formId) method to form.js
>
> WDYT? Or is there a better solution?
>
What about simply adding Form.setId(String formId)?
Sylvain
--
Sylvain Wallez
Apache Software Foundation Member
Remnant
Posted June 5, 2008

Hi guys,
I'm attempting to access the information in an email sent as an attachment. I have been using a script to access the body of an email for a while now, but due to a change in process out of my control, I am now receiving the emails I need to access as attachments, and I can't seem to access the body property of these. What I have been using is:

   $myoutlook = ObjGet("", "outlook.application")
   $myNameSpace = $myoutlook.GetNamespace("MAPI")
   $notices = $myNameSpace.getdefaultfolder(6).folders("notices")
   for $i = 1 to $notices.items.count
      $body = $notices.items($i).body
      ; Do stuff with the body text
   next

I now need to access the attachments of the email, and I can't seem to treat them as an email (even though they are emails):

   $notices.items(1).attachments(1).body

does not work. Any ideas?
Static Linking still expects dll
Hi.
First of all, I am no making/linking expert and have never worked on big projects. I want to use OpenCV 2.4.5 with Visual C++ 2010 Express on Windows 7 (64-bit) just for reading/writing images (*.tiff, *.jpg, ...) to/from arrays. Manipulation of these arrays will be self-made, for educational purposes.
I followed the setup tutorial without setting the environmental variables (I want to write the full path manually).
- "...\opencv\build\include" as an additional include folder (not as told in the tutorial, which I think contains an error in this path).
- "...\opencv\build\x86\vc10\lib" as additional lib folder
- "opencv_core245d.lib" and "opencv_highgui245d.lib" as additional dependencies
- code contains "#include<opencv\cv.h> #include<opencv\highgui.h> using namespace cv" and in the main function "Mat img; img = imread ("test.tiff", CV_LOAD_IMAGE_COLOR);"
The building process works, but on startup the program expects the opencv_core245d.dll.
Why is this? I also tried the vc10\staticlib folder (with libjpgd.lib, libtiffd.lib, and all the others), but there were so many linking errors. What's the difference between the vc10\lib and staticlib files? And what's the difference between the opencv and opencv2 include files?
Thanks for any help!
Section 5.1
Objects, Instance Methods, and Instance Variables
OBJECT-ORIENTED PROGRAMMING (OOP) represents an attempt to make programs more closely model the way people think about and deal with the world. In the older styles of programming, a programmer who is faced with some problem must identify a computing task that needs to be performed in order to solve the problem. Programming then consists of finding a sequence of instructions that will accomplish that task. But at the heart of object-oriented programming, instead of tasks we find objects: entities that have behaviors, that hold information, and that can interact with one another. Programming consists of designing a set of objects that somehow model the problem at hand. We have, in fact, already worked with non-static subroutines to some extent, and at the time it didn't seem to make much difference: We just left the word "static" out of the subroutine definitions!
I have said that classes "describe" objects, or more exactly that the non-static portions of classes describe objects. But it's probably not very clear what this means. The more usual terminology is to say that objects belong to classes, but this might not be much clearer. (There is a real shortage of English words to properly distinguish all the concepts involved. An object certainly doesn't "belong" to a class in the same way that a member variable "belongs" to a class.) From the point of view of programming, it is more exact to say that classes are used to create objects. A class is a kind of factory for constructing objects. The non-static parts of the class specify, or describe, what variables and subroutines the objects will contain. This is part of the explanation of how objects differ from classes: Objects are created and destroyed as the program runs, and there can be many objects with the same structure, if they are created using the same class.
Consider a simple class whose job is to group together a few static member variables. For example, the following class could be used to store information about the person who is using the program:

     class UserData {
         static String name;
         static int age;
     }
In a program that uses this class, there is only one copy of each variable in the class, UserData.name and UserData.age. The class, UserData, and the variables it contains exist as long as the program runs. Now, consider a similar class in which the variables are not static:

     class PlayerData {
         String name;
         int age;
     }

In this case, there can be many objects created using the class, and each of those objects will have its own variables called name and age. A program might use this class to store information about multiple players in a game. Each player has a name and an age. When a player joins the game, a new PlayerData object can be created to represent that player. If a player leaves the game, the PlayerData object that represents that player can be destroyed. A system of objects in the program is being used to dynamically model what is happening in the game. You can't do this with "static" variables!
In Section 3.7, the term "method" was introduced for subroutines that are contained in objects; such subroutines are called instance methods, just as the variables that objects contain are called instance variables. By the way, static member variables and static member subroutines in a class are sometimes called class variables and class methods, since they belong to the class itself, rather than to instances of that class. This terminology is most useful when the class contains both static and non-static members.
So far, I've been talking mostly in generalities, and I haven't given you much idea what you have to put in a program if you want to work with objects. Let's look at a specific example to see how it works. Consider this extremely simplified version of a Student class, which could be used to store information about students taking a course:

     class Student {

         String name;                  // Student's name.
         double test1, test2, test3;   // Grades on three tests.

         double getAverage() {   // compute average test grade
             return (test1 + test2 + test3) / 3;
         }

     }   // end of class Student
None of the members of this class are declared to be static, so the class exists only for creating objects. This class definition says that any object that is an instance of the Student class will include instance variables named name, test1, test2, and test3, and it will include an instance method named getAverage(). The names and tests in different objects will generally have different values. When called for a particular student, the method getAverage() will compute an average using that student's test grades. Different students can have different averages. (Again, this is what it means to say that an instance method belongs to an individual object, not to the class.)
In Java, a class is a type, similar to the built-in types such as int and boolean. So, a class name can be used to specify the type of a variable in a declaration statement, the type of a formal parameter, or the return type of a function. For example, a program could define a variable named std of type Student with the statement
Student std;

In Java, no variable can ever hold an object; a variable can only hold a reference to an object. When you use a variable of a class type, the computer uses the reference in the variable to find the actual object.
Objects are actually created by an operator called new, which creates an object and returns a reference to that object. For example, assuming that std is a variable of type Student, declared as above, the assignment statement
std = new Student();
would create a new object which is an instance of the class Student, and it would store a reference to that object in the variable std. The value of the variable is a reference to the object, not the object itself. It is not quite true, then, to say that the object is the "value of the variable std" (though sometimes it is hard to avoid using this terminology). It is certainly not at all true to say that the object is "stored in the variable std." The proper terminology is that "the variable std refers to the object," and I will try to stick to that terminology as much as possible.
So, suppose that the variable std refers to an object belonging to the class Student. That object has instance variables name, test1, test2, and test3. These instance variables can be referred to as std.name, std.test1, std.test2, and std.test3. For example, a program might include the lines

     System.out.println("Hello, " + std.name + ".  Your test grades are:");
     System.out.println(std.test1);
     System.out.println(std.test2);
     System.out.println(std.test3);
This would output the name and test grades from the object to which std refers. Similarly, std can be used to call the getAverage() instance method in the object by saying std.getAverage(). To print out the student's average, you could say:

     System.out.println( "Your average is " + std.getAverage() );
More generally, you could use std.name any place where a variable of type String is legal. You can use it in expressions. You can assign a value to it. You can even use it to call subroutines from the String class. For example, std.name.length() is the number of characters in the student's name. Let's look at a sequence of statements that work with objects:

     Student std, std1,    // Declare four variables of
             std2, std3;   //   type Student.

     std = new Student();    // Create a new object belonging
                             //   to the class Student, and
                             //   store a reference to that
                             //   object in the variable std.

     std1 = new Student();   // Create a second Student object
                             //   and store a reference to
                             //   it in the variable std1.

     std2 = std1;            // Copy the reference value in std1
                             //   into the variable std2.

     std3 = null;            // Store a null reference in the
                             //   variable std3.

     std.name = "John Smith";    // Set values of some instance variables.
     std1.name = "Mary Jones";

          // (Other instance variables have default
          //  initial values of zero.)
After the computer executes these statements, the situation in the computer's memory looks like this:
This picture shows variables as little boxes, labeled with the names of the variables. Objects are shown as boxes with round corners. When a variable contains a reference to an object, the value of that variable is shown as an arrow pointing to the object. The variable std3, with a value of null, doesn't point anywhere. The arrows from std1 and std2 both point to the same object. This illustrates a Very Important Point:
When one object variable is assigned
to another, only a reference is copied.
The object referred to is not copied.
When the assignment "std2 = std1;" was executed, no new object was created. Instead, std2 is set to refer to the same object that std1 refers to. This has some consequences that might be surprising. For example, std1.name and std2.name refer to exactly the same variable, namely the instance variable in the object that both std1 and std2 refer to. After the string "Mary Jones" is assigned to the variable std1.name, it is also true that the value of std2.name is "Mary Jones". There is a potential for a lot of confusion here, but you can help protect yourself from it if you keep telling yourself, "The object is not in the variable. The variable just holds a pointer to the object."
You can test objects for equality and inequality using the operators == and !=, but here again, the semantics are different from what you are used to. When you make a test "if (std1 == std2)", you are testing whether the values stored in std1 and std2 are the same. But the values are references to objects, not objects. So, you are testing whether std1 and std2 refer to the same object, that is, whether they point to the same location in memory. This is fine, if it's what you want to do. But sometimes, what you want to check is whether the instance variables in the objects have the same values. To do that, you would need to ask whether "std1.test1 == std2.test1 && std1.test2 == std2.test2 && std1.test3 == std2.test3 && std1.name.equals(std2.name)"
I've remarked previously that Strings are objects, and I've shown the strings "Mary Jones" and "John Smith" as objects in the above illustration. A variable of type String can only hold a reference to a string, not the string itself. It could also hold the value null, meaning that it does not refer to any string at all. This explains why using the == operator to test strings for equality is not a good idea. Suppose that greeting is a variable of type String, and that the string it refers to is "Hello". Then would the test greeting == "Hello" be true? Well, maybe, maybe not. The variable greeting and the String literal "Hello" each refer to a string that contains the characters H-e-l-l-o. But the strings could still be different objects, that just happen to contain the same characters. The function greeting.equals("Hello") tests whether greeting and "Hello" contain the same characters, which is almost certainly the question you want to ask. The expression greeting == "Hello" tests whether greeting and "Hello" contain the same characters stored in the same memory location.
The fact that variables hold references to objects, not objects themselves, has a couple of other consequences that you should be aware of. They follow logically, if you just keep in mind the basic fact that the object is not stored in the variable. The object is somewhere else; the variable points to it.
Suppose that a variable that refers to an object is declared to be final. This means that the value stored in the variable can never be changed, once the variable has been initialized. The value stored in the variable is a reference to the object. So the variable will continue to refer to the same object as long as the variable exists. However, this does not prevent the data in the object from changing. The variable is final, not the object. It's perfectly legal to say

     final Student stu = new Student();
     stu.name = "John Doe";   // Change data in the object;
                              // The value stored in stu is not changed.
Next, suppose that obj is a variable that refers to an object. Let's consider at what happens when obj is passed as an actual parameter to a subroutine. The value of obj is assigned to a formal parameter in the subroutine, and the subroutine is executed. The subroutine has no power to change the value stored in the variable, obj. It only has a copy of that value. However, that value is a reference to an object. Since the subroutine has a reference to the object, it can change the data stored in the object. After the subroutine ends, obj still points to the same object, but the data stored in the object might have changed. Suppose x is a variable of type int and stu is a variable of type Student. Compare:

     void dontChange(int z) {             void change(Student s) {
         z = 42;                              s.name = "Fred";
     }                                    }

     The lines:                           The lines:

        x = 17;                              stu.name = "Jane";
        dontChange(x);                       change(stu);
        System.out.println(x);               System.out.println(stu.name);

     output the value 17.                 output the value "Fred".

     The value of x is not                The value of stu is not
     changed by the subroutine,           changed, but stu.name is.
     which is equivalent to               This is equivalent to

        z = x;                               s = stu;
        z = 42;                              s.name = "Fred";
Published: 08 Oct 2010
By: Xianzhong Zhu
This article will serve as an elementary tutorial to help you quickly get started with the lightweight and open sourced dependency injector for .NET applications - Ninject.
The main reason driving me to write this article is Balder, the famous 3D engine targeting Silverlight games. As you may imagine, Ninject is widely used in the infrastructure of the Balder engine. In fact, in developing modern, large-scale applications, and especially their underlying architectures, dependency injection frameworks are usually required to help create loosely-coupled, highly-cohesive, and flexible designs. Ninject is one of these; it mainly targets the .NET/C# area.
So, this article will serve as an elementary tutorial to help you quickly get started with the lightweight, open-source dependency injector for .NET applications - Ninject. Hence, we'll not touch upon Silverlight or ASP.NET MVC, but work merely with the simplest console samples.
The development environments and tools we'll use in these series of articles
are:
1. Windows XP Professional (SP3);
2. .NET 3.5 (currently Ninject 2.0 does not support .NET
4.0);
3. Visual Studio 2010;
4. Ninject 2.0 for .NET 3.5
();
Design patterns, especially structural ones, often aim to resolve dependencies between objects by changing dependencies on concrete types into dependencies on abstractions. In normal development, if we find that clients rely on an object, we will usually abstract it into an abstract class or interface, so that the clients can get rid of their dependence on the specific type.
In fact, something in the above process has been ignored: who bears the responsibility for choosing the specific types behind the abstractions the client requires? You will find that, most of the time, this problem can be solved elegantly with some kind of creational pattern. But another problem arises: what if what you are designing is not specific business logic, but a public library or framework? In that case you are on the "service side": you do not call into the client's code; rather, the client hands you an abstract type, and how to hand the client back a concrete instance of that abstraction is another matter.
Next, let's consider a simple example.
Suppose the client application needs an object that can provide the System.DateTime type; whenever the time is needed, the client only needs to extract the year part from it. So, the initial implementation may look like the following.
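As a sketch (the class and member names here are illustrative, not from the original article), such an initial implementation might look like this:

```csharp
using System;

// Hypothetical client: it pulls the year straight from System.DateTime,
// so it is hard-wired to one concrete time source.
class ReportClient
{
    public int GetCurrentYear()
    {
        return DateTime.Now.Year;   // direct dependency on the concrete type
    }
}
```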
Later, for some reason, the developer finds that the precision of the date type that comes with the .NET Framework is not enough, and needs to provide other time sources, ensuring accuracy in the different functional modules through different TimeProviders. The question then becomes how changes to the TimeProviders will affect client applications; in fact, the clients only need an abstract method to obtain the current time. To achieve this, we can add an abstract interface, ITimeProvider, so that the clients depend only on the interface. Because this part of the client application only needs accuracy to the year, it can use a type called SystemTimeProvider. The new implementation is shown below.
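A sketch of this refactoring (the client's name and members are illustrative; the interface and provider names follow the text):

```csharp
using System;

// The abstraction the client will depend on.
public interface ITimeProvider
{
    DateTime CurrentTime { get; }
}

// A concrete source backed by System.DateTime; other, more precise
// providers can be added later without touching the client.
public class SystemTimeProvider : ITimeProvider
{
    public DateTime CurrentTime
    {
        get { return DateTime.Now; }
    }
}

// The client now depends only on ITimeProvider.
class ReportClient
{
    private readonly ITimeProvider _timeProvider;

    public ReportClient(ITimeProvider timeProvider)
    {
        _timeProvider = timeProvider;
    }

    public int GetCurrentYear()
    {
        return _timeProvider.CurrentTime.Year;
    }
}
```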
It seems that all the client-side follow-up processing now depends only on the abstract ITimeProvider. So is the problem solved? No: something still has to know about the specific ITimeProvider implementation. Therefore, you also need to add an object that chooses an instance of ITimeProvider and passes it to the client application; this object can be called the Assembler. The new structure is shown in Figure 1 below.
In conclusion, the Assembler's responsibilities are as follows:
1. Choose which concrete type implementing the abstraction (here, ITimeProvider) will be used;
2. Create an instance of that concrete type;
3. Pass the instance, typed as the abstraction, to the client application.
Below is one of the possible implementations of the Assembler.
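One possible sketch of the Assembler (the interface and provider are restated here so the block stands alone; the names follow the preceding discussion):

```csharp
using System;

public interface ITimeProvider { DateTime CurrentTime { get; } }

public class SystemTimeProvider : ITimeProvider
{
    public DateTime CurrentTime { get { return DateTime.Now; } }
}

// The Assembler is the single place that knows which concrete
// ITimeProvider implementation the rest of the application receives.
public static class Assembler
{
    public static ITimeProvider CreateTimeProvider()
    {
        // Swap the implementation here without touching any client code.
        return new SystemTimeProvider();
    }
}
```

A client would then obtain its provider via Assembler.CreateTimeProvider() instead of newing up a concrete type itself.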
Up to this point, you should realize the important role the Assembler plays. In fact, as you may have figured out, Ninject plays exactly the role of the Assembler above, and of course does far more than that.
Next, let's turn to the really interesting topic.
There are already several dependency injection libraries available in .NET circles. Then, why select
Ninject?
First, most of the other frameworks rely heavily on XML configuration files to instruct the framework on how to resolve the related components of your application. This may bring numerous disadvantages.
Second, most of the other frameworks often require you to add several assembly
dependencies to your project, which can possibly result in framework bloat. However, Ninject aims to keep
things simple and straightforward.
Third, by introducing an extremely powerful and flexible form of binding - contextual binding - Ninject can be aware of the binding environment and can change the implementation for a given service during activation.
Ninject 2.0 supports three patterns for injection: constructor injection, property injection, and method
injection. Next, let me introduce them by related examples one by one.
The most frequently-used pattern is constructor injection. When activating an instance of a type, Ninject will select one of the type's constructors by applying the following rules in order:
1. If one of the constructors is decorated with an [Inject] attribute, that constructor is used;
2. Otherwise, Ninject selects the constructor with the most parameters that it understands how to resolve;
3. Otherwise, the default parameterless constructor is used.
Next, let's take a related example.
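A minimal sketch of such an example (the Soldier, Sword, and WarriorModule names follow the text; the Hit and Attack method names are illustrative assumptions):

```csharp
using System;
using Ninject;
using Ninject.Modules;

public interface IWeapon
{
    void Hit(string target);
}

public class Sword : IWeapon
{
    public void Hit(string target)
    {
        Console.WriteLine("Killed the foemen using Sword");
    }
}

public class Soldier
{
    private readonly IWeapon _weapon;

    // Ninject supplies the bound IWeapon implementation here.
    public Soldier(IWeapon weapon)
    {
        _weapon = weapon;
    }

    public void Attack(string target)
    {
        _weapon.Hit(target);
    }
}

// Bindings for this sample are grouped into a module.
public class WarriorModule : NinjectModule
{
    public override void Load()
    {
        Bind<IWeapon>().To<Sword>();
    }
}

class Program
{
    static void Main()
    {
        IKernel kernel = new StandardKernel(new WarriorModule());
        Soldier soldier = kernel.Get<Soldier>();
        soldier.Attack("the foemen");
    }
}
```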
In the above code, the kernel plays the core role in the application. In most cases, we just
need to create an instance of the class StandardKernel with zero or more instances of the module
classes as the parameters. For this, please refer to the following definition of the constructor of the class
StandardKernel.
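Roughly, the constructor overloads in question look like this in Ninject 2.0 (a sketch with signatures only, bodies omitted):

```csharp
// StandardKernel accepts zero or more modules, optionally preceded by settings:
public StandardKernel(params INinjectModule[] modules)
public StandardKernel(INinjectSettings settings, params INinjectModule[] modules)
```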
Next, to request an instance of a type (mostly abstract, such as an interface or abstract
class) from Ninject, we just need to call one of the related Get() extension methods. Here we
used the following:
   Soldier soldier = kernel.Get<Soldier>();
As for locating the concrete type to be injected into the Soldier instance, this is accomplished in the module WarriorModule through the various Bind<T>().To<T>() methods.
Please notice that only one constructor can be marked with an [Inject] attribute. Putting an [Inject] attribute on more than one constructor will result in Ninject throwing a NotSupportedException the first time an instance of the type is requested.
Moreover, in Ninject 2.0 the [Inject] attribute can be left off completely.
So, most of your code doesn't need to be aware of the existence of Ninject, and therefore you won't need to reference the Ninject-related namespaces/libraries. This is most useful when we cannot access the target source code but still need to inject dependencies into it.
Next, let's look at another kind of injection – property
injection.
To make a property injected you must annotate it with [Inject]. Unlike the constructor
injection rule, you can decorate multiple properties with an [Inject] attribute. However, the
order of property injection is not deterministic, which means there is no way to tell in which order Ninject
will inject their values in. Let's look at an example.
[Inject
In the above code, the property Weapon of the class Soldier is decorated with the annotation
[Inject]. So, in the Main function (the client-side application) by defining the bold line,
wherever the IWeapon interface appears it is identified as the concrete class
Sword.
What's more, the module is not defined as the previous sample. In fact, in Ninject 2.0,
the modules become optional – we can directly register bindings (still using the Bind<T>
().To<T>() methods) directly on the kernel.
Of course, the running-time result is easy to
image. It is: Killed the foemen using Sword.
How about the method inject? It's also easy to
understand.
The following gives the method injection related key code.
As you've seen, the method Arm is annotated with [Inject] in the Soldier class.
So later, by invoking kernel.Bind<IWeapon>().To<Gun>(); where the interface IWeapon
(usually called a service type) is depended upon ( in the method Arm ), the concrete class Gun (also named an
implementation type) is injected.
kernel.Bind<IWeapon>().To<Gun>();
Also, above we've added the unbind test. This is more easily
implemented by calling kernel.Unbind<IWeapon>();. After that, the service type IWeapon
becomes another concrete class – Sword.
kernel.Unbind<IWeapon
Now, let's, as usual, look at the running-time snapshot, as
shown in Figure 3 below.
With Ninject, the type bindings are typically collected into groups called modules, each of which
represents an independent part of the application. To create modules, you just need to implement the
INinjectModule interface. But, for simplicity, under most circumstances we should just extend the
NinjectModule class. You can create as many modules as you'd like, and pass them all to the kernel's
constructor. See the sample sketch below.
That's it. Starting from the next section, let's continue to discuss some more advanced
samples.
What is a provider? Why do we mention it? Virtually, as you've already seen, the type binding plays a
significant role in Ninject. Rather than simply casting from one type to another, bindings are actually from
a service type to a provider. A provider is an object that can create instances of a specified type, which is
in some degree like a factory.
In Ninject 2.0, there are three built-in providers (defined in the
namespace Ninject.Activation.Providers): CallbackProvider, ConstantProvider, and StandardProvider, the most
important of which should be StandardProvider. In the previous example concerning the binding from IWeapon to
Sword, we in fact declared a binding from IWeapon to StandardProvider. All of the other related stuffs happen
behind the scenes; we at most time do not care about it. However, sometimes you might want to do some sort of
complex custom initialization to your objects, rather than letting Ninject work its magic. In this case, you
can create your own provider and bind directly to it. Let's create a related example, as shown in Listing 9
below.
Please notice the bold lines in the above code. First, we've defined a generic custom provider
named SwordProvider<T>. In fact, the really important member is the CreateInstance
method. However, for simplicity, we did nothing special but return the Value property. The second
part of the bold lines is the most vital. As you no doubt have doped out, the functionality equals to the
following (at least in this sample):
CreateInstance
Anyway, the custom provider do provide us a free place to do some sort of complex custom
initialization to our objects.
A lighter weight alternative to writing IProvider implementations is to bind a service to a delegate
method. Listing 10 gives a related sample.
The most interesting part should be the bold line. Here, I want to leave it to you to examine
the details.
Finally, let's switch to the most important binding – contextual binding related
topic.
One of the more powerful (and complex) features of Ninject is its contextual binding system. Up until this
point, we've only been talking about default bindings - bindings that are used unconditionally, in any
context.
In this section, we are going to look at the contextual binding. First of all, please check
out the code below.
The above code gives three cases related to contextual binding. Please notice the bold lines
inside the three modules. In the ToConstantServiceModule module, we achieve the target of binding the service
(the interface ITester here) to the specified constant value. The second module SingletonServiceModule
defines a typical Singleton case. Note, In Ninject 2.0, the annotation [Singleton] is no more
supported; instead, it suggests using the following new syntax:
[Singleton]
Here, the class IocTester is defined as a Singleton.
Finally, in the third module
WithConstructorArgumentServiceModule, we defined a binding from ITester to IocTester, but required the
argument (in this case logger) of the constructor of the class IocTester should be overridden with the
specified value (an instance of the class FlatFileLogger here).
OK, there are still several more
contextual binding defined in Ninject. With these bindings, you can bring Ninject's power and flexibility
into full play. But the rest depond on you readers to continue digging.
As addressed previously, the motive drives me to write this article mainly results from the Balder engine.
In my experience, to gain a thorough understanding with the engine, you have to make clear most of the
details inside the Ninject framework. As you've known, the architecture design of the Balder engine depends
heavily upon Ninject. In the previous samples, I've only show you the textbox styled routine to utilize
Ninject, while in fact you can arm yourself with all you learn to reorganize your targets. Figure 4 indicates
the class diagram for the PlatformKernel class in Balder, which greatly enhances and simplifies the role of
StandKernal in Ninject.
Next, based upon the custom PlatformKernel class, Balder broke the above textbook-styled way to use
Ninject 2.0 in its own way. Listing 12 below gives one of the most important binding definitions in the the
PlatformKernel class.
As said above, in Ninject 2.0 we can now register bindings directly on the kernel; modules
become optional. The above code also proves this, doesn't it?
Well, now you may also have understood
why I mentioned the Balder engine - to master Ninject and to put it into your real development, this article
is far from the real requirements – it just scratches the surface of it.
As pointed out at the beginning, this article mainly aims to serve as one of the most elementary tutorials
concerning the Ninject framework. In fact, there are far more than those covered here. So, in your later
application development, such as those related to WPF, Silverlight, ASP.NET MVC, etc. you should do more
research into Ninject. And it's not until then can it become your own Swiss army knife in your | http://dotnetslackers.com/articles/csharp/Get-Started-with-Ninject-2-0-in-C-sharp-Programming.aspx | CC-MAIN-2014-52 | refinedweb | 2,286 | 56.55 |
**fixed - see comment**
In Yelp, to authenticate with my API key I need to add to the header of the request:
Authorization: Bearer <YOUR API KEY>
So my backend code looks like this:
import {fetch} from 'wix-fetch'; const key = "xxxxxxxxx"; //my API key Authorization: Bearer key; let url; export function yelp(bizId) { url = "" + bizId; return fetch (url, { method: 'get' }) .then( (httpResponse) => { if (httpResponse.ok) { return httpResponse.json(); } }) .catch( (err) => { return err; }); }
but I'm getting a Parsing Error: Unexpected Token key on
Authorization: Bearer key
I tried to research if I'm using the "Authorization: Bearer" incorrectly, but this seems to be the right way. Any idea why I'm getting this Parsing Error?
Wix has a special syntax for the header inside the fetch function:
by using the
headers: {
"": ""
},
syntax I was able to fix the parsing issue.
More on this in the documentation for the fetch function. | https://www.wix.com/corvid/forum/community-discussion/parsing-error-when-using-authorization-bearer-key | CC-MAIN-2019-47 | refinedweb | 150 | 59.23 |
Generating Unique Application IDs in .NET
Many times in an application, you need to generate a unique ID. In a previous article I discussed generating a fingerprint for the purposes of identifying a web or network client of our application. Today, I'll go one step further and show methods for generating a unique ID so that your application can identify itself.
Why Use a Unique ID?
Unique IDs are not just useful for identifying people using your app. Trust is very often a two way street, and just as you want to be sure that the client using your app is recognized by some method, clients often need to trust the application they are dealing with.
In the case of some desktop software, you might want to be sure that the application is running on the same computer that it was originally installed. This is good for the purposes of anti piracy or to simply identify that the application is running on an authorized workstation.
Whatever the reason, there are more than a handful of ways to generate an ID that your app can use based on facilities available on the actual system and or hardware your application is running on.
What Methods Are Available?
The methods that are available to use depend entirely on what systems you are running on. For example, the finger printing code I showed in a previous article could be applied both ways, if you're a web application you could use the same items of data available to generate an ID that you then give back to the client, which the client can then use to ID you.
The problem with this is the same as for using it to ID the client. Should anything change on the client, then you might find yourself in a situation where the client no longer trusts you, because of something that happened at their end.
From your own side of the equation, you can simply just generate a GUID. .NET does in fact have a built in data type for this, so it's as easy as just creating a new GUID and using it as you see fit.
What do you do if you want to tie it to a physical system, namely the system your application is currently running on? In this case the first place to look is the same place that Microsoft looks to generate operating system ID's to protect against piracy, and that's at the hardware.
What Hardware Can Be Used?
When it comes to what hardware can be used, what is the truth? All of it.
Seriously.
What you actually end up using however depends very much on how much work you’re willing to put into generating your ID. For instance, most CPUs since the original first generation Pentium one (and even some before going as far back as the 486) introduced something called the 'CPUID instruction'. This is a simple single machine code instruction that returns a unique ID and CPU type burned into every CPU at the manufacturing level.
Unfortunately, from a Windows perspective, to access this would generally mean using unmanaged code, and breaking out to raw assembler to get at it. Not an issue for most developers who are used to doing mixed mode dev using c++ and .NET, but a little out of reach for the average web application programmer used to doing ASP.NET MVC and/or WPF.
Don't let the above scare you though, there's one item of hardware that's very easy to get access to from general application code, and that's the various networking interfaces available.
If you open up a command prompt by typing 'cmd' into your run box, then run 'ipconfig' by typing the following into the command window and pressing return:
ipconfig /all
You should get quite a screen full of text, amongst which you should see something like this:
Ethernet adapter LAN: Connection-specific DNS Suffix . : xxxxxxxxxxxxxx.xxxxx. Description . . . . . . . . . . . : Marvell Yukon 88E8056 PCI-E Gigabit Ethernet Controller Physical Address. . . . . . . . . : 00-00-00-00-00-00 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : ffff::fff:ffff:ffff:ffff%10(Preferred) IPv4 Address. . . . . . . . . . . : xxx.xxx.xxx.xxx(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : xxx.xxx.xxx.xxx DHCPv6 IAID . . . . . . . . . . . : 0000000000 DHCPv6 Client DUID. . . . . . . . : 11-11-11-11-11-11-11-11-11-11-11-11-11-11 DNS Servers . . . . . . . . . . . : xxx.xxx.xxx.xxx NetBIOS over Tcpip. . . . . . . . : Enabled
I've censored some of the details above just to protect the systems I'm typing this on, but you should in general get a similar output.
In the output above you should be able to see there's an item called 'Physical Address'; this is the item we’re interested in. Every network interface, no matter what its IP address (or ATM Token, Pipe address, Hub ID, etc.) has a physical hardware address, usually burned into the chips on the device itself. This address is more commonly referred to as the 'MAC Address' and is 99% of the time unique to the device it's being used on.
Why only 99%?
In recent times, we've started building virtual network devices, and devices that have soft settings for their MAC Address, enabling them to be changed to a user supplied value. For our purposes however, most of the time you'll not find this value gets changed (or spoofed as it's more commonly known) unless the user of the machine is up to something they shouldn't be.
Also, another benefit to this is that anything that uses any kind of network style interface will have to generate a MAC Address.
This often means that you can look for things like bluetooth adapters, wifi dongles and even tethered mobile phones. Many VM systems such as 'VM Ware' and 'Virtual Box' also install pseudo network adapters that show up as a legitimate interface, so you could actually generate an ID that ties your software to the fact that one of these products are installed on the platform in question.
How Do I Read a MAC Address?
I’ve covered enough theory. At this point, you likely want to know how to read a MAC Address. Like anything in software development, there are several ways to achieve this; for the purposes of this post however, I'll just show you the easiest.
.NET has an object in the 'System.Net' namespace called 'NetworkInformation', inside the network information class exists a static method that will obtain all the network information for anything in the system that presents a network like interface.
Calling this static function, will provide you with an array of 'NetworkInterface' objects, which you can then further interrogate to get the information you need.
In our case, something like the following:
using System; using System.Collections.Generic; using System.Linq; using System.Net.NetworkInformation; namespace MyApp { public class Class1 { public string GetMacAddress() { string result = string.Empty; using(List<NetworkInterface> nics = NetworkInterface.GetAllNetworkInterfaces().ToList()) { result = nics.Select(nic => nic.GetPhysicalAddress()).FirstOrDefault(mac => !string.IsNullOrEmpty(mac.ToString())).ToString(); } return result; } } }
What we’re doing here is to get the array of 'NetworkInterface' objects available, convert it to a List<T>, then 'Using' that list (so that we garbage collect it correctly afterwards) we use Linq to Objects to select all, then find the first one in that selection with a non null, or non empty MAC Address.
The result is a string representing the first non empty mac/physical address present in the system the app is running on.
You could further refine this by using a further check on the type and/or description of the interface object, to only look for certain adapters.
From Here...
In a future post, we'll look at other ways you might be able to use hardware attached to a machine such as hard drive serial numbers and smart card readers, in the mean time you might want to explore the interface object, and see what other items are available that you might be able to use.
If you have any ideas for posts for this series, please reach out to me on twitter '@shawty_ds' or come and find me in the Linked .NET users group (Lidnug) on the Linked-In social network and I'll do my best to include them.
There are no comments yet. Be the first to comment! | http://www.codeguru.com/columns/dotnet/generating-unique-application-ids-in-.net.htm | CC-MAIN-2015-22 | refinedweb | 1,408 | 61.87 |
I'm a python newbie, currently in the process of moving parts of my work from Matlab to python + Numeric. When using python interactively, I often need something like the Matlab "whos" command. In Matlab "whos" returns a list of all variables: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% >> a = 4; >> b = [1 2 3; 4 5 6]; >> whos Name Size Bytes Class a 1x1 8 double array b 2x3 48 double array Grand total is 7 elements using 56 bytes %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% is there any way of doing something similar in python? A recent post by Michael Chermside, in the thread "How to get the 'name' of an int" suggests the following for the slightly different task of listing names and values: for name, value in locals().iteritems(): print "%s = %s" % (name, value) any suggestions of how I could change this to name, type? On a related note, the Matlab commands "clear all; pack" clears all variables in the current namespace and starts garbage collection. Is there a corresponding python idiom? -- thomas knudsen - kms, copenhagen, denmark - | https://mail.python.org/pipermail/python-list/2003-March/205708.html | CC-MAIN-2016-30 | refinedweb | 168 | 67.28 |
DFS from source, no extra data types. solution in Clear category for Node Disconnected Users by JamesArruda
def visible_nodes(curr, grid, off, viz):
for a, b in grid:
if a == curr and b not in off:
viz.add(b)
visible_nodes(b, grid, off, viz)
def disconnected_users(net, users, source, crushes):
viz = set(source)
if source in crushes:
return sum(users.values())
visible_nodes(source, net, crushes, viz)
return sum(users[x] for x in users if x not in viz)
if __name__ == '__main__':
#These "asserts" using only for self-checking and not necessary for auto-testing
assert disconnected_users([
['A', 'B'],
['B', 'C'],
['C', 'D']
], {
'A': 10,
'B': 20,
'C': 30,
'D': 40
},
'A', ['C']) == 70, "First"
assert disconnected_users([
['A', 'B'],
['B', 'D'],
['A', 'C'],
['C', 'D']
], {
'A': 10,
'B': 0,
'C': 0,
'D': 40
},
'A', ['B']) == 0, "Second"
assert disconnected_users([
['A', 'B'],
['A', 'C'],
['A', 'D'],
['A', 'E'],
['A', 'F']
], {
'A': 10,
'B': 10,
'C': 10,
'D': 10,
'E': 10,
'F': 10
},
'C', ['A']) == 50, "Third"
print('Done. Try to check now. There are a lot of other tests')
Sept. 14, 2018
Forum
Price
Global Activity
ClassRoom Manager
Leaderboard
Coding games
Python programming for beginners | https://py.checkio.org/mission/node-disconnected-users2/publications/JamesArruda/python-3/dfs-from-source-no-extra-data-types/share/4ad1e6d336144bd2c6236c236e87feed/ | CC-MAIN-2021-17 | refinedweb | 194 | 62.31 |
The Monad.Reader/Issue2/FunWithLinearImplicitParameters
From HaskellWiki
1 Introduction
The following sections provide a short introduction into the various concepts our implementation uses. The code presented here is no longer available as an attachment. It has however been successfully tested with ghc-6.2.2 and ghc-6.4.
1.1 Shift and ResetThe
reset (1 + shift (\k -> k 1 + k 2)) :: Int
--
1.2 Monadic ReflectionMonadic reflection [4] enables us to write monadic code in direct style.
> reify (reflect [0,2] + reflect [0,1]) :: [Int]
and
> liftM2 (+) [0,2] [0,1]
Explicitly (but hiding the wrapping necessary for the ContT monad transformer), Wadler's transformation is as follows
embed :: Monad m => m a -> (forall r. (a -> m r) -> m r) embed m = \k -> k =<< m project :: Monad m => (forall r. (a -> m r) -> m r) -> m a project f = f return
reflect m = shift (\k -> k =<< m) reify t = reset (return t)
Now let us have a closer look at the above example to see how it works operationally.
e = reify (reflect [0,2] + reflect [0,1])
Substituting the definitions, this becomes
e = reset (return (shift (\k -> k =<< [0,2]) + shift (\k -> k =<< [0,1])))
which simplifies to
e = reset [shift (\k -> k 0 ++ k 2) + shift (\k' -> k' 0 ++ k' 1)]
k = \x -> reset [x + shift (\k' -> k' 0 ++ k' 1)]
k = \x -> [x + 0] ++ [x + 1] = \x -> [x,x+1]
Therefore, as we expected, the whole expression evaluates to
e = k 0 ++ k 2 = [0,1] ++ [2,3] = [0,1,2,3]
1.3 Implicit Parameters
Implicit parameters [7] work very much like regular implicit parameters, but the type of the parameter is required to be an instance of the class
Possible uses are random number distribution, fresh name generation (if you donot mind the names becoming very long) or a direct-style [6]. In this article, they will be used to store a subcontinuation from an enclosing.
1.4 Unsafe Operations
The code below uses two unsafe operations Cite error: Closing </ref> missing for <ref> tag
infixr 0 `deepSeq`, $!! class DeepSeq a where deepSeq :: a -> b -> b ($!!) :: (DeepSeq a) => (a -> b) -> a -> b f $!! x = x `deepSeq` f x
2 Implementation
This section discusses the implementation of the monadic reflection library. It safely be skipped, especially the first two subsections are very technical.
2.1 Basic Declarations
2.2 Shift and Reset
2.3 Reflection and ReificationWith working
2.3.1 Reflecting the Cont Monad
reflectCont :: Cont r a -> Direct r a reflectCont (Cont f) = shift f reifyCont :: DeepSeq r => Direct r a -> Cont r a reifyCont e = Cont $ \k -> reset (k e)
callCC' :: DeepSeq r => ((a -> b) -> Direct r a) -> Direct r a callCC' f = reflectCont $ callCC $ \c -> reifyCont $ f $ reflectCont . c
callCC' :: ((forall b. a -> b) -> Direct r a) -> Direct r a callCC' f = shift $ \k -> k $ f (\x -> shift $ \_ -> k x)
In both versions, the expression
reset (callCC' (\k x -> k (x+)) 5) :: Int
2.3.2 Reflecting Arbitrary MonadsNow, implementing
--)
3 Interface
For quick reference, we repeat the type signatures of the most important library functions.
4 Resolving Ambiguities
The use of linear implicit parameters comes with a few surprises. The GHC manual [7] even writes.
4.1 Recursive FunctionsIndeed, omitting a type signature can sometimes result in a different behavior. Consider the following code, where
-- Without the explicit signature for k GHC does not infer a -- sufficiently general type. down 0 = [] down (n+1) = shift (\(k::Int -> [Int]) -> k n): down n * Main> reset (down 4) [3,3,3,3] -- wrong!
down' :: Int -> Direct [Int] [Int] {- ... -} * Main> reset (down' 4) [3,2,1,0] -- right!
4.2 Higher order functions
Implicit parameters are particularly tricky when functions using implicit parameters are passed to higher order functions. Consider the following example.
--!
map' :: (a -> Direct r b) -> [a] -> Direct r [b] {- Implementation as above -} foo = reify (map' f [1,2,3]) where {- ... -} * Main> foo [[-1,-2,-3],[-1,-2,3],[-1,2,-3],[-1,2,3],[1,-2,-3],[1,-2,3],[1,2,-3], [1,2,3]] -- right!
4.3 The Monomorphism Restriction
What should the expression
reify (let x = reflect [0,1] in [x,x+2,x+4])
*:
5 ExamplesWe now present some examples reflecting the
5.1 Lazy EvaluationThe use of monads in Haskell models an impure language with call-by-value semantics. This is not surprising as one motivation for the use of monads is the need to do IO. For IO, evaluation order is important and call-by-value makes evaluation order easier to reason about. For the
But such a lazy monadic behavior would be practical for other monads, too: The list monad is very susceptible to space leaks and unnecessary recomputation. The reflected list monad, however, is often closer to the desired behavior, as the following examples suggest.
--.
5.2 Filtering Permutations
As a typical problem where the lazy behavior of our implementation is advantageous, we consider a small combinatorial example: Find all permutations of
$(1,2,4,...,2^{n-1})$
such that all the sums of the initial sequences of the permutations are primes.
-- [9],]]
6 Further Ideas
This section discusses some further directions in which the ideas of this article might be extended.
6.1 Denotational SemanticsT the interpreter is built upon is an
newtype Eval s a = Eval { runEval :: ContT Int (ST s) a } deriving (Functor, Monad)
The interpreter maps the source language's expressions into the following universal type.
type U s = Eval s (Ref s `Either` U' s) data U' s = Int { runInt :: Int } | Fun { runFun :: U s -> U s } | List { runList :: Maybe (U s, U s) } newtype Ref s = Ref { unRef :: STRef s (U' s `Either` U s) }
--.
6.2 isomorphisms of type
As we showed in this article, Haskell's type system is almost ready to express these differences on the type level; the only remaining problem is that forall-hoisting )
reflect :: m a -> (<m> => a) reify :: Monad m => (<m> => a) -> m a foo :: <[]> => Int foo = reflect [0,2] + reflect [0,1] bar :: [Int] bar = reify foo
more strictly into.
7 ConclusionDo not take this too seriously: Our code heavily relies on unsafe and experimental features; time and space usage are increased by the suboptimal encoding of continuations and the recomputations; and the number of supported monads is limited by the
8.
9 References
- ↑ Olivier Danvy and Andrzej Filinski. "A Functional Abstraction of Typed Contexts". DIKU. DIKU Rapport 89/12. July 1989. Available online:
- ↑ Chung-chieh Shan. "Shift to Control". 2004 Scheme Workshop. September 2004. Available online:
- ↑ R. Kent Dybvig, Simon Peyton-Jones, and Amr Sabry. "A Monadic Framework for Subcontinuations". February 2005. Available online:
- ↑ Andrzej Filinski. Representing monads. In Conference Record of POPL '94: 21st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Portland, Oregon, pages 446--457. Available online:
- ↑ Philip Wadler. "The essence of functional programming". Invited talk, 19'th Symposium on Principles of Programming Languages, ACM Press. January 1992. Available online:
- ↑ 6.0 6.1 Koen Claessen and John Hughes. "!QuickCheck: An Automatic Testing Tool for Haskell".
- ↑ 7.0 7.1 7.2 The GHC Team. "The Glorious Glasgow Haskell Compilation System User's Guide, Version 6.4". BR Linear Implicit Parameters: BR Implicit Parameters: BR Forall-Hoisting:
- ↑ Thomas J‰ger "Linear implicit parameters: linearity not enforced". Mailing list post.
- ↑ Michael Hanus [editor] "Curry. An Integrated Functional Logic Language". Available online:
- ↑ "Monadification as a Refactoring". | https://wiki.haskell.org/index.php?title=The_Monad.Reader/Issue2/FunWithLinearImplicitParameters&redirect=no | CC-MAIN-2016-30 | refinedweb | 1,231 | 55.64 |
user
fuser [options] [files | filesystems]
Identifies and outputs the process IDs of processes that are using the files or local filesystems. Each process ID is followed by a letter code: c if process is using file as the current directory; e if executable; f if an open file; m if a shared library; and r if the root directory. Any user with permission to read /dev/kmem and /dev/mem can use fuser, but only a privileged user can terminate another user's process. fuser does not work on remote (NFS) files.
If more than one group of files is specified, the options may be respecified for each additional group of files. A lone dash (-) cancels the options currently in force, and the new set of options applies to the next group of files. Like a number of other administrator commands, fuser is usually installed to the /sbin directory. You may need to add that directory to your path or execute the command as /sbin/fuser.
Options
Return all options to defaults.
Send signal instead of SIGKILL.
Display information on all specified files, even if they are not being accessed by any processes.
Request user confirmation to kill a process. Ignored if -k is not also specified.
Send SIGKILL signal to each process.
List signal names.
Expect files to exist on a mounted filesystem; include all files accessing that filesystem.
Set the namespace checked for usage. Acceptable values are file for files, udp for local UPD ports, and tcp for local TCP ports.
Silent.
User login name, in parentheses, also follows process ID.
Verbose.
Display version information.
More Linux resources from O'Reilly >> | http://oreilly.com/linux/command-directory/cmd.csp?path=f/fuser | CC-MAIN-2013-48 | refinedweb | 273 | 66.94 |
J.
If you would like to receive an email when updates are made to this post, please register here
RSS
You've been kicked (a good thing) - Trackback from DotNetKicks.com
Mark pointed out a mistake in the first section about multi-targetting. Indeed you can't query objects with LINQ against 2.0 because the extension attribute and extension methods are in System.Core.dll.
I really just thought there needed to be a clarification. You can query objects, but only if they have the standard query operators defined directly on them.
You're right, but I don't think people will find it that useful because the basic collections (array, list, etc) won't have those methods defined. At one point, I believe you could define your own ExtensionAttribute outside of System.Core.dll but it looks like this isn't possible in beta2.
You mention the var keyword as being one of the features you consider most useful. Don't you think this makes code less readable? Yes, you can simply hover over a variable and obtain the type information you need but then that means while reading through code I need to continually keep hovering over the variables in order to truly understand the code. What does var really buy you, a few keystrokes? I'll take readability over a few more typed characters anyday.
Dear vardoubter,
I tend to use var in cases when the type would be redundant. Consider for example,
Dictionary<string, string> myDictionary = new Dictionary<string, string>();
Compared to:
var myDictionary = new Dictionary<string, string>();
I find the second to be more readable because the type is named only once. Also, I think its more maintainable because I only need to change the type in one spot.
I initially had trepidation about 'var' and extension methods for exactly the same reason as you mention. Having used the features for a year now, however, I think its fairly hard to unintentionally abuse either at the expense of readability.
Jomo
Thanks for this. Very handy to know indeed! Expression trees are great!
vardoubter,
var is crucial for LINQ. The results/type returned from a query is anonymous; var makes it easy to use the result of a LINQ query without having to know its type. If you didn't have var LINQ would be really painful.
Here's a great overview of the goodies we can look forward to
You can use linq in .net2, just copy System.Core.dll to your Bin or GAC
There are several good new blogs from members of the community team. Nevertheless, the most important
There are several good new blogs from members of the Microsoft C# team. Nevertheless, the most important
Check this post for anonymous Types
As you can probably tell from the title of my last few posts I've been doing some work with LINQ over
So using the Target Framework option can one build the same project for mutiple frameworks just by chaning the framework, then build?
Then make an installer?
Jomo Fisher--A few weeks ago, a fellow C# programmer asked me what the biggest differences between programming
I've revisited this recently, and discovered that you CAN indeed declare your own ExtensionAttribute in beta 2 and still use extension methods while targetting v2.0. It just needs to be in the System.Runtime.CompilerServices namespace, and all will be happy. | http://blogs.msdn.com/jomo_fisher/archive/2007/07/23/the-least-you-need-to-know-about-c-3-0.aspx | crawl-002 | refinedweb | 569 | 63.8 |
from IPython.display import YouTubeVideo
YouTubeVideo('GlcnxUlrtek')
Last time, we decided to use gradient descent to train our Neural Network, so it could make better predictions of your score on a test based on how many hours you slept, and how many hours you studied the night before. To perform gradient descent, we need an equation and some code for our gradient, dJ/dW.
Our weights, W, are spread across two matrices, W1 and W2. We’ll separate our dJ/dW computation in the same way, by computing dJdW1 and dJdW2 independently. We should have just as many gradient values as weight values, so when we’re done, our matrices dJdW1 and dJdW2 will be the same size as W1 and W2.
Let's work on dJdW2 first. The sum in our cost function adds the error from each example to create our overall cost. We'll take advantage of the sum rule in differentiation, which says that the derivative of a sum equals the sum of the derivatives. We can move our sigma outside and just worry about the derivative of the inside expression first.
To keep things simple, we’ll temporarily forget about our summation. Once we’ve computed dJdW for a single example, we’ll add all our individual derivative terms together.
We can now evaluate our derivative. The power rule tells us to bring down our exponent, 2, and multiply. To finish our derivative, we’ll need to apply the chain rule.
The chain rule tells us how to take the derivative of a function inside of a function, and generally says we take the derivative of the outside function and then multiply it by the derivative of the inside function.
One way to express the chain rule is as the product of derivatives, this will come in very handy as we progress through backpropagation. In fact, a better name for backpropagation might be "don’t stop doing the chain rule. ever."
We’ve taken the derivative of the outside of our cost function - now we need to multiply it by the derivative of the inside.
Y is just our test scores, which won’t change, so the derivative of y, a constant, with respect to W two is 0! yHat, on the other hand, does change with respect to W two, so we’ll apply the chain rule and multiply our results by minus dYhat/dW2.
We now need to think about the derivative of yHat with respect to W2. Equation 4 tells us that yHat is our activation function of z3, so it will be helpful to apply the chain rule again to break dyHat/dW2 into dyHat/dz3 times dz3/dW2.
To find the rate of change of yHat with respect to z3, we need to differentiate our sigmoid activation function with respect to z.
Now is a good time to add a new Python method for the derivative of our sigmoid function, sigmoidPrime. Our derivative should be largest where our sigmoid function is the steepest, at the value z equals zero.
%pylab inline

#Import code from last time
from partTwo import *
Populating the interactive namespace from numpy and matplotlib
def sigmoid(z):
    #Apply sigmoid activation function to scalar, vector, or matrix
    return 1/(1+np.exp(-z))

def sigmoidPrime(z):
    #Derivative of sigmoid function
    return np.exp(-z)/((1+np.exp(-z))**2)

testValues = np.arange(-5,5,0.01)
plot(testValues, sigmoid(testValues), linewidth=2)
plot(testValues, sigmoidPrime(testValues), linewidth=2)
grid(1)
legend(['sigmoid', 'sigmoidPrime'])
<matplotlib.legend.Legend at 0x1ee27a92390>
We can now replace dyHat/dz3 with f prime of z 3.
Our final piece of the puzzle is dz3/dW2; this term represents the change of z, our third layer activity, with respect to the weights in the second layer.
Z three is the matrix product of our activities, a two, and our weights, w two. The activities from layer two are multiplied by their corresponding weights and added together to yield z3. If we focus on a single synapse for a moment, we see a simple linear relationship between W and z, where a is the slope. So for each synapse, dz/dW(2) is just the activation, a, on that synapse!
Another way to think about what the calculus is doing here is that it is "backpropagating" the error to each weight by multiplying by the activity on each synapse: the weights that contribute more to the error will have larger activations, will yield larger dJ/dW2 values, and will be changed more when we perform gradient descent.
We need to be careful with our dimensionality here, and if we’re clever, we can take care of that summation we got rid of earlier.
The first part of our equation, y minus yHat is of the same dimension as our output data, 3 by 1.
F prime of z three is of the same size, 3 by 1, and our first operation is scalar multiplication. Our resulting 3 by 1 matrix is referred to as the backpropagating error, delta 3.
We determined that dz3/dW2 is equal to the activity of each synapse. Each value in delta 3 needs to be multiplied by each activity. We can achieve this by transposing a2 and matrix multiplying by delta3.
What’s cool here is that the matrix multiplication also takes care of our earlier omission – it adds up the dJ/dW terms across all our examples.
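The dimension bookkeeping can be sanity-checked with a tiny NumPy sketch; the shapes below follow the 2-3-1 network used in this series, and the values are random placeholders:

```python
import numpy as np

a2 = np.random.rand(3, 3)      # layer-two activities: 3 examples x 3 hidden units
delta3 = np.random.rand(3, 1)  # backpropagating error: one row per example

# One matrix multiplication both applies dz3/dW2 and sums over all examples:
dJdW2 = np.dot(a2.T, delta3)
print(dJdW2.shape)  # (3, 1), the same shape as W2
```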
Another way to think about what's happening here is that each example our algorithm sees has a certain cost and a certain gradient. The gradient with respect to each example pulls our gradient descent algorithm in a certain direction. It's like every example gets a vote on which way is downhill, and when we perform batch gradient descent we just add together everyone's votes, call it downhill, and move in that direction.
We’ll code up our gradients in python in a new method, cost function prime. Numpy’s multiply method performs element-wise multiplication, and the dot method performs matrix multiplication.
# Part of NN Class (won't work alone, needs to be included in class as
# shown below and in partFour.py):
We have one final term to compute: dJ/dW1. The derivation begins the same way, computing the derivative through our final layer: first dJ/dyHat, then dyHat/dz3; these two terms taken together form our backpropagating error, delta3. We now take the derivative "across" our synapses; this is a little different from our job last time, when we computed the derivative with respect to the weights on our synapses.
There’s still a nice linear relationship along each synapse, but now we’re interested in the rate of change of z(3) with respect to a(2). Now the slope is just equal to the weight value for that synapse. We can achieve this mathematically by multiplying by W(2) transpose.
Our next term to work on is da(2)/dz(2) – this step is just like the derivative across our layer 3 neurons, so we can just multiply by f prime(z2).
Our final computation here is dz2/dW1. This is very similar to our dz3/dW2 computation, there is a simple linear relationship on the synapses between z2 and w1, in this case though, the slope is the input value, X. We can use the same technique as last time by multiplying by X transpose, effectively applying the derivative and adding our dJ/dW1’s together across all our examples.
Or:
Where:
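In symbols, with f' denoting our sigmoidPrime applied element-wise and a circle denoting element-wise multiplication, the two gradients we have derived are:

```latex
\delta^{(3)} = -(y - \hat{y}) \circ f'(z^{(3)}), \qquad
\frac{\partial J}{\partial W^{(2)}} = (a^{(2)})^{T}\,\delta^{(3)}
\\
\delta^{(2)} = \delta^{(3)}\,(W^{(2)})^{T} \circ f'(z^{(2)}), \qquad
\frac{\partial J}{\partial W^{(1)}} = X^{T}\,\delta^{(2)}
```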
All that’s left is to code this equation up in python. What’s cool here is that if we want to make a deeper neural network, we could just stack a bunch of these operations together.
# Whole Class with additions:
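Since the full listing lives in partFour.py, here is a sketch of the whole class assembled from the equations above; the 2-3-1 layer sizes match the network used throughout this series:

```python
import numpy as np

class Neural_Network(object):
    def __init__(self):
        # Hyperparameters for the 2-3-1 network in this series
        self.inputLayerSize = 2
        self.outputLayerSize = 1
        self.hiddenLayerSize = 3

        # Weights (parameters), initialized randomly
        self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
        self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)

    def forward(self, X):
        # Propagate inputs through the network
        self.z2 = np.dot(X, self.W1)
        self.a2 = self.sigmoid(self.z2)
        self.z3 = np.dot(self.a2, self.W2)
        yHat = self.sigmoid(self.z3)
        return yHat

    def sigmoid(self, z):
        # Apply sigmoid activation function
        return 1/(1+np.exp(-z))

    def sigmoidPrime(self, z):
        # Derivative of sigmoid function
        return np.exp(-z)/((1+np.exp(-z))**2)

    def costFunction(self, X, y):
        # J = 1/2 * sum((y - yHat)^2)
        self.yHat = self.forward(X)
        return 0.5*np.sum((y-self.yHat)**2)

    def costFunctionPrime(self, X, y):
        # Compute dJ/dW1 and dJ/dW2 for a given X and y
        self.yHat = self.forward(X)

        delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
        dJdW2 = np.dot(self.a2.T, delta3)

        delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
        dJdW1 = np.dot(X.T, delta2)

        return dJdW1, dJdW2
```

The X and y used in the cells below come from the partTwo import at the top of the notebook.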
So how should we change our W’s to decrease our cost? We can now compute dJ/dW, which tells us which way is uphill in our 9 dimensional optimization space.
NN = Neural_Network()
cost1 = NN.costFunction(X,y)
dJdW1, dJdW2 = NN.costFunctionPrime(X,y)
dJdW1
array([[-9.02802415e-03, -2.65075383e-03, 1.82516520e-03], [-1.41980752e-03, -8.44055638e-04, -4.69372384e-06]])
dJdW2
array([[-0.01186352], [-0.01336392], [-0.00239456]])
If we move this way by adding a scalar times our derivative to our weights, our cost will increase, and if we do the opposite, subtract our gradient from our weights, we will move downhill and reduce our cost. This simple step downhill is the core of gradient descent and a key part of how even very sophisticated learning algorithms are trained.
scalar = 3
NN.W1 = NN.W1 + scalar*dJdW1
NN.W2 = NN.W2 + scalar*dJdW2
cost2 = NN.costFunction(X,y)
print(cost1, cost2)
0.008160624162630813 0.00953953507158469
dJdW1, dJdW2 = NN.costFunctionPrime(X,y)
NN.W1 = NN.W1 - scalar*dJdW1
NN.W2 = NN.W2 - scalar*dJdW2
cost3 = NN.costFunction(X, y)
print(cost2, cost3)
0.00953953507158469 0.007923819041724567
Next time we'll perform numerical gradient checking to make sure our math is correct.
Is there a convenient way to convert a vector to a list?
If not, is there a convenient way to sort the items in a vector?
Thanks in advance.
Hmmm, not sure what convenient means. But you can use the STL algorithm sort() to sort a vector.
[edit]
Forget first question. Don't know if you mean this, but an easy algorithm: Take the first element of a vector and add it to a list, repeat this until all elements of the vector are processed.
[/edit]
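For the first question: std::list has a range constructor, so you can build the list straight from the vector, and the list can then be sorted with its own member sort() (std::sort needs random-access iterators, which list doesn't provide). A minimal sketch:

```cpp
#include <cassert>
#include <list>
#include <vector>

// Copy a vector's elements into a list via the range constructor,
// then sort the list with its member sort().
std::list<int> vectorToSortedList(const std::vector<int>& v)
{
    std::list<int> l(v.begin(), v.end());
    l.sort();
    return l;
}
```

Calling this with the vector {7, 5, 6} from the posts below yields the list 5, 6, 7.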
Can you provide me with a short example?
I've tried:
and I get an error stating the following. Code:
vector<int>v;
v.push_back(7);
v.push_back(5);
v.push_back(6);
v.sort();
"sort not a member of vector<int, allocator<int>>"
or something like that.
Algorithm sort() is not a member of the vector template class.
Code:
#include <vector>
#include <algorithm>
using namespace std;

vector<int> v;
v.push_back(7);
v.push_back(5);
v.push_back(6);
sort(v.begin(), v.end());
Is it possible to
sort(v.end(), v.begin()); ?
I get no compile error when I try it, but no results either.
You get no compiler error because the sort algorithm expects two iterators into the vector and gets them, so the call compiles fine. But the function requires an iterator to the first element in the range as its first argument and an iterator past the last element as its second argument; that is why it doesn't work.
For the sort(v.end(),v.begin()); question, I am guessing you want to sort in descending order rather than ascending order. The default sorting method used by the sort function is to go through the elements from start to finish comparing the elements using the '<' operator. If you wish to override this, you can supply an optional third argument to the sort function which is a function that will compare two elements of the vector according to whatever criteria you wish. For example, the following will sort the vector in reverse order:
Code:
#include <vector>
#include <algorithm>
using namespace std;
bool MySort(const int& lhs,const int& rhs)
{
return lhs > rhs; // By itself, the sort algorithm will effectively use
// lhs < rhs instead
}
int main()
{
vector<int> v;
v.push_back(7);
v.push_back(5);
v.push_back(6);
sort( v.begin(), v.end(), MySort );
return 0;
} | http://cboard.cprogramming.com/cplusplus-programming/31898-vector-list-printable-thread.html | CC-MAIN-2015-18 | refinedweb | 424 | 65.01 |
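Instead of writing your own comparison function, you can also pass the standard library's greater functor from <functional> as the third argument; a small sketch:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// Sort a vector in descending order with std::greater
// instead of a hand-written comparison like MySort.
void sortDescending(std::vector<int>& v)
{
    std::sort(v.begin(), v.end(), std::greater<int>());
}
```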
This section contains the details about property files, their use, and how we can read them via Java code.
A property file is used to store an application's configuration parameters. Besides configuration parameter storage, it is also used for internationalization and localization, storing strings written in different languages.
A property file stores pairs of strings as key-value pairs. The key stores the name of the parameter and the value stores its value. Generally, each line in the property file stores one property.
Each line of a property file can be written in the following format: key=value
Here is the video tutorial of: "What is properties file in Java?"
For comments, we use the # and ! signs at the start of a line. See the example below:
# This is a comment
!This is also a comment
name=Roseindia
If the value of a key is large and you need it to continue on the next line, you can do so using a backslash, as given below.
# This is a comment
name=Roseindia
message = Welcome to \
Roseindia!
If your key contains spaces, you can write it as given below :
# Key with spaces
key\ containing\ spaces = This key will be read as "key containing spaces".
The # and ! signs are used for comments, but when one appears as part of a key's value it has no effect on the key; see below. You can also write Unicode escapes, as provided below:
#No effect on back slash
website = 
# Unicode tab : \u0009
You can read the key-value pairs of a property file using Java code as given below:
The property file is given below :
#This is a Sample Property File Name = Roseindia Portal = Message = Welcome To Roseindia !!!
The Java code to read this property file is given below :
import java.io.FileInputStream;
import java.io.IOException;
import java.util.*;

public class ReadPropertyFile {
    public static void main(String args[]) {
        try {
            Properties pro = new Properties();
            FileInputStream in = new FileInputStream("C:/Roseindia.properties");
            pro.load(in);
            // getting values from property file
            String name = pro.getProperty("Name");
            String portal = pro.getProperty("Portal");
            String message = pro.getProperty("Message");
            // Printing Values fetched
            System.out.println(name);
            System.out.println(portal);
            System.out.println(message);
        } catch (IOException e) {
            System.out.println("Error is:" + e.getMessage());
            e.printStackTrace();
        }
    }
}
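Writing a property file is the mirror image of reading one. The sketch below uses the store() method of the Properties class; the file name and values are illustrative:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class WritePropertyFile {
    public static void main(String args[]) {
        try {
            Properties pro = new Properties();
            // Set the key-value pairs in memory
            pro.setProperty("Name", "Roseindia");
            pro.setProperty("Message", "Welcome To Roseindia !!!");
            // store() writes the pairs to disk; the second argument
            // becomes a comment on the first line of the file
            FileOutputStream out = new FileOutputStream("Roseindia.properties");
            pro.store(out, "This is a Sample Property File");
            out.close();
        } catch (IOException e) {
            System.out.println("Error is:" + e.getMessage());
        }
    }
}
```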
Tutorial: Deploy an image classification model in Azure Container Instances
APPLIES TO:
Basic edition
Enterprise edition (Upgrade to Enterprise edition)
This tutorial is part two of a two-part tutorial series. In the previous tutorial, you trained machine learning models and then registered a model in your workspace on the cloud.
Now you're ready to deploy the model as a web service in Azure Container Instances. A web service is an image, in this case a Docker image. It encapsulates the scoring logic and the model itself.
In this part of the tutorial, you use Azure Machine Learning for the following tasks:
- Set up your testing environment.
- Retrieve the model from your workspace.
- Test the model locally.
- Deploy the model to Container Instances.
- Test the deployed model.
Container Instances is a great solution for testing and understanding the workflow. For scalable production deployments, consider using Azure Kubernetes Service. For more information, see how to deploy and where.
Note
Code in this article was tested with Azure Machine Learning SDK version 1.0.41.
Prerequisites
To run the notebook, first complete the model training in Tutorial (part 1): Train an image classification model. Then open the img-classification-part2-deploy.ipynb notebook in your cloned tutorials folder.
This tutorial is also available on GitHub if you wish to use it on your own local environment. Make sure you have installed matplotlib and scikit-learn in your environment.
Important
The rest of this article contains the same content as you see in the notebook.
Switch to the Jupyter notebook now if you want to read along as you run the code. To run a single code cell in a notebook, click the code cell and hit Shift+Enter. Or, run the entire notebook by choosing Run all from the top toolbar.
Set up the environment
Start by setting up a testing environment.
Import packages
Import the Python packages needed for this tutorial:
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

import azureml
from azureml.core import Workspace, Run

# display the core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)
Retrieve the model
You registered a model in your workspace in the previous tutorial. Now load this workspace and download the model to your local directory:
from azureml.core import Workspace
from azureml.core.model import Model
import os

ws = Workspace.from_config()
model = Model(ws, 'sklearn_mnist')

model.download(target_dir=os.getcwd(), exist_ok=True)

# verify the downloaded model file
file_path = os.path.join(os.getcwd(), "sklearn_mnist_model.pkl")
os.stat(file_path)
Test the model locally
Before you deploy, make sure your model is working locally:
- Load test data.
- Predict test data.
- Examine the confusion matrix.
Load test data
Load the test data from the ./data/ directory created during the training tutorial:
from utils import load_data
import os

data_folder = os.path.join(os.getcwd(), 'data')

# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster
X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0
y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)
Predict test data
To get predictions, feed the test dataset to the model:
import pickle
from sklearn.externals import joblib

clf = joblib.load(os.path.join(os.getcwd(), 'sklearn_mnist_model.pkl'))
y_hat = clf.predict(X_test)
Examine the confusion matrix
Generate a confusion matrix to see how many samples from the test set are classified correctly. Notice the misclassified value for the incorrect predictions:
from sklearn.metrics import confusion_matrix

conf_mx = confusion_matrix(y_test, y_hat)
print(conf_mx)
print('Overall accuracy:', np.average(y_hat == y_test))
The output shows the confusion matrix:
[[ 960 0 1 2 1 5 6 3 1 1] [ 0 1112 3 1 0 1 5 1 12 0] [ 9 8 920 20 10 4 10 11 37 3] [ 4 0 17 921 2 21 4 12 20 9] [ 1 2 5 3 915 0 10 2 6 38] [ 10 2 0 41 10 770 17 7 28 7] [ 9 3 7 2 6 20 907 1 3 0] [ 2 7 22 5 8 1 1 950 5 27] [ 10 15 5 21 15 27 7 11 851 12] [ 7 8 2 13 32 13 0 24 12 898]] Overall accuracy: 0.9204
Use matplotlib to display the confusion matrix as a graph. In this graph, the x-axis shows the actual values, and the y-axis shows the predicted values. The color in each grid shows the error rate. The lighter the color, the higher the error rate is. For example, many 5's are misclassified as 3's. So you see a bright grid at (5,3):
# normalize the diagonal cells so that they don't overpower the rest of the cells when visualized
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)

fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111)
cax = ax.matshow(norm_conf_mx, cmap=plt.cm.bone)
ticks = np.arange(0, 10, 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(ticks)
ax.set_yticklabels(ticks)
fig.colorbar(cax)
plt.ylabel('true labels', fontsize=14)
plt.xlabel('predicted values', fontsize=14)
plt.savefig('conf.png')
plt.show()
Deploy as a web service
After you tested the model and you're satisfied with the results, deploy the model as a web service hosted in Container Instances.
To build the correct environment for Container Instances, provide the following components:
- A scoring script to show how to use the model.
- An environment file to show what packages need to be installed.
- A configuration file to build the container instance.
- The model you trained previously.
Create scoring script
Create the scoring script, called score.py. The web service call uses this script to show how to use the model.
Include these two required functions in the scoring script: the init() function, which loads the model when the container starts, and the run(raw_data) function, which uses the model to predict values for the input data.
%%writefile score.py
import json
import numpy as np
import os
import pickle
from sklearn.externals import joblib
from sklearn.linear_model import LogisticRegression

from azureml.core.model import Model

def init():
    global model
    # retrieve the path to the model file using the model name
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_mnist_model.pkl')
    model = joblib.load(model_path)

def run(raw_data):
    data = np.array(json.loads(raw_data)['data'])
    # make prediction
    y_hat = model.predict(data)
    # you can return any data type as long as it is JSON-serializable
    return y_hat.tolist()
Create environment file
Next create an environment file, called myenv.yml, that specifies all of the script's package dependencies. This file is used to make sure that all of those dependencies are installed in the Docker image. This model needs scikit-learn and azureml-sdk:
from azureml.core.conda_dependencies import CondaDependencies

myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")

with open("myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())
Review the content of the myenv.yml file:
with open("myenv.yml", "r") as f:
    print(f.read())
Create a configuration file
Create a deployment configuration file. Specify the number of CPUs and gigabytes of RAM needed for your Container Instances container. Although it depends on your model, the default of one core and 1 gigabyte of RAM is sufficient for many models. If you need more later, you have to re-create the container instance.
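The configuration object, referred to as aciconfig by the deployment code further down, can be created with the SDK's AciWebservice helper. A minimal sketch (the tags and description values are illustrative):

```python
from azureml.core.webservice import AciWebservice

aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
                                               memory_gb=1,
                                               tags={"data": "MNIST",
                                                     "method": "sklearn"},
                                               description='Predict MNIST with sklearn')
```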
The estimated time to finish deployment is about seven to eight minutes.
Configure the image and deploy. The following code goes through these steps:
- Build an image by using these files:
  - The scoring file, score.py.
  - The environment file, myenv.yml.
  - The model file.
- Register the image under the workspace.
- Send the image to the Container Instances container.
- Start up a container in Container Instances by using the image.
- Get the web service HTTP endpoint.
%%time
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(runtime="python",
                                   entry_script="score.py",
                                   conda_file="myenv.yml")

service = Model.deploy(workspace=ws,
                       name='sklearn-mnist-svc',
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=aciconfig)

service.wait_for_deployment(show_output=True)
Get the scoring web service's HTTP endpoint, which accepts REST client calls. You can share this endpoint with anyone who wants to test the web service or integrate it into an application:
print(service.scoring_uri)
Test the deployed service
Earlier, you scored all the test data with the local version of the model. Now you can test the deployed model with a random sample of 30 images from the test data.
The following code goes through these steps:
Send the data as a JSON array to the web service hosted in Container Instances.
Use the SDK's run API to invoke the service. You can also make raw calls by using any HTTP tool such as curl.
Print the returned predictions and plot them along with the input images. Red font and inverse image, white on black, is used to highlight the misclassified samples.
Because the model accuracy is high, you might have to run the following code a few times before you can see a misclassified sample:
import json

# find 30 random samples from test set
n = 30
sample_indices = np.random.permutation(X_test.shape[0])[0:n]

test_samples = json.dumps({"data": X_test[sample_indices].tolist()})
test_samples = bytes(test_samples, encoding='utf8')

# predict using the deployed model
result = service.run(input_data=test_samples)

# compare actual value vs. the predicted values:
i = 0
plt.figure(figsize=(20, 1))

for s in sample_indices:
    plt.subplot(1, n, i + 1)
    plt.axhline('')
    plt.axvline('')

    # use different color for misclassified sample
    font_color = 'red' if y_test[s] != result[i] else 'black'
    clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys

    plt.text(x=10, y=-10, s=result[i], fontsize=18, color=font_color)
    plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)

    i = i + 1
plt.show()
This result is from one random sample of test images:
You can also send a raw HTTP request to test the web service:
import requests

# send a random row from the test set to score
random_index = np.random.randint(0, len(X_test)-1)
input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"

headers = {'Content-Type': 'application/json'}

# for AKS deployment you'd need the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}

resp = requests.post(service.scoring_uri, input_data, headers=headers)

print("POST to url", service.scoring_uri)
#print("input data:", input_data)
print("label:", y_test[random_index])
print("prediction:", resp.text)
Clean up resources
To keep the resource group and workspace for other tutorials and exploration, you can delete only the Container Instances deployment by using this API call:
service.delete()
Important
The resources you created can be used as prerequisites to other Azure Machine Learning tutorials.
- Learn about all of the deployment options for Azure Machine Learning.
- Learn how to create clients for the web service.
- Make predictions on large quantities of data asynchronously.
- Monitor your Azure Machine Learning models with Application Insights.
- Try out the automatic algorithm selection tutorial.
1. Introduction
In this article we will have a look at how to draw using the GDI+ functions available with .Net. GDI stands for Graphical Device Interface and using that you can create rich drawing applications and show useful information on the form as a drawing, say for example showing a pie chart of sales for the past 5 years.
In this article I will show how to draw using Pens and Brushes.
2. What is Pen and Brush?
A Pen is a graphics object that can be used to draw lines. A pen has properties like its color and thickness. A Brush is also a graphics object; it can be used to paint a region. Suppose you want to fill an area, you can use a Brush. Think about painting a door, a wooden plate and so on.
In this article I will show how to use the Plain Solid Brush, Gradient Brush and Hatch Brush.
3. About the Finished Example
The following is a screenshot of the finished application:
The black screen is the Panel control used as the drawing canvas.

1. First, the required namespace is brought in with a using directive as in the following. Note that this is required when we want to use the rich functionalities of GDI+.
//Sample 00: Required Name Spaces
using System.Drawing.Drawing2D;
2. Then a Rectangle object is created, and its size and position are filled in from the user-supplied values.

3. A Graphics object can be retrieved from a drawing surface like a Panel or Form, or even from controls like TextBox, ListBox and so on. In our case, we asked the Panel control to retrieve the Graphics object from it, and this object is stored in grp, of type Graphics. Once the Graphics object is ready, a Pen is created with a color of Goldenrod, which is a preset color. You can see all the preset colors by typing Color followed by a dot. The following is the code:
//Sample 02: Get the Graphics from the Canvas(Panel) and Create the pen
Graphics grp = DrawingCanvas.CreateGraphics();
Pen pen = new Pen(Color.Goldenrod);
4. Before performing the drawing (note that we write all the code in the Draw button's click handler), we clear any previous drawing, since the user feeds various values for the Rectangle. You can learn how the Position and Size of a Rectangle are used in drawing a rectangle in the following video.
Video 1: Test Run 1 - Drawing the Rectangle - Watch Here
5. The GDI+ Pen
Using the pen object we can define how lines are drawn. First we declare two variables: pencolor, which specifies the color of the pen and will be filled with the user-selected color, and thickness, which specifies the pen thickness when we create the pen object.
//Sample 03: Create the Pen
Color pencolor;
int thickness;
2. Based on the user-selected Pen color using the Radio buttons, the pencolor variable is filled using predefined color values such as the "Color.Pink". The code is below:
//3.1: Decide Pen Color
if (radGolden.Checked == true)
pencolor = Color.PaleGoldenrod;
else
pencolor = Color.Pink;
Similarly, the thickness variable is also filled from the user-selected line thickness value. The pen in our example offers three standard thicknesses, but you can specify any float value to create a pen of any thickness. The following is the code that stores the pen thickness:
//3.2: Decide Pen Thickness
if (rad1.Checked == true)
    thickness = 1;
else if (rad5.Checked == false)
    thickness = 3;
else
    thickness = 5;
4. Finally a pen object is created using the pencolor and thickness variables populated previously. This pen object can be supplied to Graphics functions so that the function uses this new pen whenever it performs a line drawing.
Pen pen = new Pen(pencolor, thickness);
The following shows some of the Pen settings and a look at the lines formed by the Rectangle:
Video 2: Creating a Pen - Watch Here
6. The GDI+ Brushes
In this example we show various types of Brushes. Imagine a brush that you use to paint the walls of your house; GDI+ brushes can be used in a similar way. We draw something using pens, which define the outlines, and then paint (using a brush) the region inside the outline formed by the pens. But you are not limited to using a brush only together with pens.
The types of brush that you are going to see here are:
1. Simple Solid Brush
2. Gradient Brush
3. Patterned Brush
The Simple Solid brush fills the region with a plain solid color. The Gradient Brush fills the region between two colors, applying a linear transformation from one color to the other. The hatch brushes fill the region with a given pattern. All these types of brushes in effect are shown in the following picture:
OK. Let us see, how do we do it.
1. The following is the CheckedChanged event handler for the checkbox marked as four in the first picture of the article. In this handler we decide to enable or disable the entire GroupBox that the checkbox belongs to. When the advanced brush options are displayed, the solid brush options are hidden; in the same way, when the solid brush is displayed the advanced brush is hidden. This is done through code like the following:

grpBrushAdv.Visible = false;
grpBrushSolid.Visible = true;
4. In the example application screenshot, the control item marked as 7 is a Label Control. This label control is used as a toggle that alternates between Gradient and Hatch. When the label shows Gradient, the user sees the controls relevant to the Gradient Brush; the same is also true for the Hatch brush. The following is the function GetSolidBrushColor_FromUI that gets the color for the solid brush. This function reads the user-selected radio button for the solid brush color and assigns that to the out parameter passed in as color. Note that the out parameter guarantees the caller that the function will definitely assign a value to the color parameter, so the caller does not need to initialize it before the call.
color = Color.Blue;
6. Look at the sample application screenshot labeled at 6, the GroupBox names "From" and "To" will be changed to ForeColor and BackColor when the Brush mode changes from Gradient to Hatch. The function shown below will return the color values from the same Radio Buttons that is under these two sets of Radio Groups that change their name based on the brush mode:
//Sample 10: Get Color1 and Color2 from UI
private void Get_Col1_Col2(ref Color color1, ref Color color2)
{
    if (radhatCol1Blue.Checked == true)
        color1 = Color.Blue;
    else
        color1 = Color.Yellow;

    if (radHatCol2Green.Checked == true)
        color2 = Color.Green;
    else
        color2 = Color.Yellow;
}
7. The Inflate_Rect function written below will diminish the rectangle's dimensions based on the current pen thickness. Note that the function takes the parameter as a reference type, so the caller will see the changes in the supplied rectangle's dimensions.

8. We have already seen that many drawing functions have a Fill counterpart, for example FillRectangle, FillEllipse and so on. In the following code we ensure the Advanced brush option is not selected by the user and then create a SolidBrush by specifying the solid fill color. Once the brush is created it can be used with the Graphics object; in our case we used the solid brush with the FillRectangle function.

9. With a Gradient Brush in hand, we make a call to the function FillRectangle to get the gradient effect. The code for this is shown below:
if (lblHatchGr.Text == "Gradient")
{
    //Sample 12.2: Create Gradient Brush and Perform the Fill
    Color from_color = Color.White;
    Color to_color = Color.White;
    Brush Gradient_brush = null;

    //B. Create Gradient Brush
    Get_Col1_Col2(ref from_color, ref to_color);
    Gradient_brush = new LinearGradientBrush(rect, from_color, to_color,
        LinearGradientMode.Horizontal);
    grp.FillRectangle(Gradient_brush, rect);
}
In the code above, we asked the gradient to be applied horizontally by specifying "LinearGradientMode.Horizontal". The Gradient Modes are shown below:
10. In the else part of the Gradient Brush section we create the pattern brush to fill the rectangle. To see a pattern we should specify the background color and foreground color when creating the pattern brush. After having these colors (note that we used the same function call "Get_Col1_Col2"), the user-selected hatch pattern is stored in the style object. Refer to the MSDN to know other hatch patterns since there are much more patterns available. All this information is passed to the HatchBrush constructor to create the object Hatch_brush. Then as usual, this hatch brush is passed to the FillRectangle function of the Graphics object. The code is below that constructs the Pattern brush and uses that to fill the rectangle:
//Sample 12.3: Create Hatch brush and perform the Fill;
style = HatchStyle.HorizontalBrick;
//C. Create Pattern Brush
Hatch_brush = new HatchBrush(style, fore_color, back_color);
//D. Perform Drawing
grp.FillRectangle(Hatch_brush, rect);
You will see how the Brush works in the sample from the following shown video.
Video 3: Creating and using Brushes - Watch Here | http://www.c-sharpcorner.com/UploadFile/6897bc/how-pen-and-brush-works-in-gdi/ | CC-MAIN-2014-15 | refinedweb | 1,470 | 65.62 |
I have upgraded my project with New Telerik Version.
1.Deleted the old Telerik.web.ui,skins,designer dll and telerik.web.ui.xml(2015.3.1111.35)
2. Replaced the new version dlls (Telerik.web.ui,skins,xml,designer) in Bin folders.(2020.3.915.45) as we have upgraded the framework to 4.8.
My doubt is how to update RadControls like radeditor.Net2,Radspellchecker,Radwindow,RadAjax etc which is already existing in my project?
Do we need to change any namespaces for the RADControls after Telerik Upgrade ?
or Only Replacing Telerik.web.ui,skins dll to newer version is Fine?
Can anyone please help on this? | https://www.telerik.com/forums/upgarde-telerik-to-new-version | CC-MAIN-2022-33 | refinedweb | 109 | 56.21 |
Surely the most spectacular method to develop and execute logon procedures on Windows Server 2003 is by using the .NET Common Language Runtime. The .NET Framework can be used as a replacement or supplement for Windows Script Host and batch-processing scripts. So how does this work? So far, we have looked at the .NET Framework as a development platform for ASP.NET Web Forms, XML Web Services, and .NET Windows Forms applications. (See Chapter 5.) And how do the previously described script concepts fit into the picture? The answer is very simple: Executing a .NET Framework application using the command prompt is a special variant of a .NET Windows Forms application. .NET Windows Forms applications have their own namespaces in the .NET Class Library. This type of program is named Console Application and has a number of advantages over scripts:
A .NET Console Application is compiled and thus offers advantages of speed over an interpreted script.
The program logic is embedded in a compiled executable and thus helps to protect the developer’s intellectual properties.
It is not easy to change a .NET Console Application without a lot of effort if the source code is unavailable.
How is a console application for Windows Server 2003 created that can be used as logon procedure? Don’t we need a development environment like Microsoft Visual Studio .NET? Naturally, Visual Studio .NET is the preferred development tool for this purpose, but the .NET Framework SDK will work, too.
The .NET Framework SDK can be downloaded for free in different languages from the Microsoft Web site on the Internet. The .NET Framework SDK contains compilers, debugger, help tools, libraries, examples, and a comprehensive documentation—thus, a complete development environment. Installing the .NET Framework SDK is very easy and does not even require selecting a target folder. It does not need to be installed on all target platforms; one development platform is sufficient. The .NET Common Language Runtime for executing the development results is automatically included in each installation of Windows Server 2003.
Listing 7-13 shows the simplest example of a console application written in Visual Basic .NET. And how do you develop a simple .NET console application? First of all, you need to enter the source code—for example, with the Notepad editor. The most difficult decision is which programming language to use. Options primarily include Visual Basic .NET and C#. Visual Basic .NET is easier for administrators who have some experience with Visual Basic. C#, however, is more appealing to C, C++, and Java programmers. The only difference between the two languages is their syntax. They use the exact same .NET library classes.
Class ConsoleApplication Shared Sub main() System.Console.WriteLine(".NET Console Application") End Sub End Class
Listing 7-14 shows the same simple console application written in C#:
Class ConsoleApplication { Static void Main() { System.Console.WriteLine(".NET Console Application") } }
The .NET class library is quite extensive. For this reason, it is not easy in the beginning to gain an overview of the classes that are available. Fortunately, the .NET Framework comes with an excellent documentation, including help files and examples in SDK.
How can a sample application be compiled from source code to an executable program? The command-line compilers handle this task: Vbc.exe for Visual Basic .NET and Csc.exe for C#. In its simplest form, the compiler needs only the name of the source code as an argument. The classes used are included from the Mscorlib.dll default library. The compiler command line at the command prompt looks like this:
Vbc vbConsole.vb
If required classes are located in other library files, they can be referenced through additional compiler arguments. It is also possible to select an alternative output file name if you do not want the file name of the source code to be the name of the executable program. All these options can easily be determined by entering Vbc /? or Csc /?.
A compiled .NET console application is saved in an executable file that can be copied to any platform with the .NET runtime environment. So if you copy the console application to Windows Server 2003 with Terminal Services activated, you can call it up from a script during logon. This might look like this:
@echo off echo Calling a .NET Console Application ConsoleApp.exe
Now that we have laid the foundation for developing a .NET console application, we will briefly look at the namespaces and classes that are relevant for terminal servers.
It would go beyond the scope of this book to describe the possibilities .NET console applications offer. Therefore, I would like to refer you to Charles Petzold’s book, Programming Microsoft Windows with C# (Microsoft Press, 2002). He describes in detail many of the concepts touched on only briefly here, and much more. | http://etutorials.org/Microsoft+Products/microsoft+windows+server+2003+terminal+services/Chapter+7+Scripting/Logon+Procedures+Using+the+.NET+Framework/ | crawl-001 | refinedweb | 801 | 60.31 |
package HackaMol::Roles::SelectionRole; $HackaMol::Roles::SelectionRole::VERSION = '0.051'; #ABSTRACT: Atom selections in molecules use Moose::Role; use HackaMol::AtomGroup; use Carp; my %common_selection = ( 'sidechain' => '$_->record_name eq "ATOM" and not $_->name =~ /^(N|CA|C|O|OXT)$/', 'backbone' => '$_->record_name eq "ATOM" and $_->name =~ /^(N|CA|C|O)$/', # backbone restricted to ATOM to avoid HETATM weirdness, e.g. het cys in 1v1q 'water' => '$_->resname =~ m/HOH|TIP|H2O/ and $_->record_name eq "HETATM"', 'protein' => '$_->record_name eq "ATOM"', 'ligands' => '($_->resname !~ m/HOH|TIP|H2O/ ) and $_->record_name eq "HETATM"', 'metals' => '$_->symbol =~ m/^(Li|Be|Na|Mg|K|Ca|Sc|Ti|V|Cr|Mn|Fe|Co|Ni|Cu|Zn|Rb|Sr|Y|Zr|Nb|Mo|Tc|Ru|Rh|Pd|Ag|Cd|Cs|Ba|La|Ce|Pr|Nd|Pm|Sm|Eu|Gd|Tb|Dy|Ho|Er|Tm|Yb|Lu|Hf|Ta|W|Re|Os|Ir|Pt|Au|Hg)$/', ); has 'selection' => ( traits => ['Hash'], is => 'ro', isa => 'HashRef[Str]', lazy => 1, default => sub { {} }, handles => { get_selection => 'get', set_selection => 'set', has_selection => 'count', keys_selection => 'keys', delete_selection => 'delete', has_selection => 'exists', }, ); has 'selections_cr' => ( traits => ['Hash'], is => 'ro', isa => 'HashRef[CodeRef]', default => sub { {} }, lazy => 1, handles => { get_selection_cr => 'get', set_selection_cr => 'set', has_selections_cr => 'count', keys_selection_cr => 'keys', delete_selection_cr => 'delete', has_selection_cr => 'exists', }, ); sub select_group { my $self = shift; my $selection = shift; my $method; if ($self->has_selection_cr($selection)){ #attr takes priority so user can change $method = $self->get_selection_cr($selection); } elsif ( exists( $common_selection{$selection} ) ) { $method = eval("sub{ grep{ $common_selection{$selection} } \@_ }"); } else { $method = _regex_method($selection); } #grep { &{ sub{ $_%2 } }($_)} 1..10 my $group = HackaMol::AtomGroup->new( atoms => [ &{$method}( $self->all_atoms ) ], ); return ($group); } # $mol->select_group('(chain A .or. 
(resname TYR .and. chain B)) .and. occ .within. 1') # becomes grep{($_->chain eq A or ($_->resname eq TYR and $_->chain eq 'B')) and $_->occ <= 1.0} sub _regex_method { my $str = shift; # allow and or .and. .or. ... does this cause other problems with names? $str =~ s/\sand\s/ \.and\. /g; $str =~ s/\sor\s/ \.or\. /g; $str =~ s/\snot\s/ \.not\. /g; #print "$str not implemented yet"; return(sub{0}); #my @parenth = $str =~ /(\(([^()]|(?R))*\))/g # ranges resid 1+3-10+20 -> resid =~ /^(1|3|4|5|6|7|8|9|10|20)$/ my @ranges = $str =~ /(\w+\s+(?:\w+|\d+)(?:\+|\-)[^\s]+)/g; foreach my $range (@ranges){ my ($attr,$sel) = split(/\s+/, $range); #$range =~ s/\+/\\+/g; #$range =~ s/\-/\\-/g; my $gsel = join '|',map{/(.+)-(.+)/ ? ($1 .. $2) : $_ } split('\+', $sel ); $str =~ s/\Q$range\E/\$\_->$attr =~ \/^($gsel)\$\//g; } $str =~ s/(\w+)\s+(\d*[A-Za-z]+\d*)/\$\_->$1 eq \'$2\'/g; # resnames must have at least 1 letter $str =~ s/(\w+)\s+(-?\d+)/\$\_->$1 eq $2/g; $str =~ s/(\w+)\s+\.within\.\s+(\d+)/\$\_->$1 <= $2/g; $str =~ s/(\w+)\s+\.beyond\.\s+(\d+)/\$\_->$1 >= $2/g; $str =~ s/$_/\($common_selection{$_}\)/g foreach keys %common_selection; $str =~ s/\.and\./and/g; $str =~ s/\.or\./or/g; $str =~ s/\.not\./not/g; return ( eval("sub{ grep{ $str } \@_ }") ); } no Moose::Role; 1; __END__ =pod =head1 NAME HackaMol::Roles::SelectionRole - Atom selections in molecules =head1 VERSION version 0.051 =head1 DESCRIPTION The goal of HackaMol::Roles::SelectionRole is to simplify atom selections. This role is not loaded with the core; it must be applied as done in the synopsis. The method commonly used is select_group, which uses regular expressions to convert a string argument to construct a method for filtering; a HackaMol::AtomGroup is returned. The select_group method operates on atoms contained within the object to which the role is applied (i.e. $self->all_atoms). The role is envisioned for instances of the HackaMol::Molecule class. 
=head2 Common Selections: backbone, sidechains, protein, etc. Some common selections are included for convenience: backbone, sidechains, protein, water, ligands, and metals. my $bb = $mol->select_group('backbone'); =head2 Novel selections using strings: e.g. 'chain E', 'Z 8', 'chain E .and. Z 6' Strings are used for novel selections, the simplest selection being the pair of one attribute with one value separated by a space. For example, "chain E" will split the string and return all those that match (atom->chain eq 'E'). my $enzyme = $mol->select_group('chain E'); This will work for any attribute (e.g. atom->Z == 8). This approach requires less perl know-how than the equivalent, my @enzyme_atoms = grep{$_->chain eq 'E'} $mol->all_atoms; my $enzyme = HackaMol::AtomGroup->new(atoms=>[@enzyme_atoms]); More complex selections are also straightforward using the following operators: .or. matches if an atom satisfies either selection (separated by .or.) .and. matches if an atom satisfies both selections (separated by .and.) .within. less than or equal to for numeric attributes .beyond. greater than or equal to for numeric attributes .not. everything but More, such as .around. will be added as needs arise. Let's take a couple of examples. 1. To select all the tyrosines from chain E, my $TYR_E = $mol->select_group('chain E .and. resname TYR'); 2. To choose both chain E and chain I, my $two_chains = $mol->select_group('chain E .or. chain I'); Parenthesis are also supported to allow selection precedence. 3. To select all the tyrosines from chain E along with all the tyrosines from chain I, my $TYR_EI = $mol->select_group('(resname TYR .and. chain E) .or. (resname TYR .and. chain I)'); 4. To select all atoms with occupancies between 0.5 and 0.95, my $occs = $mol->select_group('(occ .within. 0.95) .and. (occ .beyond. 0.5)'); The common selections (protein, water, backbone, sidechains) can also be used in the selections. 
For example, select chain I but not the chain I water molecules (sometimes the water molecules get the chain id), my $chain_I = $mol->select_group('chain I .and. .not. water'); =head2 Extreme selections using code references. The role also provides the an attribute with hash traits that can be used to create, insanely flexible, selections using code references. As long as the code reference returns a list of atoms, you can do whatever you want. For example, let's define a sidechains selection; the key will be a simple string ("sidechains") and the value will be an anonymous subroutine. For example, $mol->set_selection_cr("my_sidechains" => sub {grep { $_->record_name eq 'ATOM' and not ( $_->name eq 'N' or $_->name eq 'CA' or $_->name eq 'C' or $_->name eq 'Flowers and sausages') } @_ } ); Now $mol->select_group('my_sidechains') will return a group corresponding to the selection defined above. If you were to rename "my_sidechains" to "sidechains", your "sidechains" would be loaded in place of the common selection "sidechains" because of the priority described below in the select_group method. =head1 METHODS =head2 set_selections_cr two arguments: a string and a coderef =head2 select_group takes one argument (string) and returns a HackaMol::AtomGroup object containing the selected atoms. Priority: the select_group method looks at selections_cr first, then the common selections, and finally, if there were no known selections, it passes the argument to be processed using regular expressions. =head1 ATTRIBUTES =head2 selections_cr isa HashRef[CodeRef] that is lazy with public Hash traits. This attribute allows the user to use code references in the atom selections. The list of atoms, contained in the role consuming object, will be passed to the code reference, and a list of atoms is the expected output of the code reference, e.g. 
@new_atoms = &{$code_ref}(@atoms); =head1 SYNOPSIS # load 2SIC from the the RCSB.org and pull out two groups: the enzyme (chain E) and the inhibitor (chain I) use HackaMol; use Moose::Util qw( ensure_all_roles ); # to apply the role to the molecule object my $mol = HackaMol->new->pdbid_mol("2sic"); #returns HackaMol::Molecule ensure_all_roles($mol, 'HackaMol::Roles::SelectionRole') # now $mol has the select_group method; my $enzyme = $mol->select_group("chain E"); my $inhib = $mol->select_group("chain I"); =head1 WARNING This is still under active development and may change or just not work. I still need to add warnings to help with bad selections. Let me know if you have problems or suggestions! =head1 AUTHOR Demian Riccardi <demianriccardi@gmail.com> =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2017 by Demian Riccardi. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut | http://web-stage.metacpan.org/release/HackaMol/source/lib/HackaMol/Roles/SelectionRole.pm | CC-MAIN-2019-51 | refinedweb | 1,352 | 55.44 |
An alternate design for an Orchestra "page flow" system to that presented in OrchestraDialogPageFlowDesign1.
This design is more complex to configure than the original proposal. One specific benefit of the original was that there was "no new xml configuration format to learn". Instead, there was just a naming convention for view ids.
This proposal does require either components embedded in the caller and called pages, or xml configuration files for them. However, it gives the benefits of making the data flow between caller and callee much clearer, and allows the dataflow to be checked for correctness at runtime (or even at startup time for the xml configuration file approach). It is also less intrusive; it does not require the user's flow pages to be in a specific directory (which is then visible to the user via the url).
Design principles
Starting a flow
Deciding whether to start a new flow is driven by the navigation outcome returned by action methods (or literal outcomes embedded in the command component). In other words, starting a new flow happens only when a command component is activated by the user. Starting a flow with a GET operation makes no sense, as there is no "caller" page to specify parameters for the called flow, and nowhere obvious for the called flow to place its results.
Flow Transparency
The called flow pages are inserted "transparently" into the caller's sequence of pages. A postback of the caller occurs, and it is just "suspended" at the end of the postback phase. The flow runs, and then eventually a flow postback occurs which ends the flow and "unsuspends" the original postback. The rendering phase for that "original" postback then runs as if the flow had never happened - except that backing bean properties have been changed by the called flow.
Java method call analogy
The mechanism for calling a flow should look as much as possible like calling a java static method. Actually, it should look more like calling a java "service" via a mechanism such as java.util.ServiceLoader or JNDI, where the caller looks up a service by name, then casts it to an expected interface type and invokes a method on the interface using a set of parameter values. Note that the analogy is close but not exact because:
- a callable flow only ever provides one "method" that can be invoked, so "interface type" and "method name" are combined.
- a callable flow can return multiple values, not just one like Java can.
Note however that a called flow should be like a static method in that it has its own set of variables (its parameters and local variables) and only interacts with the caller via the parameters and return values. This makes caller/callee interactions understandable, which would not be the case if a called flow could access any data of the caller. This approach is similar to Trinidad pageflow, where imported/exported data is explicitly declared. Spring WebFlow does something similar.
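The analogy can be sketched in plain Java (all names here are illustrative, not part of Orchestra): a flow behaves like a service looked up by name, invoked with a map of named parameters, and returning a map of named results rather than a single value.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a "flow" as a named service with exactly one
// callable operation, taking named parameters and returning multiple values.
interface Flow {
    Map<String, Object> invoke(Map<String, Object> params);
}

class FlowRegistry {
    private static final Map<String, Flow> flows = new HashMap<>();

    static void register(String serviceName, Flow flow) {
        flows.put(serviceName, flow);
    }

    // Like a ServiceLoader/JNDI lookup: resolve by service name, then invoke.
    static Map<String, Object> call(String serviceName, Map<String, Object> params) {
        Flow flow = flows.get(serviceName);
        if (flow == null) {
            throw new IllegalArgumentException("No flow for outcome: " + serviceName);
        }
        return flow.invoke(params);
    }
}
```

The caller only ever sees the service name (the navigation outcome) and the parameter/return names; the called flow keeps its own local state, just like a static method keeps its own locals.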
Therefore:
- Every callable flow declares an "interface type" (what logical function it provides) and what parameters it expects.
- A flow caller uses a navigation outcome to specify which flow it wants to invoke (like looking up a java object by service name). It declares what logical function it expects that flow to implement (like casting a service object to a java interface). And it declares what parameters it is passing to the flow.
- The standard JSF navigation rules are used to find a concrete implementation for the abstract service that the caller wants to invoke; the outcome is the service name and the to-view-id is the concrete flow to invoke. Note that because navigation rules support "from-view-id", this allows flexible control over which outcomes (service names) map to which concrete flow implementations (views).
The "signature" of the concrete flow that is returned by the navigation-case lookup is compared against what the caller expected. It is an error if the called flow does not provide the expected function, or expects incompatible parameters. This is equivalent to the ClassCastException that would occur in java if a service lookup returned an object that did not implement the expected interface.
Configuration Principles
While the information needed to match caller and callee is specific, there are several possible ways that this data can be defined. One is for the caller and callee pages to embed special JSF components that provide this information. Another is for there to be a configuration file "next to" the caller and callee pages which defines this. It is also possible to put at least the caller part of this information into the navigation-case. The first approach (embedded components) will be implemented first. Care is taken in the design to make sure that per-page config files can be implemented at a later date. Other options (navigation-handler etc) are not a priority.
Configuration Examples
Example caller page using embedded components
<f:view>
  <o:onFlow outcome="doCustSearch" type="com.ops.foo.CustSearch">
    <param name="x" src="#{caller.x}"/>
    <param name="y" src="#{caller.y}"/>
    <return name="z" src="#{caller.z}"/>
    <onReturn action="#{....}"/>
    <modifies componentId="id"/>
  </o:onFlow>
  <o:onFlow outcome="doAddressSearch" type="com.ops.foo.AddressSearch" mode="modal">
    <param name="a" src="#{caller.a}"/>
    <return name="b" src="#{caller.b}"/>
    <onReturn action="#{....}"/>
    <modifies componentId="foo"/>
    <onFlowBegin>..</onFlowBegin>
    <onFlowEnd>..</onFlowEnd>
  </o:onFlow>
  <h:commandButton action="doCustSearch" .../>
  ...
</f:view>
Example callee page using embedded components
<f:view>
  <o:flow type="com.ops.foo.CustSearch" onepage="true">
    <param name="x" src="#{callee.x}"/>
    <param name="y" src="#{callee.y}"/>
    <return name="z" src="#{callee.z}"/>
    <commitWhen outcome="commit"/>
    <cancelWhen outcome="cancel"/>
  </o:flow>
  <h:commandButton action="commit" .../>
  ...
</f:view>
Example caller with separate per-page config file
Page file "foo/bar.jsp" has matching file "foo/bar.flow.xml"
<flowConfig>
  <onFlow outcome="doCustSearch" type="com.ops.foo.CustSearch">
    <param name="x" src="#{caller.x}"/>
    <param name="y" src="#{caller.y}"/>
    <return name="z" src="#{caller.z}"/>
    <onReturn action="#{....}"/>
    <modifies componentId="id"/>
  </onFlow>
  <onFlow outcome="doAddressSearch" type="com.ops.foo.AddressSearch" mode="modal">
    <param name="a" src="#{caller.a}"/>
    <return name="b" src="#{caller.b}"/>
    <onReturn action="#{....}"/>
    <modifies componentId="foo"/>
    <onFlowBegin>..</onFlowBegin>
    <onFlowEnd>..</onFlowEnd>
  </onFlow>
</flowConfig>
Example callee with separate per-page config file
Page file "custSearch/search.jsp" has matching file "custSearch/search.flow.xml"
<flowConfig>
  <flow type="com.ops.foo.CustSearch" onepage="true">
    <param name="x" src="#{callee.x}"/>
    <param name="y" src="#{callee.y}"/>
    <return name="z" src="#{callee.z}"/>
    <commitWhen outcome="commit"/>
    <cancelWhen outcome="cancel"/>
  </flow>
</flowConfig>
Navigation Case Configuration
The JSF navigation rules defined in faces-config.xml files work just like normal. There is no special syntax there at all, which means that IDE tools will continue to work.
To start a flow, there simply needs to be a navigation rule from the calling page to the flow entry page's viewId, using an outcome string (as normal). The only difference is that when the calling page has an <onflow> entry matching that same outcome, then the to-view-id page is expected to have a matching <flow> configuration. When that is not true, an error is reported. When it is true, the to-view-id is executed in a new conversation context.
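Concretely, for the caller page "foo/bar.jsp" and the callee page "custSearch/search.jsp" used in the configuration examples above, the rule is a completely ordinary one:

```xml
<navigation-rule>
  <from-view-id>/foo/bar.jsp</from-view-id>
  <navigation-case>
    <from-outcome>doCustSearch</from-outcome>
    <to-view-id>/custSearch/search.jsp</to-view-id>
  </navigation-case>
</navigation-rule>
```

Nothing in the rule itself marks this as a flow start; that is inferred from the matching <onflow>/<flow> declarations on the two pages.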
This does mean that looking at the navigation rules gives no hint as to whether the navigation starts a flow or not. But if a user cares about that, they can use a naming convention for their pages that are meant to be flows, eg always start them with "/flow" or add the word "Flow" as a suffix (custSearchFlow.jsp).
The navigation rules for returning from a flow are of course not defined; the exit page for a flow will specify a navigation outcome that matches the "commitWhen" or "cancelWhen" clause of the current flow. Normal navigation then does not occur; instead navigation occurs back to the calling flow. So IDEs that show navigation rules will not be able to show the correct return flow - but that is obviously simply impossible to do as the return address is effectively dynamically determined at runtime.
Tradeoffs of Configuration via In-Page Components
Advantages:
- IDE support when adding the tag (autocompletion).
- Only one file to look at. Simply browsing a page source shows whether it calls flows or expects to be called as a flow, what parameters are passed, etc.
- No custom xml syntax to learn
- Refactoring-friendly. When the page is moved, the info goes with it.
Disadvantages:
- Does require modifying pages
- Cannot do checking of flow data at system startup, as inspecting pages to extract this data is too complex. So errors like calling a non-existent flow or passing incorrect parameters can only be checked when the call is actually made. The xml-config-file version can do this check much earlier.
Notes:
- We cannot support accessing the parameter data until the embedded component of a called flow has executed. But that really only means that this data is not available in the bindings phase and I cannot imagine why parameter data would be needed when configuring bindings.
Tradeoffs of Configuration via per-page Config File
Advantages:
- No change to pages; adding or removing orchestra flow management can be done without touching the page.
- Allows static testing of correct flow declarations; can scan the webapp to find all flow declarations, and detect inconsistent duplicates. Then can scan all flow calls, and detect mismatched types or params.
- Moderately refactoring-friendly. When the page is moved, the flow.xml file just needs to be moved with it.
- The flow.xml files look almost exactly like the in-page-component alternative.
Disadvantages:
- Another syntax to learn.
The viewhandler logic will be a bit different. First it will need to look for a flow.xml file next to the viewid. When found, it needs to read it and fetch the type. Then it needs to go back and find the onflow.xml file. And the import/export process is then run by ViewHandler before the view exists rather than by o:flow during render. Not too bad though.
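The "look for a flow.xml file next to the viewid" step follows directly from the naming convention above ("foo/bar.jsp" has "foo/bar.flow.xml"). A minimal sketch, with a hypothetical helper name:

```java
// Illustrative: derive the per-page flow config path from a viewId,
// per the convention "foo/bar.jsp" -> "foo/bar.flow.xml".
class FlowConfigLocator {
    static String configPathFor(String viewId) {
        int dot = viewId.lastIndexOf('.');
        if (dot < 0) {
            return viewId + ".flow.xml"; // no extension: just append the suffix
        }
        return viewId.substring(0, dot) + ".flow.xml";
    }
}
```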
Logic Flow
Server-side logic to handle flows goes like this: (should make this an interaction diagram)
A trivial custom NavigationHandler is needed, plus a moderately complex custom ViewHandler.
- Navigation occurs to caller page; view is created. Nothing special happens.
- Postback occurs, the onflow restore places all its data in request scope.
- An action method returns a navigation outcome. A simple custom NavigationHandler caches that outcome in request scope, for later use.
- viewhandler first discards child contexts:
  - if the current context has children, discard them all. This means that when the "back" button is used to go back to a page that has a different contextId then we cancel the nested contexts. It also means that when we have modal flows, and the user forcibly closes the modal flow somehow then uses the parent window, we clean up ok.
  - Note that this makes it safe to create a child context then allow a GET to execute it; the GET has the contextId in it, which allows us to find the relevant info. If a different request is received for some reason, then we automatically clean up later. Issue: if the flow really is in a non-modal popup window then using the parent window for any purpose will kill the flow for the child. Killing the flow isn't too bad; it cannot return anything to the parent so must die. But what happens when a postback specifies a contextId that has been deleted? Do we error, or just start a new context? If we just start a new context then this will be very confusing for the user, as the page will be in the middle of a flow but the backing beans will be new (context deleted). But erroring on invalid contextIds breaks bookmarks; bookmarks will have old contextIds in them that we should just ignore. We could keep a list of "deleted" ids in the session, and error on those; that sounds good. But for later.
- viewhandler looks at the outcome:
  - if there is no outcome (i.e. this is a GET):
    - if the url is a "flow url", report an error
    - if we are in a flow then do DISCARD
    - do normal navigation
  - if there is an <onflow> entry for the current view which matches the outcome then do START
  - if there is a <flow> entry for the current context and the commit-on attribute matches the current outcome then do COMMIT
  - if there is a <flow> entry for the current context and the cancel-on attribute matches the current outcome then do CANCEL
  - if the destination starts with the current flowPath, do nothing (stay in current flow)
  - if the destination is not a flow url, do DISCARD (navigation goes elsewhere)
  - otherwise report an error (navigation to a flow, but with no onflow definition).
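The outcome decision table above can be sketched as a pure function (names and the boolean-flag shape are illustrative; the real logic would live inside the custom ViewHandler):

```java
// Illustrative sketch of the outcome dispatch described above.
class FlowDispatcher {
    enum Action { START, COMMIT, CANCEL, DISCARD, STAY, NORMAL, ERROR }

    static Action decide(String outcome,
                         boolean inFlow,
                         boolean urlIsFlowUrl,
                         boolean callerHasMatchingOnFlow,
                         boolean outcomeIsCommit,
                         boolean outcomeIsCancel,
                         boolean destInCurrentFlowPath,
                         boolean destIsFlowUrl) {
        if (outcome == null) {                      // a GET request
            if (urlIsFlowUrl) return Action.ERROR;  // cannot enter a flow via GET
            if (inFlow) return Action.DISCARD;      // leaving the current flow
            return Action.NORMAL;                   // ordinary navigation
        }
        if (callerHasMatchingOnFlow) return Action.START;
        if (outcomeIsCommit) return Action.COMMIT;
        if (outcomeIsCancel) return Action.CANCEL;
        if (destInCurrentFlowPath) return Action.STAY;
        if (!destIsFlowUrl) return Action.DISCARD;  // navigation goes elsewhere
        return Action.ERROR;                        // nav to flow with no onflow definition
    }
}
```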
- viewhandler then creates view as normal
- o:flow render method does this on *first run* only:
  - verify that the caller onflow definition's type matches the callee type. Report an error if a mismatch is found.
  - validate that parameters match (exact same param and return names)
  - update the FlowInfo with the return flow info declared here
  - run the caller's param expressions using the parent context and store the results in a map
  - run the callee's param expressions reading from the map
START:
- create new context
- look in the request for the caller's onflow info, and select one using the outcome string. If none match (or more than one), report an error.
- create a FlowInfo object that holds the "flow path" and caller info, including the matched onflow clause.
- serialize the previous view and save it in the new context
DISCARD:
- optionally, if the destination is a "flow path" url, then report an error
- invalidate and remove all contexts except the root of the current context.
- make the root context the active one.
COMMIT:
- fetch saved flow info from current context
- run the callee's return expressions and save into map
- discard current context, select parent
- run the caller's return expressions reading from map
- if onreturn action defined, then run it.
- if null navigation is returned from the onreturn action then:
  + set currentViewId to caller
  + restore the saved caller view tree
  + clear the submitted value of any component specified in <modifies> (if no modifies clauses, clear all).
- if a non-null navigation is returned from the onreturn action, then execute the navigation-handler again.
CANCEL:
- discard current context, select parent
- set currentViewId to caller
- restore the saved caller view tree
Process "is a flow" can be implemented in several ways. The easiest are just looking for substring "/flow/" or page name "*Flow.*" in the viewId.
Computing the "flow path": by default, this is the parent directory of the viewId. However when onepage is true, this is the complete viewId. This means that multipage flows must live in their own directory, but that trivial lookup logic which is just one page doesn't need that.
Notes
- This approach effectively mimics a normal static java method call; the callee declares a type and a method prototype. The caller must then pass the right set of parameters. Like a java classpath, the callee can be anywhere as long as it is retains the same fully-qualified class name. Here, the JSP/XHTML file can be anywhere as long as the "type" parameter remains unaltered. The type value can be any user-selected string, but using the name of the flow backing bean or similar seems appropriate.
- This should work well with existing navigation-aware IDEs. The navigation file is totally unaltered, and the viewids are also completely normal - except for special mappings needed for "flow:cancel" and "flow:commit" outcomes, but that is easy enough to map to some dummy page.
- Recursive entry to a flow works naturally. We can just test the outcome against the <onflow> entries of the current view, exactly as for any other page.
The START process is effectively enhancing navigation, as if we had added extra information to a navigation case. But the NavigationHandler API sucks so badly, it is easier to do it in this way.
- The modifies section allows the calling page to control what fields get fresh data after the flow has finished. The whole point of a flow is to change something that the current view is displaying; if the change is just to read-only data then nothing needs to be done as the new data is automatically displayed. And when the flow was started by a non-immediate command then there is no problem as all data was pushed to the model so everything is read fresh. But if the flow was triggered by an immediate command then we have a mix of user data in components that has not been pushed into the model (which we want to keep) and user data in components that is now "stale". We really want to clear out data where the value referenced from the input component has been modified - but that is almost impossible to compute. Instead we need help from the user to tell us which components the "return" and "onreturn" tags have caused modifications to. After the close of the flow, we restore the caller's view tree then render it. Just after restoring it we can clear out any submitted data. It would possibly be even nicer to clear out modified data before invoking the flow, but that means that when a flow is cancelled that data is reset to the model state. It is not normal for a cancelled operation to reset data in the caller.
Issues
h:commandButton (unlike h:commandLink) has no "target" property. So we would need a custom or:popupButton which sets the form target, submits the form, then resets the form target. There is still the issue however that on commit/cancel, we want to close the popup and trigger the parent window to refresh rather than loading the caller page into the popup. We need some mechanism for the post-flow-end page to not be the caller but instead a page that just contains "closing...." and some javascript to poke the parent window. We would also need to hack the postback to be a "refresh" type postback where just the <modifies> sections are processed [unless we are really hacky, and temporarily restore the tree, do modifies processing then reserialize it]. Tricky, as the caller-view was stored in the child context which we have already destroyed. Dang.
- the "onflow" info needs to be non-transient within the tree because we need it again at the end of a postback phase. It is a moderate amount of data which is a nuisance when client-side state is enabled; it is carried on each request just in case the postback navigates to a flow that needs it.
- if data being exported to the called flow is associated with an input component on the current page, then we need that component to push its data into the model in order for it to be available to the called flow. That means that when the button triggering the flow is immediate, then the input component must be immediate too. That is no big deal. However it would be nice if JSF had a way to say "this input component is immediate only when this command component is triggered".
- Is the persistence context of the caller passed to the called flow? If not, then the called flow will not be able to navigate any uninitialised references on persistent objects that the caller passes. And it will not be able to pass back persistent objects to the caller. But the called flow can have many conversations, and it is conversations that have persistence contexts. Maybe there can be yet another tag in the onflow entry,
<persistence conversation="..."/>
which causes the specified conversation to be created immediately in the invoked context, and for the current persistence context to be attached to that conversation. That can be added later though; in the initial implementation, data passed can be restricted to keys, and the onreturn actions can use the keys to load the object using their own context, or merge it in.
- We do not support a GET operation anywhere in a flow. This seems ok.
- What about "ping" type operations, or resource-fetching. We need to ignore these when determining whether to end a flow or not.
- What about back-button usage? There is code in the view-state-saving to handle fetching an old view tree. But things will get really screwy if we don't have the right context active. The contextId is stored in the url, so if someone goes back across the start of a flow, then the child context will just be ignored, and will eventually time out. That is probably ok. Possibly when a context is activated and it has children we could immediately discard those children? Forward buttons would then not work, but that's ok.
- Access to global data. We encourage users to create a "global conversation" and load what used to be session-scoped data into that. But child contexts will not have that information available - unless explicitly passed as parameters. Maybe a global "params" property needs to exist that is always exported to flows? Otherwise people will fall back to putting "global" data into the http session.
Optional extensions
per-page navigation
Possibly we could put navigation rules in the onflow.xml file. The custom navigation handler could look in there and if a mapping is present then use that data instead of the normal navigation case. This gives "decentralised" navigation definitions that are nicely coupled with flows:
<onflow outcome="..." type="...">
- ..params...
<view>newViewId</view>
</onflow>
The syntax is identical to the normal non-navigation case, except that there is a view section.
Rejected approaches
- having the onflow info nested within the commandButton. This implies clearly that the command button is *known* to invoke a flow which has a matching flowType. This isn't too bad, but doesn't elegantly fit with JSF's abstracted navigation concept; it should be possible to configure navrules to go anywhere. Still, we could just say that a nested o:onflow indicates that the navigation must go to a flow *of that type*. But an action can return any of N outcome strings, so we would need to then select by outcome.
- Using custom elements in a navigation-case. In JSF1.2 and later, the xml schema for navigation-case allows extra elements to be added (as long as they are in a "foreign" namespace). But the JSF spec provides no easy way to get at this data. So best to avoid. And anyway, navigation-case entries are ok for annotating the "from" part of a call, but provide no way of annotating the "to" part of a call.
Using annotations to specify flow parameters. It would be possible to use the Orchestra ViewController to look for the controller class for the called view, and then look for annotations on that class to see if it is a flow and what parameters it expects. However this approach does not make sense for the caller, and a mixed system where only one half can sensibly use annotations is not nice. And anyway, this data really belongs at the "page" level rather than backing-bean level. Anyone writing a page that will call a flow needs to be able to easily see what parameters they need to pass; that info is much more easily available via info in the page or info in an associated flow.xml file than when it is attached to the view controller class for the page.
Popup modal window handling
- postback occurs
- navigation to new flow with "modal" flag set detected
- create a new child context but do not activate it
A custom component
<o:modalFlowButton
renders the necessary html to present a submit button that sets form.target and submits, plus script to handle the "closing window" callback.
When the new child context is created, the modal flag is set in the FlowInfo.
When a "cancel" outcome occurs, the ViewHandler directly renders a simple page with javascript that closes the page and does nothing else. When a "commit" outcome occurs, the ViewHandler directly renders a simple page with "closing window..", plus javascript that locates the original o:modalFlowButton DOM object by clientId in the parent window, and calls the "commit" function on it before closing itself.
Refreshing the parent window is still tricky. We need to cause a postback which
Issues: (1) not all browsers allow real modal popup windows to be created. But if parent window is closed or navigates somewhere else then that "close flow" javascript cannot run. That's ok, just ignore failure and close self. (2) if user does not have javascript enabled, then modal flows are not possible. We could default to a non-modal flow in that case, or just do nothing. There really isn't much that can be done about this as the modal flow *needs* to close its window at the end, and cause the other window to refresh. And that cannot be done without javascript. Possibly the "closing window" message could just be left there and the user has to close the popup manually. But requiring the user to refresh the parent page manually to see the changed data is real ugly so better not to allow it at all. (3) how do we handle the "modifies" section? We cannot mess with the data before serving the "close" operation, because we have no way of saving back the view tree in a way that guarantees it will be picked up by the postback. And in the case of client-side state it's really tricky to mess with the client state. But if we delay messing with it until the next postback we no longer have access to the child context, which is where the selected onflow data is held. Maybe the closing window can tell the custom o:modalFlowButton what the selected outcome was, and then it can include that data in the postback so that we can re-select the onflow block and then execute just its modifies section.
Alternative: we could hook into the tomahawk AJAX-style stuff, and send an "update" message that updates fields on the client. But again tricky as we have to mess with the view state. Better not to.
Modal flow handling
- postback occurs
- navigation to new flow with "modal" flag set detected
- + the onFlow clause needs to define "onFlowBegin" and "onFlowEnd" scripts. These scripts must be defined in the calling page, and take a url as a parameter. The onFlowBegin script is responsible for showing the "popup" window and loading the specified url into it. The onFlowEnd script is responsible for hiding the "popup" window and doing a GET or POST of the entire page to the specified url.
- we immediately create a new child context, but do not activate it (as the render phase needs the parent context active)
- Store the curent view tree in the child context. Store the selected o:onflow into the child context execute the param expressions in the parent context, and store the resulting map into the child context.
- store the url of the flow entry page into a request-scoped var. This url includes the contextId of the child context.
- re-render the current view (like null navigation). The rendered o:modalFlow tag this time detects the new-flow url in the request and renders javascript to open the iframe and load that url
- When the page is processed on the browser, a GET is executed to populate the iframe. The new contextId embedded in the url will automatically cause the child context to be activated. We simply allow normal processing to happen; the new view is created and rendered. The o:flow tag in the page will execute, causing the param expressions to import data from the map previously pushed into the child context.
- When the flow eventually navigates to an outcome that is "cancel" then we render an empty page with just enough javascript to close itself.
- When the flow eventually navigates to an outcome that is "commit", we execute return params, then deactivate *but not discard* the child context and execute parent return stuff to update parent context. We then render an empty page with just enough javascript to tell the parent to do a GET or a POST, setting a magic commitContextId=?" or "cancelContextId" field which specifies what child context completed.
- On restore-view, when commitContextId is set then we deserialize the view from the specified child context, clear the modifies fields, execute the onflow return statements, etc. Then remove all child contexts and set renderResponse=true.
The only issue is knowing whether the GET statement belongs to the current context, or whether:
- this should trigger a subflow
- this should trigger discard of the current flow
I don't think we ever need to support creating a subflow with a GET; such a flow could never have any import or export expressions or return address, so is useless.
We can simply look to see whether the url matches the flowPath in the current context to know if a discard is needed. Recursive flows are fine; a flow just navigates to an outcome that it defines itself as an o:onflow. At that point we create the child context.
Note: this is no dangerous timing issue here; if the iframe fails to pop up for any reason, then the next request will be for some parent of the current context
Note: storing the caller view state in the context is mandatory for server-side-state-saving because we do not know whether the flow will push the saved state out of the cache. That would be bad. And for non-popup client-side state it is also mandatory, as the hidden field is long gone. It could be skipped in the case of client-side-state in a model flow, but that would be rather inconsistent. | http://wiki.apache.org/myfaces/OrchestraDialogPageFlowDesign2?highlight=ServiceLoader | CC-MAIN-2014-10 | refinedweb | 4,971 | 61.77 |
> {-# LANGUAGE > GeneralizedNewtypeDeriving, > MultiParamTypeClasses, > OverlappingInstances, > FlexibleInstances, > FlexibleContexts, > TypeFamilies > #-} > > module CFLP.Strategies.CallTimeChoice ( > > CTC, StoreCTC, callTimeChoice > > ) where > > import Data.Maybe > import qualified Data.IntMap as IM > > import Control.Monad > > import CFLP.Control.Monad.Update > import CFLP.Control.StrategyWe define an interface for choice stores that provide an operation to lookup a previously made choice. The first argument of `assertChoice` is a dummy argument to fix the type of the store in partial applications.
> class ChoiceStore c > where > lookupChoice :: Int -> c -> Maybe Int > assertChoice :: MonadPlus m => c -> Int -> Int -> c -> m cA finite map mapping unique identifiers to integers is a `ChoiceStore`. The `assertChoice` operations fails to insert conflicting choices.
>) = > maybe (return (ChoiceStoreIM (IM.insert u x cs))) > (\y -> do guard (x==y); return (ChoiceStoreIM cs)) > (IM.lookup u cs)The operation `labelChoices` takes a unique label and a list of monadic values that can be constrained with choice constraints. The result is a list of monadic actions that are constrained to take the same alternative at every shared occurrence when collecting constraints.
> labeledChoices :: (Monad m, ChoiceStore c, MonadUpdate c m) > => c -> Int -> [m a] -> [m a] > labeledChoices c u xs = > maybe (zipWith constrain [(0::Int)..] xs) > ((:[]).(xs!!)) > (lookupChoice u c) > where constrain n = (update (assertChoice c u n)>>. Transformer for Contexts and Strategies --------------------------------------- We define a transformer for evaluation contexts that adds a choice store.
> data StoreCTC c = StoreCTC ChoiceStoreIM cA transformed store is itself a choice store and dispatches the calls to the internal choice store.
> instance ChoiceStore (StoreCTC c) > where > lookupChoice n (StoreCTC c _) = lookupChoice n c > assertChoice _ n m (StoreCTC c s) > = liftM (`StoreCTC`s) (assertChoice c n m c)The type constructor `StoreCTC` is an evaluation context transformer.
> instance Transformer StoreCTC > where > project (StoreCTC _ s) = s > replace (StoreCTC c _) = StoreCTC cWe define uniform liftings for choice stores: a choice store that is transformed with an arbitrary transformer is still a choice store.
> instance (ChoiceStore c, Transformer t) => ChoiceStore (t c) > where > lookupChoice n = lookupChoice n . project > assertChoice _ n m = inside (\c -> assertChoice c n m c)We define a new type for strategies that ensure call-time choice semantics.
> newtype CTC s a = CTC { fromCTC :: s a } > deriving (Monad, MonadPlus, Enumerable) > > type instance Ctx (CTC s) = StoreCTC (Ctx s) > type instance Res (CTC s) = CTC (Res s)We provide a constructor function that allows us to hide the corresponding newtype constructor.
> callTimeChoice :: s a -> CTC s a > callTimeChoice = CTCThe type `CTC` is a strategy transformer for strategies that have acces to a choice store.
> instance ChoiceStore c => StrategyT c CTC > where > liftStrategy _ = CTC > baseStrategy _ = fromCTC > > extendContext _ = StoreCTC noChoices > extendChoices = labeledChoices > > alterNarrowed c n isn > | isJust (lookupChoice n c) = return True > | otherwise = isn | http://hackage.haskell.org/package/cflp-2009.1.23.2/docs/src/CFLP-Strategies-CallTimeChoice.html | CC-MAIN-2016-50 | refinedweb | 452 | 53.92 |
XmlDocument.Save Method (String)
Saves the XML document to the specified file. If the specified file exists, this method overwrites it.
Assembly: System.Xml (in System.Xml.dll)
White space is preserved in the output file only if PreserveWhitespace is set to true.
The XmlDeclaration of the current XmlDocument object determines the encoding attribute in the saved document. The value of the encoding attribute is taken from the XmlDeclaration.Encoding property. If the XmlDocument does not have an XmlDeclaration, or if the XmlDeclaration does not have an encoding attribute, the saved document will not have one either.
When the document is saved, xmlns attributes are generated to persist the node identity (local name + namespace URI) correctly. For example, the following C# code
generates this xmls attribute <item xmls="urn:1"/>.
This method is a Microsoft extension to the Document Object Model (DOM).
Note that only the Save method enforces a well-formed XML document. All other Save overloads only guarantee a well-formed fragment.
The following example loads XML into an XmlDocument object, modifies it, and then saves it to a file named data.xml.
The data.xml file will contain the following XML: <item><name>wrench</name><price>10.95</price></item>.
Available since 1.1 | https://msdn.microsoft.com/en-us/library/dw229a22?cs-save-lang=1&cs-lang=fsharp | CC-MAIN-2018-05 | refinedweb | 207 | 50.33 |
[SOLVED] malloc fails after calling getOpenFileNames() with native file dialog
I'd like your help please, with a huge memory allocation that fails after calling getOpenFileNames(), but works if I do the same using the non-native Qt file dialog. Here's the shortest piece of code that generates a runtime error, because 400 MBytes could not be allocated:
#include <QApplication> #include <QFileDialog> int main(int argc, char *argv[]) { QApplication a(argc, argv); QStringList files = QFileDialog::getOpenFileNames(); char* x = new char[4*100*1000000]; // allocate 400MB return 0; }
I know that asking for a contiguous block of 400 MB is almost a bug in itself. However, the allocation succeeds if I either
- Don't call the getOpenFileNames() at all
- Call getOpenFileNames() with the QFileDialog:DontUseNativeDialog option
Question: Is there a way to use the Windows native file dialog and make the 400MB allocation succeed afterwards?
Notes:
- I cannot allocate before calling the file dialog, as the size of the allocation depends on the file(s) selected.
- Smaller allocations work, e.g. 100MB.
- I'm on Qt 5.5, Windows 7 64bit, the application is 32bit, mingw.
- Jeroentjehome
Hi,
You probably use up your entire stack/heap memory allocated to your program. Increase it if you need more memory. Calling native dialogs might just allocate less memory and thus leave you room to allocate the insane char buffer.
- mrjj Lifetime Qt Champion
Hi and welcome
a 32bit program on windows can max get 2GB default.
I guess the QFileDialog::getOpenFileNames() fragments memory a bit so
after the call, it is not possible to get contiguous block of 400 MB.
You could try with
QMAKE_LFLAGS += -Wl,--large-address-aware
and see if that allows you to get the block.
(note: never tried that wingw)
Can I ask why you need such huge buffer?
- alex_malyu
Working with arrays of size so close to available memory for the process requires custom memory management. Which always is expensive in terms of efforts required.
And there are always files which are not going to fit anyway,
I would rather invest in handling files by parts.
@mrjj
Hi, thank you for your response. The --large-address-aware linker option has fixed the issue for me. I understand my process now has 3GB of useable memory. This seems to be sufficient to allocate 400MB, even after the file dialog has probably fragmented the memory. I can now allocate even 800+ MB.
I might still decide to avoid the native file dialog.
The huge buffer is for loading a huge image, which is then transferred as one chunk into GPU texture memory for display. This will obviously not work on every PC. As other people have suggested, the correct solution is to avoid such a large alloc. The code is just that way at the moment.
@Jeroentjehome Hi, thanks for your reply, increasing memory by using --large-address-aware has worked around my problem.
@alex_malyu Hi and thanks for replying, you're of course correct with your suggestion for better handling of that huge file. This will have to wait, though...
- mrjj Lifetime Qt Champion
@stefanq
Super.
Thank you for the feedback.
Was not sure the linker flag would work with mingw.
That is one mother of a texture :)
Good luck | https://forum.qt.io/topic/57862/solved-malloc-fails-after-calling-getopenfilenames-with-native-file-dialog/4 | CC-MAIN-2019-30 | refinedweb | 542 | 64.51 |
Hi Guillaume,
you asked about my goals. I think these are my main goals:
1. The user should be able to expose commands without the boilerplate
code. So basically we agree that ideally the user should just define the
Action + annotations in his code
2. The user should only be exposed to the API (again actions + annotations)
3. The solution should ideally be framework agnostic. For example a
blueprint namespace based solution will not work with gemini blueprint
4. The user should be able to do as much of his injections with his
standard framework as possible
So I assume we just differ in goal 3 and you additionally have the goal
of using services only for calling code over a published interface.
So basically if we just look at the user java code our solutions look
very similar. The user defines an action with annotations.
In my whiteboard approach the user only exposes the Action as a service.
The whole work of interpreting the meta data is done in the command
exporter. Unfortunately this is on the other side of an OSGi service. So
we kind of abuse the service to publish a template object. See below why
I agree this is not good but also not that problematic.
In your proposal the work is done in a framework specific extension. I
like your new proposal much more than the first one already as we reuse
the DI of the framework.
So your proposal mainly differs from my goals in the 3rd goal. The
disadvantage I see in your solution is that we have to create an
extension for each framework.
If we assume that we only support aries blueprint and DS then this is
surely doable. Still it would be a lot more effort short term as well as
long term.
See some more comments inline....
On 17.02.2014 16:30, Guillaume Nodet wrote:
> 2014-02-17 16:01 GMT+01:00 Christian Schneider <chris@die-schneider.net>:
>
>> I can very well imagine that this would work. I am not sure if it is a
>> good idea thought.
>> We would recreate parts of the dependency injection framework.
>>
> Well, which parts ? I don't understand. My first attempt was indeed
> creating a very small DI framework, but here, my goal is to delegate
> everything to the real DI. The namespace handler would simply be enhanced
> to discover the actions and the injection points and create blueprint
> metadata out of it. This would obviously require a bit of reflection on
> the classes, but nothing different from your proposal.
>
>
>> In any case it would require that we implement a solution for each DI
>> framework out there. Additionally we would have to keep our extensions up
>> to date with the framework evolution.
>> While I can understand that my approach has its own fallacies I think it
>> would create less issues and less work even in the long term.
>>
>> What I could imagine as a kind of middle ground is to simply have a
>> special processing for the @Command annotation.
>>
>> For example in blueprint we could have:
>>
>> <bean class="MyAction">
>> .... do some injections ---
>> </bean>
>>
>> We then let blueprint do its injections but also create the
>> BlueprintCommand for it and export the service. So this would be very
>> similar to the namespace but not requiring it. Not sure if this is possible
>> in blueprint though. The nice thing with this approach is that it might
>> also work with blueprint annotations. In any case I propose we let the
>> framework do its injections and only work with the fully injected Action.
>
> Ok, so I misunderstood as it seems we don't have the same goals. My point
> is to make typical things easy for the user and not having to write this
> boilerplate code. I.e. in 95% of the use cases, the user should not have to
> deal with blueprint xml or DS specific annotations.
>
> If the user wants to fully reuse its DI, he can do so, but our current
> model is quite verbose. We have such an example for blueprint:
>
>
>
> and for DS:
>
>
>
> So I'm not trying to change the above model, rather keep this model (Action
> / Command), but make things easier for the user to register commands. I
> first got rid of the completers and used annotations to wire them through
> OSGi services, now I want to get rid of exposing the command in the OSGi
> registry manually, even if we delegate to the underlying DI framework.
>
> IIUC, what you're proposing is a new simplified model so that users don't
> have to deal with commands but only with actions. While I don't have any
> problems with discussing a new model, I'd like to avoid misusing our
> current model (and breaking a few OSGi contracts). More importantly, your
> proposal does not help a single bit solving my problem.
In my model there is still command and action. The only difference is
that the Action is exposed as a service while in your model the Action
is only
exposed to the framework extension. The other steps are quite the same I
think.
>
> Though, if my goal can be achieved somehow, there's no real value left in
> your proposal, as it would only help in 1% of the cases that can't be
> covered by my proposal, and I think those will be the same that can't be
> covered by the blueprint xml handler now (i.e. exit and help commands).
I see it quite the other way round. My solution is much simpler and
covers 100% of the problem space. The downside is an unusual usage of
OSGi services.
I think my approach does not really violate any contracts the OSGi spec
poses on services. The fact that we typically use services only with
their interface is
not mandated by the spec as far as I can tell. Still I see that using a
service to expose a template object is quite unusual. So I would really
be interested in a solution that avoids this while also
being framework agnostic.
As both solutions do work I think it is a question of weighing the pros
and cons. I am still searching for a better way though. So lets continue
experimenting for some more time. Perhaps we really find something better.
Christian
--
Christian Schneider
Open Source Architect | http://mail-archives.us.apache.org/mod_mbox/karaf-dev/201402.mbox/%3C53023745.9060008@die-schneider.net%3E | CC-MAIN-2019-26 | refinedweb | 1,061 | 72.66 |
On 9/24/20 1:38 PM, Arvind Sankar wrote: > On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote: >> The Secure Launch (SL) stub provides the entry point for Intel TXT (and >> later AMD SKINIT) to vector to during the late launch. The symbol >> sl_stub_entry is that entry point and its offset into the kernel is >> conveyed to the launching code using the MLE (Measured Launch >> Environment) header in the structure named mle_header. The offset of the >> MLE header is set in the kernel_info. The routine sl_stub contains the >> very early late launch setup code responsible for setting up the basic >> environment to allow the normal kernel startup_32 code to proceed. It is >> also responsible for properly waking and handling the APs on Intel >> platforms. The routine sl_main which runs after entering 64b mode is >> responsible for measuring configuration and module information before >> it is used like the boot params, the kernel command line, the TXT heap, >> an external initramfs, etc. >> >> Signed-off-by: Ross Philipson <ross.philip...@oracle.com> > > Which version of the kernel is this based on?
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master branch > >> diff --git a/arch/x86/boot/compressed/head_64.S >> b/arch/x86/boot/compressed/head_64.S >> index 97d37f0..42043bf 100644 >> --- a/arch/x86/boot/compressed/head_64.S >> +++ b/arch/x86/boot/compressed/head_64.S >> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL) >> SYM_FUNC_END(efi32_stub_entry) >> #endif >> >> +#ifdef CONFIG_SECURE_LAUNCH >> +SYM_FUNC_START(sl_stub_entry) >> + /* >> + * On entry, %ebx has the entry abs offset to sl_stub_entry. To >> + * find the beginning of where we are loaded, sub off from the >> + * beginning. >> + */ > > This requirement should be added to the documentation. Is it necessary > or can this stub just figure out the address the same way as the other > 32-bit entry points, using the scratch space in bootparams as a little > stack? It is based on the state of the BSP when TXT vectors to the measured launch environment. It is documented in the TXT spec and the SDMs. > >> + leal (startup_32 - sl_stub_entry)(%ebx), %ebx >> + >> + /* More room to work in sl_stub in the text section */ >> + jmp sl_stub >> + >> +SYM_FUNC_END(sl_stub_entry) >> +#endif >> + >> .code64 >> .org 0x200 >> SYM_CODE_START(startup_64) >> @@ -537,6 +552,25 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated) >> shrq $3, %rcx >> rep stosq >> >> +#ifdef CONFIG_SECURE_LAUNCH >> + /* >> + * Have to do the final early sl stub work in 64b area. >> + * >> + * *********** NOTE *********** >> + * >> + * Several boot params get used before we get a chance to measure >> + * them in this call. This is a known issue and we currently don't >> + * have a solution. The scratch field doesn't matter and loadflags >> + * have KEEP_SEGMENTS set by the stub code. There is no obvious way >> + * to do anything about the use of kernel_alignment or init_size >> + * though these seem low risk. 
>> + */ > > There are various fields in bootparams that depend on where the > kernel/initrd and cmdline are loaded in memory. If the entire bootparams > page is getting measured, does that mean they all have to be at fixed > addresses on every boot? Yes that is a very good point. In other places when measuring we make sure to skip things like addresses and sizes of things outside of the structure being measured. This needs to be done with boot params too. > > Also KEEP_SEGMENTS support is gone from the kernel since v5.7, since it > was unused. startup_32 now always loads a GDT and then the segment > registers. I think this should be ok for you as the only thing the flag > used to do in the 64-bit kernel was to stop startup_32 from blindly > loading __BOOT_DS into the segment registers before it had setup its own > GDT. Yea this was there to prevent that blind loading of __BOOT_DS. I see it is gone so I will remove the comment and the place where the flag is set. > > For the 32-bit assembler code that's being added, tip/master now has > changes that prevent the compressed kernel from having any runtime > relocations. You'll need to revise some of the code and the data > structures initial values to avoid creating relocations. Could you elaborate on this some more? I am not sure I see places in the secure launch asm that would be creating relocations like this. Thank you, Ross > > Thanks. > _______________________________________________ iommu mailing list iommu@lists.linux-foundation.org | https://www.mail-archive.com/iommu@lists.linux-foundation.org/msg45091.html | CC-MAIN-2021-10 | refinedweb | 711 | 64.1 |
Perhaps using gobject-introspection, like we do in HarfBuzz...
G-I sucks for non-GObject projects though; a SWIG or even ctypes alternative might be easier. Not sure. Just filing the request for now.
sounds nice. let me take a look.
FWIW, providing all of the public APIs for language bindings may be hard; some of them aren't designed in an OOP style. That is another topic, and may be something to polish in fontconfig 3, but anyway.
Right...
But just in case, which API pieces do you have in mind exactly?
Given that we use g-i, it assumes the function takes the object in the first argument. in that sense, we can't provide bindings for functions one will mostly uses like FcFont*List(), FcFont*Match(), FcFont*Sort() because these aren't placed at the namespace of FcConfig say. we need to wrap up to make them language-bindings available.
Functions that don't match the naming pattern will be bound as module functions, so that's not a problem. We would get fc.FontSort(cf.
Robert Zeigler commented on TAP5-745:
-------------------------------------
Complicated. :) (closed)
talks about the spec saying that getText() on the dtd element returns the "internal subset"
of the DTD (which is true in java 1.6 docs; 1.5 docs (via webservices) says simply: String
value of the DTD). Basically, what this means is that there is no implementation-independent
manner of dealing with dtd's right now. Incidentally, the maintenance release of the stax
jsr mentioned in WSTX-78 has been withdrawn. There's a pending maintenance release now, but,
it doesn't touch dtd's.
I've now tried running the full tapestry-core test suite with the patch with:
a) woodstox as the underlying parser
b) stax:stax:1.2.0 as the underlying parser
c) recompiling with java 1.6 and using the built-in parser
a: fails due to getText() returning the empty string for the DTD
b: likewise fails, but additionally fails due to not supporting external entities
c: worst of all. Can't handle namespace prefixes; at least, it ignores them as the code is
now (plus it would make tapestry require java 1.6, which is a nogo; this was more for my curiosity
than anything else).
At this point, it appears that there is no good way to deal with DTD's via the standard stax
API, but properly dealing with DTD's is critically important since client interpretation of
css depends on the doctype used. Thus, I'm closing this issue as "won't fix" and woodstox-independence
will have to wait until either:
1) TAP5-713 is resolved or
2) A user who cares about woodstox independence is willing to step up and provide a patch
that actually works (please: be sure to run all of the tapestry-core tests, at least, and
make sure they pass before submitting! And be sure to test with java 1.5!)
> Remove Woodstox-specific Stax implementation usage
> --------------------------------------------------
>
> Key: TAP5-745
> URL:
> Project: Tapestry 5
> Issue Type: Bug
> Components: tapestry-core
> Affects Versions: 5.1.0.0, 5.1.0.1, 5.1.0.2, 5.1.0.3, 5.1.0.4, 5.1.0.5, 5.1
> Reporter: Christian Köberl
> Assignee: Robert Zeigler
> Priority: Critical
> Attachments: TAP5-745-5.1.0.5.patch
>
>
Opened 8 years ago
Closed 5 years ago
#11307 closed enhancement (fixed)
Please support specifying full DN of group whose membership is required to log in
Description
It is sometimes desirable to have LDAP groups used for trac access control in a separate subtree from groups used for e.g. role management (such as granting TRAC_ADMIN privileges).
Currently, the plugin doesn't support specifying group DNs anywhere; all groups are only referred to using their cn, which is not guaranteed to be unique if the search can span several subtrees.
Attachments (0)
Change History (13)
comment:1 Changed 6 years ago by
comment:2 follow-up: 3 Changed 6 years ago by
the group search base dn is configured via group_basedn
comment:3 Changed 6 years ago by
comment:4 Changed 6 years ago by
So that's not really possible. Groups are expanded into the Trac "namespace" by their CN. What you're suggesting has a few problems:
- the nomenclature is "@" + {groupname} .. so "@admin". You can't use @{DN} as many DN's have spaces and that won't parse.
- If you have multiple CN's with the same name under group_basedn .. with different permission models.. you'll have trouble. The search is going to return the first entry and use that on the lookup, and it's rather non-deterministic as to which one will come first. That's why we use a group_basedn to identify the top of the tree where we know it'd be right.
If you have this case.. i'd suggest you create a sub OU under group called Trac .. and then define your permissions there, unique to the definitions.
You could also explore other ways to resolve this:
- allow more than one group_basedn to be defined, and the ordering of the definition would specify precedence. While this might work it's a penalty in search.. and will be complex.
comment:5 follow-up: 6 Changed 6 years ago by
btw: I dropped the '@' prefix since it caused trouble elsewhere... and I did not see any benefit of this prefix.
And I agree that a structured LDAP tree should have all trac groups as children/grandchildren/etc. under one ou node.
Thus specifying this sinlge ou as the group_basedn and with all permission groups having unique names there is no way for any duplicate.
comment:6 Changed 6 years ago by
btw: I dropped the '@' prefix since it caused trouble elsewhere... and I did not see any benefit of this prefix.
You might want to write a db update to migrate the permissions models for adopters that are using the current implementation. The use of '@' was the original configuration before I took on this plugin before; I left it that way as there were usernames that matched group names in AD... and there were also situations where local 'group' names may have been the same as AD group names (anonymous) .. and this was a good way to make sure the groups were uniquely identified.
Also helped identify the groups from the users... in my the last few implementations we had ~1500 users and ~300 groups. Doesn't hurt to separate it .. but will limit the implementations for some people and won't be backwards compatible unless you update the database as well.
comment:7 Changed 6 years ago by
comment:8 Changed 5 years ago by
comment:9 Changed 5 years ago by
please test with the version 2.1.0-SNAPSHOT
comment:10 Changed 5 years ago by
comment:11 Changed 5 years ago by
comment:12 Changed 5 years ago by
Thanks for working on this!
Unfortunately, my original use case is no longer relevant, so it'll be a while before I can test the new functionality.
comment:13 Changed 5 years ago by
my tests with 'dn' instead of 'cn' were successful
In 14830: | https://trac-hacks.org/ticket/11307 | CC-MAIN-2021-39 | refinedweb | 645 | 71.75 |
Java Interview Questions for Experienced Professionals
Are you looking to upgrade your profile and land your dream job? If yes, this is the perfect place.
After covering the beginner and intermediate level interview questions and answers in Java, we have come up with the advanced level interview questions of core Java. These questions are aimed at experienced Java developers. So let’s begin with Java interview questions for experienced professionals.
Q.1. What is JCA in Java?
Answer. The term JCA stands for Java Cryptography Architecture. Sun Microsystems introduced it to implement security functions for the Java platform. JCA provides an architecture and APIs for the encryption and decryption of data. Developers use JCA to integrate security measures into their applications, and it also allows third-party security providers to be plugged in. JCA uses hash functions, message digests, encryption, etc. to implement the security functions.
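As an illustration of the kind of API JCA exposes, here is a small sketch (the class and method names are our own, not part of any standard) that computes a SHA-256 message digest through the JCA MessageDigest API:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class JcaDigestDemo {

    // Computes the SHA-256 digest of the input and returns it as a hex string.
    public static String sha256Hex(String input) {
        try {
            // JCA picks a registered provider that implements "SHA-256".
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 is required to be available on every JRE.
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // prints 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
        System.out.println(sha256Hex("hello"));
    }
}
```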
Q.2. What is JPA in Java?
Answer. JPA stands for Java Persistence API(Application Programming Interface). JPA is a standard API that allows us to access databases from within Java applications. It also enables us to create the persistence layer for desktop and web applications.
The main advantage of using JPA over JDBC is that JPA represents the data in the form of objects and classes instead of tables and records as in JDBC.
Java Persistence deals with the following:
1. Java Persistence API
2. Query Language
3. Java Persistence Criteria API
4. Object Mapping Metadata
Q.3. What is JMS in Java?
Answer. JMS stands for Java Message Service. JMS provides a communication interface between two clients using message-passing services. It enables an application to interact with other components irrespective of where those components reside, whether they run on the same system or connect to the main system through a LAN or the internet.
Q.4. What is a Chained Exception in Java?
Answer. When the first exception causes another exception to execute in a program, such a condition is termed as Chained Exception. Chained exceptions help in finding the root cause of the exception that occurs during the execution of the application.
Below are the constructors and methods of the Throwable class that support chained exceptions:

1. Throwable(Throwable cause)
2. Throwable(String message, Throwable cause)
3. Throwable initCause(Throwable cause)
4. Throwable getCause()
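The constructors above can be sketched in use as follows (the class and messages are our own, chosen only for illustration):

```java
public class ChainedExceptionDemo {

    // Throws an exception whose cause is another exception, forming a chain.
    static void readConfig() {
        try {
            throw new java.io.IOException("config file missing"); // root cause
        } catch (java.io.IOException e) {
            // Throwable(String, Throwable) links the new exception to its cause.
            throw new IllegalStateException("initialization failed", e);
        }
    }

    public static void main(String[] args) {
        try {
            readConfig();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());            // initialization failed
            System.out.println(e.getCause().getMessage()); // config file missing
        }
    }
}
```

Walking the getCause() chain is exactly how you find the root cause of a failure.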
Q.5. State the differences between JAR and WAR files in Java?
Answer. The differences between the JAR file and the WAR file are the following:
- JAR file stands for Java Archive file, which allows us to combine many files into a single file. Whereas, WAR file stands for Web Application Archive file, which stores XML, Java classes, and JavaServer Pages, etc., for web application purposes.
- JAR files hold Java classes in a library. Whereas, WAR files store the files in the ‘lib’ directory of the web application.
- All the enterprise Java Bean classes and EJB deployment descriptors present in the EJB module are packed and stored in a JAR file with .jar extension. Whereas, the WAR file contains the web modules such as Servlet classes, GIFs, HTML files, JSP files, etc., with .war extension.
Q.6. What is the dynamic method dispatch in Java?
Answer. Dynamic Method Dispatch is also called runtime polymorphism. It is a method in which the overridden method is resolved during the runtime, not during the compilation of the program. More specifically, the concerned method is called through a reference variable of a superclass.
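A minimal sketch of runtime polymorphism (class names Animal/Dog are our own example, not from the source):

```java
class Animal {
    String sound() { return "generic sound"; }
}

class Dog extends Animal {
    @Override
    String sound() { return "woof"; } // overridden method, resolved at runtime
}

public class DispatchDemo {

    public static String describe(Animal a) {
        // The call below is dispatched on the runtime type of 'a',
        // not on the declared (compile-time) type Animal.
        return a.sound();
    }

    public static void main(String[] args) {
        Animal a = new Dog(); // superclass reference to a subclass object
        System.out.println(describe(a)); // woof
    }
}
```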
Q.7. How does HashMap work in Java?
Answer. A HashMap in Java works by storing key-value pairs. The HashMap uses a hash function and requires the hashCode() and equals() methods to put elements into the collection and retrieve them from it. On the invocation of the put() method, the HashMap calculates the hash value of the key and then stores the pair at the corresponding index inside the collection. If the key already exists, it updates the value of the key with the new value. Some important characteristics of a HashMap are its capacity, its load factor, and its resize threshold.
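The put/get behavior described above can be sketched as follows (the map contents are our own example):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapDemo {

    // Builds a map, overwriting one key to show how put() behaves.
    public static Map<String, Integer> buildAges() {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);  // hash of "alice" selects the bucket index
        ages.put("bob", 25);
        ages.put("alice", 31);  // same key: the value is replaced, size stays 2
        return ages;
    }

    public static void main(String[] args) {
        Map<String, Integer> ages = buildAges();
        System.out.println(ages.get("alice")); // 31
        System.out.println(ages.size());       // 2
    }
}
```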
Q.8. What are the differences between HashMap and Hashtable?
Answer. The differences between HashMap and Hashtable in Java are:
- Hashtable is synchronized while HashMap is not synchronized. For the same reason, HashMap works better in non-threaded applications, because unsynchronized objects typically perform better than synchronized ones.
- Hashtable does not allow null keys or null values whereas HashMap allows one null key and any number of null values.
- One of the subclasses of HashMap is LinkedHashMap, so if we want a predictable iteration order in the event, we can easily swap out the HashMap for a LinkedHashMap. But, this would not be as easy using Hashtable.
Q.9. What is the role of System.gc() and Runtime.gc() methods in Java?
Answer. The System class contains a static method called gc() for requesting the JVM to run the Garbage Collector. Using the Runtime.getRuntime().gc() method, the Runtime class allows the application to interact with the JVM in which it is running. Both the System.gc() and Runtime.gc() methods give a hint to the JVM to start a garbage collection. However, it is up to the Java Virtual Machine (JVM) to start the garbage collection immediately or later in time.
Q.10. Does not overriding hashCode() method have any impact on performance?
Answer. A poor hashCode() function will result in the frequent collision in HashMap. This will eventually increase the time for adding an object into HashMap. But, from Java 8 onwards, the collision will not impact performance as much as it does in earlier versions. This is because after crossing a threshold value, the linked list gets replaced by a binary tree, which will give us O(logN) performance in the worst case as compared to O(n) of a linked list.
Q.11. What happens when we create an object in Java?
Answer. Following things take place during the creation of an object in Java:
- Memory allocation: Memory allocation takes place to hold all the instance variables of the class and implementation-specific data of the object.
- Initialization: Initialization occurs to initialize the objects to their default values.
- Constructor: Constructors invoke the constructors for their parent classes. This process goes on until the constructor for java.langObject is called. The java.lang.Object class is the base class for all objects in Java.
- Execution: Before the execution of the body of the constructor, all the instance variables should be initialized and there must be the execution of all the initialization blocks. After that, the body of the constructor is executed.
Q.12. When do you override hashCode() and equals() methods in Java?
Answer. We override the hashCode() and equals() methods whenever it is necessary. We override them especially when we want to do the equality check based upon business logic rather than object equality. For example, two employee objects are equal if they have the same empId, despite the fact that they both are two different objects, created using different parts of the code.
Also, overriding both these methods is a must when we use the objects as keys in a HashMap. As part of the equals-hashCode contract in Java, when we override the equals() method, we must override hashCode() as well; otherwise, the object will break the invariants of classes such as Set and Map, which rely on the equals() method for functioning properly.
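The empId-based equality described above can be sketched like this (the Employee class is our own example):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

class Employee {
    final int empId;
    final String name;

    Employee(int empId, String name) {
        this.empId = empId;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Employee)) return false;
        return empId == ((Employee) o).empId; // equality based on the business key
    }

    @Override
    public int hashCode() {
        return Objects.hash(empId); // must agree with equals()
    }
}

public class EqualsHashCodeDemo {
    public static void main(String[] args) {
        Set<Employee> staff = new HashSet<>();
        staff.add(new Employee(42, "Alice"));
        staff.add(new Employee(42, "A. Smith")); // same empId: treated as a duplicate
        System.out.println(staff.size()); // 1
    }
}
```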
Q.13. What will be the problem if you do not override the hashCode() method?
Answer. If we do not override the hashCode() method, then the contract between equals() and hashCode() will not work. Two objects which are equal by the equals() method must have the same hash code. Without a matching hashCode(), an equal object may return a different hash code and be stored in a different bucket location. This breaks the invariants of the HashMap class because it does not allow duplicate keys.

When we add an object using the put() method, the HashMap iterates through all the Map.Entry objects present in that bucket location. It also updates the value of the previous mapping if the Map already contains that key. This will not work if we do not override the hashCode() method.
Q.14. What is the difference between creating the String as a literal and with a new operator?
Answer. When we create a String object in Java using the new operator, it is created in the heap memory area and not in the String pool. But when we create a String using a literal, it gets stored in the String pool itself. Up to Java 6, the String pool lived in the PermGen area; from Java 7 onwards it resides in the main heap.
For example,
String str = new String("java");
The above statement does not put the String object str in the String pool. We need to call the String.intern() method to put the String objects into the String pool explicitly.
It is only possible when we create a String object as String literal.
For example,
String str1 = "java";
Java automatically puts the String object into the String pool.
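The difference between the two creation styles can be observed with reference comparisons:

```java
public class StringPoolDemo {
    public static void main(String[] args) {
        String a = "java";              // literal: goes into the String pool
        String b = "java";              // reuses the pooled instance
        String c = new String("java");  // new object on the heap, outside the pool

        System.out.println(a == b);          // true  (same pooled object)
        System.out.println(a == c);          // false (distinct heap object)
        System.out.println(a == c.intern()); // true  (intern() returns the pooled copy)
    }
}
```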
Q.15. Which are the different segments of memory?
Answer.
- Stack Segment: The stack segment contains the local variables and reference variables. Reference variables hold the address of an object in the heap segment.
- Heap Segment: The heap segment contains all the objects that are created during runtime. It stores objects and their attributes (instance variables).
- Code Segment: The code segment stores the actual compiled Java bytecodes when loaded.
Q.16. Does the garbage collector of Java guarantee that a program will not run out of memory?
Answer. There is no guarantee that using a Garbage collector will ensure that the program will not run out of memory. As garbage collection is an automatic process, programmers need not initiate the garbage collection process explicitly in the program. A Garbage collector can also choose to reject the request and therefore, there is no guarantee that these calls will surely do the garbage collection. Generally, JVM takes this decision based on the availability of space in heap memory.
Q.17. Describe the working of a garbage collector in Java.
Answer. Java Runtime Environment(JRE) automatically deletes objects when it determines that they are no longer useful. This process is called garbage collection in Java. Java runtime supports a garbage collector that periodically releases the memory from the objects that are no longer in need.
The Java Garbage collector is a mark and sweeps garbage collector. It scans dynamic memory areas for objects and marks those objects that are referenced. After finding all the possible paths to objects are investigated, those objects that are not marked or not referenced) are treated like garbage and are collected.
Q.18. What is a ThreadFactory?
Answer. A ThreadFactory is an interface in Java that is used to create threads rather than explicitly creating threads using the new Thread(). It is an object that creates new threads on demand. The Thread factory removes hardwiring of calls to new Thread and enables applications to use special thread subclasses, and priorities, etc.
Q.19. What is the PermGen or Permanent Generation?
Answer. PermGen is a memory pool that contains all the reflective data of the Java Virtual Machine(JVM), such as class, objects, and methods, etc. The Java virtual machines that use class data sharing, the generation is divided into read-only and read-write areas. Permanent generation contains the metadata required by JVM to describe the classes and methods used in Java application. Permanent Generation is populated by the JVM during the runtime on the basis of classes used by the application. Additionally, Java SE(Software Edition) library classes and methods may also be stored in the PermGen or Permanent generation.
Q.20. What is a metaspace?
Answer. From Java 8 onwards, the Permanent Generation or PermGen space has been completely removed and replaced by a new space called Metaspace. The result of removing PermGen is that the PermSize and MaxPermSize JVM arguments are ignored and we will never get a java.lang.OutOfMemoryError: PermGen error.
Q.21. What is the difference between System.out, System.err and System.in?
Answer. Both System.out and System.err represent the monitor (the standard output device) by default. Hence they are used to write data or results to the monitor. System.out displays normal messages and results on the monitor, whereas System.err displays error messages. System.in represents an InputStream object, which by default represents the standard input device, that is, the keyboard.
Q.22. Why is the Char array preferred over String for storing passwords?
Answer. As we know that String is immutable in Java and stored in the String pool. Once we create a String, it stays in the String pool until it is garbage collected. So, even though we are done with the password it is still available in memory for a longer duration. Therefore, there is no way to avoid it.
It is clearly a security risk because anyone having access to a memory dump can find the password as clear text. Therefore, it is preferred to store the password using the char array rather than String in Java.
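The practical consequence is that a char array can be wiped immediately after use, which a String cannot (a small sketch with a made-up password):

```java
import java.util.Arrays;

public class PasswordDemo {
    public static void main(String[] args) {
        char[] password = {'s', 'e', 'c', 'r', 'e', 't'};

        // ... authenticate with the password here ...

        // Overwrite the contents as soon as the password is no longer needed.
        // A String could not be cleared like this because it is immutable.
        Arrays.fill(password, '\0');

        System.out.println(password[0] == '\0'); // true
    }
}
```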
Q.23. What is the difference between creating an object using new operator and Class.forName().newInstance()?
Answer. The new operator statically creates an instance of an object, whereas the newInstance() method creates an object dynamically. While both ways of creating objects effectively do the same thing, we should prefer the new operator over Class.forName("class").newInstance().

The newInstance() method uses the Reflection API of Java to look up the class at runtime. But when we use the new operator, the Java Virtual Machine knows beforehand that we need that class, and therefore it is more efficient.
Q.24. What are the best coding practices that you learned in Java?
Answer. If you are learning and working on a programming language for a couple of years, you must surely know a lot of its best practices. The interviewer just checks by asking a couple of them, that you know your trade well. Some of the best coding practices in Java can be:
- Always try to give a name to the thread, this will immensely help in debugging.
- Prefer to use the StringBuilder class for concatenating strings.
- Always specify the size of the Collection. This will save a lot of time spent on resizing the size of the Collection.
- Always declare the variables as private and final unless you have a good reason.
- Always code on interfaces instead of implementation.
- Always pass dependencies into a method instead of letting it obtain them by itself. This will make the code unit-testable.
Q.25. What is CountDownLatch in Java?
Answer. CountDownLatch in Java is like a synchronizer. It allows a thread to wait for one or more threads before starting the process. CountDownLatch is a very crucial requirement and we often need it in server-side core Java applications. Having this functionality built-in as CountDownLatch simplifies the development.
CountDownLatch in Java was introduced on Java 5 along with other concurrent utilities like CyclicBarrier, Semaphore, ConcurrentHashMap, and BlockingQueue. These are all present in the java.util.concurrent package.
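A minimal sketch of the wait-for-N-tasks pattern (the runWorkers helper is our own name; real work is simulated by a counter increment):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {

    // Starts 'workers' threads and waits for all of them before returning.
    public static int runWorkers(int workers) {
        CountDownLatch latch = new CountDownLatch(workers);
        AtomicInteger completed = new AtomicInteger();

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                completed.incrementAndGet(); // stand-in for real work
                latch.countDown();           // signal this worker is done
            }).start();
        }

        try {
            latch.await(); // the calling thread blocks until the count reaches zero
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runWorkers(3)); // 3
    }
}
```

Note that the latch also establishes a happens-before edge, so the counter value read after await() reflects all the workers' writes.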
Java Interview Questions for Experienced Developers
As time is changing and the competition is increasing day by day, gone are the days when the interview questions used to be very simple and straightforward. Now you have to get ready with tricky interview questions as well:
Q.26. What is CyclicBarrier in Java?
Answer. The CyclicBarrier class is present in the java.util.concurrent package. It is a synchronization mechanism that synchronizes threads progressing through some algorithm. A CyclicBarrier is a barrier at which all participating threads wait until every thread has reached it.
A CyclicBarrier is used when multiple threads carry out different subtasks and there is a need to combine the output of these subtasks to form the final output. After completing its execution, threads call the await() method and wait for other threads to reach the barrier.
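The combine-at-the-barrier pattern can be sketched like this (runParties is our own helper name; the barrier action merely records that all parties arrived):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {

    public static int runParties(int parties) {
        AtomicInteger merged = new AtomicInteger();
        // The barrier action runs once, after the last thread arrives.
        CyclicBarrier barrier = new CyclicBarrier(parties, () -> merged.set(parties));

        Thread[] threads = new Thread[parties];
        for (int i = 0; i < parties; i++) {
            threads[i] = new Thread(() -> {
                try {
                    barrier.await(); // each thread waits here for the others
                } catch (InterruptedException | BrokenBarrierException e) {
                    throw new RuntimeException(e);
                }
            });
            threads[i].start();
        }

        try {
            for (Thread t : threads) {
                t.join();
            }
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return merged.get();
    }

    public static void main(String[] args) {
        System.out.println(runParties(4)); // 4
    }
}
```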
Q.27. Differentiate between CountDownLatch and CyclicBarrier in Java?
Answer. Both CyclicBarrier and CountDownLatch are useful tools for synchronization between multiple threads. However, they are different in terms of the functionality they provide.
CountDownLatch allows one or more than one thread to wait for a number of tasks to complete while CyclicBarrier allows a number of threads to wait on each other. In short, CountDownLatch maintains a count of tasks whereas CyclicBarrier maintains a count of threads.
When the barrier trips in the CyclicBarrier, the count resets to its original value. CountDownLatch is different because the count never resets to the original value.
Q.28. What is the purpose of the Class.forName method?
Answer. The forName() method belongs to the java.lang.Class class. In JDBC, it is commonly used to load the driver class that establishes a connection to the database.

Q.29. Why does the Collection interface not extend the Cloneable or Serializable interfaces?
Answer. The Collection interface does not extend the Cloneable or Serializable interfaces because the Collection is the root interface for all the Collection classes like ArrayList, LinkedList, HashMap, etc. If the Collection interface extended Cloneable or Serializable, it would make it compulsory for all the concrete implementations of this interface to implement Cloneable and Serializable. Collection interfaces do not extend Cloneable or Serializable interfaces in order to give freedom to the concrete implementation classes.
Q.30. What is the advantage of using getters and setters?
Answer. Getters and Setters methods are used to get and set the properties of an object. The advantages are:
- We can check if new data is valid before setting a property.
- We can perform an action on the data which we are getting or setting on a property.
- We can control which properties we can store and retrieve.
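The validation advantage can be sketched with a simple class of our own invention:

```java
public class Account {
    private long balance; // hidden behind accessors

    public long getBalance() {
        return balance;
    }

    // The setter can validate new data before storing it.
    public void setBalance(long balance) {
        if (balance < 0) {
            throw new IllegalArgumentException("balance must not be negative");
        }
        this.balance = balance;
    }

    public static void main(String[] args) {
        Account acc = new Account();
        acc.setBalance(100);
        System.out.println(acc.getBalance()); // 100
    }
}
```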
Q.31. What is RMI?
Answer. RMI in Java stands for Remote Method Invocation. RMI is an API in Java that allows an object residing in one system or JVM to access or invoke an object running on another system or JVM. RMI is used to create distributed applications in Java. It provides remote communication between Java programs using two objects: stub and skeleton. It is present in the package java.rmi.
Q.32. State the basic principle of RMI architecture?
Answer. The principle of RMI architecture states that “the definition of the behavior and the implementation of that behavior are treated as separate concepts. Remote method Invocation allows the code that defines the behavior and the code that implements the behavior to remain separate and to run on separate JVMs”.
Q.33. What is the role of using Remote Interface in RMI?
Answer. A Remote Interface is an interface that is used to declare a set of methods that we can invoke from a remote Java Virtual Machine. The java.rmi.Remote interface is a marker interface that defines no methods:
public interface Remote {}
A remote interface must satisfy the following conditions:
- A remote interface should extend at least the java.rmi.Remote interface, either directly or indirectly.
- The declaration of Each method in a remote interface or its super-interfaces must satisfy the following requirements of a remote method declaration:
— The declaration of the remote method must include the exception of java.rmi.RemoteException in its throws clause.
— A remote object that is declared as a parameter or return value must be declared as the remote interface in a remote method declaration, not the implementation class of that interface.
Q.34. What is the role of java.rmi.Naming Class in RMI?
Answer. The Naming class of java.rmi package provides methods for storing and obtaining references to remote objects in a remote object registry. The methods of the java.rmi.Naming class makes calls to a remote object. This implements the Registry interface using the appropriate LocateRegistry.getRegistry method.
The Naming class also provides methods to get and store the remote object. The Naming class provides five methods: lookup(), bind(), rebind(), unbind(), and list().
Q.35. What is meant by binding in RMI?
Answer. Binding is the process of registering or associating a name for a remote object, that we can use later in order to look up that remote object. It associates the remote object with a name using the bind() or rebind() methods of the Naming class of java.rmi package.
Q.36. What is the purpose of RMISecurityManager in RMI?
Answer. RMISecurityManager is a class in the RMI package of Java. It provides a default security manager for RMI applications that need one because they use downloaded code. RMI's class loader will not download any classes if the user has not set a security manager. We cannot apply RMISecurityManager to applets, which run under the protection of their browser's security manager.
To set the RMISecurityManager, we need to add the following to an application’s main() method:
System.setSecurityManager(new RMISecurityManager());
Q.37. Explain Marshalling and unmarshalling.
Answer. Marshalling: When a client invokes a method that accepts parameters on a remote object, it bundles the parameters into a message before sending it over the network. These parameters can be of primitive type or objects. When the parameters are of primitive type, they are put together and a header is attached to it. If the parameters are objects, then they are serialized. This process is called marshalling.
Unmarshalling: The packed parameters are unbundled at the server-side, and then the required method is invoked. This process is called unmarshalling.
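For parameters that are objects, RMI relies on Java serialization. The round trip can be sketched locally like this (the Point class and the marshal/unmarshal helper names are our own, standing in for what RMI does internally):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class MarshalDemo {

    public static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        public final int x;
        public final int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }
    }

    // "Marshal": serialize the object graph into bytes for transmission.
    public static byte[] marshal(Object obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // "Unmarshal": rebuild the object from the received bytes.
    public static Object unmarshal(byte[] bytes) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Point p = (Point) unmarshal(marshal(new Point(3, 4)));
        System.out.println(p.x + "," + p.y); // 3,4
    }
}
```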
Q.38. What are the layers of RMI Architecture?
Answer. There are three layers of RMI architecture: the Stub and Skeleton Layer, the Remote Reference Layer, and the Transport Layer.
- The stub and skeleton layer helps in marshaling and unmarshaling the data and transmits them to the Remote Reference layer and receives them from the Remote Reference Layer.
- The Remote reference layer helps in carrying out the invocation. This layer manages the references made by the client to the remote object.
- The Transport layer helps in setting up connections, managing requests, monitoring the requests, and listening to incoming calls.
Q.39. What is the difference between a synchronized method and a synchronized block?
Answer. The differences between a synchronized method and a synchronized block are:
1. A synchronized method uses the method receiver as a lock. It uses ‘this’ for non-static methods and the enclosing class for static methods. Whereas, the synchronized blocks use the expression as a lock.
2. A synchronized method locks on that object only in which the method is present, while a synchronized block can lock on any object.
3. The synchronized method holds the lock throughout the method scope. While the lock is held only during that block scope, also known as the critical section in the synchronized block.
4. If the expression provided as parameter evaluates to null, the synchronized block can throw NullPointerException while this is not the case with synchronized methods.
5. The synchronized block offers granular control over the lock, because we can use any object as the lock to provide mutual exclusion for the critical-section code. A synchronized method always locks either on the current object (for instance methods) or on the class-level lock (for static synchronized methods).
Q.40. Write a simple program on a synchronized block.
Answer.
Program of Synchronized Block:
```java
class Table {
    void printTable(int n) {
        synchronized (this) { // synchronized block
            for (int i = 1; i <= 5; i++) {
                System.out.println(n * i);
                try {
                    Thread.sleep(400);
                } catch (Exception e) {
                    System.out.println(e);
                }
            }
        }
    } // end of the method
}

class MyThread1 extends Thread {
    Table t;

    MyThread1(Table t) {
        this.t = t;
    }

    public void run() {
        t.printTable(5);
    }
}

public class Test {
    public static void main(String args[]) {
        Table obj = new Table(); // only one object
        MyThread1 t1 = new MyThread1(obj);
        t1.start();
    }
}
```
Q.41. Differentiate between Serial and Throughput Garbage collectors?
Answer. Serial Garbage collector uses one thread to perform garbage collection in Java. On the other hand, Throughput garbage collector uses multiple threads to perform garbage collection.
We can use the Serial Garbage Collector for applications that run on client-style machines and do not have low pause time requirements. The Throughput Garbage Collector can be chosen for applications where high overall throughput matters more than short pause times.
Q.42. What is Double Brace initialization in Java?
Answer. Double brace initialization in Java is a combination of two separate Java processes. When we use the initialization block for an anonymous inner class it becomes double brace initialization in Java. The inner class that we created will have a reference to the enclosing outer class. We can use that reference using the ‘this’ pointer.
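The two sets of braces can be seen in a small sketch (the list contents are our own example):

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleBraceDemo {

    // Outer braces: an anonymous subclass of ArrayList.
    // Inner braces: an instance initializer block of that subclass.
    public static List<String> makeList() {
        return new ArrayList<String>() {{
            add("Java");
            add("Kotlin");
        }};
    }

    public static void main(String[] args) {
        List<String> langs = makeList();
        System.out.println(langs);                               // [Java, Kotlin]
        System.out.println(langs.getClass() == ArrayList.class); // false: anonymous subclass
    }
}
```

Because the result is an anonymous subclass, it holds a reference to its enclosing instance when created in a non-static context, which is one reason this idiom is often discouraged in production code.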
Q.43. What is Connection Pooling in Java?
Answer. Connection pooling is a mechanism where we create and maintain a cache of database connections. Connection pooling has become the standard for middleware database drivers. A connection pool creates the connections ahead of time: with a JDBC connection pool, a pool of Connection objects is created when the application server starts.
Connection pooling is used to create and maintain a collection of JDBC connection objects. The primary objective of connection pooling is to leverage reusability and improve the overall performance of the application.
Q.44. Differentiate between an Applet and a Java Application?
Answer. A Java application is a standalone program that runs directly on the JVM and starts execution from its main() method. An applet, in contrast, is embedded in a web page, runs inside the browser's JVM under the control of the applet security manager, and uses life cycle methods such as init() and start() instead of main().
Advanced Java Interview Questions – JSPs & Servlets
Q.45. What is a JSP Page?
Answer. A JSP(Java Server Page) page is a text document that has two types of text: static data and JSP elements. We can express Static data in any text-based format such as HTML, SVG, WML, and XML. JSP elements construct dynamic content.
The file extension used for a JSP source file is .jsp. A JSP page can be composed of a top-level file that includes other files containing either a complete JSP page or a fragment of one. The extension used for the source file of a JSP page fragment is .jspf.
The elements of JSP in a JSP page can be expressed in two syntaxes: standard and XML. But, any file can use only one syntax.
Q.46. What is a Servlet?
Answer. A servlet in Java is a class that extends the capabilities of servers that host applications accessed using a request-response programming model. Servlets can be used to respond to any type of request, but they commonly extend the applications hosted by web servers.
A servlet handles requests, processes them, and replies back with a response. For example, a servlet can take input from a user using an HTML form, trigger queries to get the records from a database and create web pages dynamically.
The primary purpose of the Servlet is to define a robust mechanism to send content to a client-defined by the Client/Server model. The most popular use of servlets is for generating dynamic content on the Web and have native support for HTTP.
Q.47. How are the JSP requests handled?
Answer. When the JSP requests arrive, the browser first requests a page that has a .jsp extension. Then, the webserver reads the request. The Web server converts the JSP page into a servlet class using the JSP compiler. The JSP file gets compiled only on the first request of the page, or if there is any change in the JSP file. The generated servlet class is invoked to handle the browser’s request. The Java servlet sends the response back to the client when the execution of the request is over.
Q.48. What are Directives?
Answer. JSP directives are messages to the JSP container. They are the part of the JSP source code that guides the web container while translating the JSP page into its corresponding servlet. They provide global information about an entire JSP page.

Directives are instructions that the JSP engine processes when converting a page into a servlet. Directives set page-level instructions, insert data from external files, and specify custom tag libraries. A directive can contain many comma-separated values. Directives are written between <%@ and %>.
Q.49. What are the different types of Directives present in JSP?
Answer. The different types of directives are:
- Include directive: The include directive is useful to include a file. It merges the content of the file with the current page.
- Page directive: The page directive defines specific attributes in the JSP page, such as error page and buffer, etc.
- Taglib: Taglib is used to declare a custom tag library used on the page.
Q.50. What are JSP actions?
Answer. JSP actions use constructs in XML syntax that are used to control the behavior of the servlet engine. JSP actions are executed when there is a request for a JSP page. We can insert JSP actions dynamically into a file. JSP actions reuse JavaBeans components, forward the user to another page, and generate HTML for the Java plugin.
Some of the available JSP actions are listed below:
- jsp:include: It includes a file when there is a request for a JSP page.
- jsp:useBean: It instantiates or finds a JavaBean.
- jsp:setProperty: It is used to set the property of a JavaBean.
- jsp:getProperty: It is used to get the property of a JavaBean.
- jsp:forward: It forwards the requester to a new page.
- jsp:plugin: It generates browser-specific code.
Q.51. What are Declarations?
Answer. Declarations in JSP are similar to variable declarations in Java. They are used to declare variables for subsequent use in expressions or scriptlets. To add a declaration, you must enclose it between the <%! and %> sequences.
Q.52. What are Expressions?
Answer. An expression in JSP is used to insert the value of a scripting language expression, converted into a string, into the data stream returned to the client by the web server. Expressions are written between <%= and %> tags.

The expression tag in JSP writes content to the client side and displays information in the client browser. The JSP engine evaluates the expression, converts the result into a String object, and inserts it into the implicit output object.
Q.53. Explain the architecture of a Servlet.
Answer. The core abstraction that all servlets must implement is the javax.servlet.Servlet interface. Every servlet must implement this interface either directly or indirectly, typically by extending javax.servlet.GenericServlet or javax.servlet.http.HttpServlet. Each servlet should be able to serve multiple requests in parallel using multithreading.
Q.54. State the difference between sendRedirect and forward methods?
Answer. The sendRedirect() method creates a new request, whereas the forward() method forwards the request to a new target. The scope objects of the previous request are not available after a redirect, because it results in a new request. On the other hand, the scope objects of the previous request are available after forwarding. Generally, the sendRedirect method is considered to be slower as compared to the forward method.
Applet Java Interview questions
Q.55. What is an Applet?
Answer. An applet is a Java program that is embedded into a web page. An applet runs inside the web browser and works at the client-side. We can embed an applet in an HTML page using the APPLET or OBJECT tag and host it on a web server. Applets make the website more dynamic and entertaining.
Q.56. Explain the life cycle of an Applet.
Answer.
The above diagram shows the life cycle of an applet that starts with the init() method and ends with destroy() method. Other methods of life cycle are start(), stop() and paint(). The methods init() and destroy() execute only once in the applet life cycle. Other methods can execute multiple times.
Below is the description of each method of the applet life cycle:
init(): The init() is the initial method that executes when the applet execution starts. In this method, the variable declaration and initialization operations take place.
start(): The start() method contains the actual code to run the applet. The start() method runs immediately after the init() method executes. The start() method executes whenever the applet gets restored, maximized, or moves from one tab to another tab in the browser.
stop(): The stop() method is used to stop the execution of the applet. The stop() method executes when the applet gets minimized or moves from one tab to another in the browser.
destroy(): The destroy() method gets executed when the applet window or the tab containing the webpage closes. The stop() method executes just before the invocation of the destroy() method. The destroy() method deletes the applet object from memory.
paint(): The paint() method is used to redraw the output on the applet display area. The paint() method executes after the execution of start() method and whenever the applet or browser is resized.
Q.57. What happens when an applet is loaded?
Answer. When the applet is loaded, first of all, an object of the applet’s controlling class is created. Then, the applet initializes itself and finally starts running.
Q.58. What is the applet security manager? What does it provide?
Answer. The applet security manager class is a mechanism to impose restrictions on Java applets. A browser can have only one security manager. It is established at startup, and after that, we cannot replace, overload, override, or extend it.
Q.59. What are the restrictions put on Java applets?
Answer. Following restrictions are put on Java applets:
- An applet cannot define native methods or load libraries.
- An applet cannot write or read files on the execution host.
- An applet cannot read some system properties.
- An applet cannot make network connections except to the host from which it came.
- An applet cannot initiate any program on the host which is executing it.
Q.60. What are untrusted applets?
Answer. Untrusted applets are those applets in Java that cannot access or execute local system files. By default, all downloaded applets are treated as untrusted. Untrusted applets can not perform operations such as reading, writing or deleting files from the local file system. They are not allowed to access files on the local computer and access the network connections from the computer.
Q.61. What is the difference between a ClassNotFoundException and NoClassDefFoundError?
Answer. ClassNotFoundException and NoClassDefFoundError exceptions occur when a particular class is not found during the runtime. However, they differ from each other and occur in different scenarios.
A ClassNotFoundException is an exception that occurs when we try to load a class during the runtime using methods like Class.forName() or loadClass() methods and these classes are not found in the classpath. Whereas NoClassDefFoundError is an error that occurs when a particular class is present at compile-time but missing at run time.
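The ClassNotFoundException case is easy to reproduce in a self-contained sketch (the class name below is hypothetical and deliberately not on the classpath); NoClassDefFoundError, by contrast, needs a class that was present at compile time but removed before run time, so it is not shown here:

```java
public class LoadDemo {
    public static void main(String[] args) {
        try {
            // Reflective loading of a class that is not on the classpath
            // fails at runtime with a checked ClassNotFoundException.
            Class.forName("com.example.MissingClass");
        } catch (ClassNotFoundException e) {
            System.out.println("caught: " + e.getClass().getSimpleName());
        }
    }
}
```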
Q.62. What Are The Attributes Of Applet Tags?
Answer.
- height: It defines the height of applet.
- width: It defines the width of the applet.
- align: It defines the text alignment around the applet.
- alt: It is an alternate text that is to be displayed if the browser supports applets but cannot run this applet.
- code: It is a URL that points to the class of the applet.
- codebase: It indicates the base URL of the applet if the code attribute is relative.
- hspace: It defines the horizontal spacing around the applet.
- vspace: It defines the vertical spacing around the applet.
- name: It defines a name for an applet.
- object: It defines the resource name that contains a serialized representation of the applet.
- title: It displays information in the tooltip.
Q.63. What is the difference between applets loaded from the internet and applets loaded via the file system?
Answer. When an applet is loaded from the internet, the applet gets loaded by the applet classloader and there are restrictions enforced on it by the applet security manager. When an applet is loaded from the client’s local file system, the applet is loaded by the file system loader.
Applets that are loaded via the file system are allowed to read files, write files, and to load libraries on the client. Also, they are allowed to execute processes and are not passed through the byte code verifier.
Q.64. What is the applet class loader?
Answer. When an applet gets loaded over the internet, the applet classloader loads the applet. The applet class loader enforces the Java namespace hierarchy. The classloader also guarantees that a unique namespace exists for classes that come from the local file system, and there exists a unique namespace for each network source.
When an applet is loaded by the browser over the internet, the classes of that applet are placed in a private namespace associated with the origin of the applet. After that, the classes loaded by the class loader are passed through the verifier. The verifier checks that the class file matches the Java language specification. The verifier also ensures that there are no stack overflows or underflows and that the parameters to all bytecode instructions are correct.
Q.65. What is the difference between an event-listener interface and an event-adapter class?
Answer. An EventListener interface defines the methods that an EventHandler must implement for a particular kind of event whereas an EventAdapter class provides a default implementation of an EventListener interface.
Q.66. What are the advantages of JSP?
Answer. The advantages of using the JSP are:
- JSP pages are compiled into servlets and therefore, the developers can easily update their presentation code.
- JSP pages can be precompiled.
- Developers can easily combine JSP pages to static templates, including HTML or XML fragments, with code that generates dynamic content.
- Developers can offer customized JSP tag libraries. The page authors can access these libraries using an XML-like syntax.
- Developers can make changes in logic at the component level, without editing the individual pages that use the application’s logic.
Q.67. What are Scriptlets?
Answer. A scriptlet in Java Server Pages (JSP) is a piece of Java code embedded in a JSP page between the <% and %> tags. A user can add any valid Java code between these tags.
Q.68. What is meant by JSP implicit objects and what are they?
Answer. JSP implicit objects are objects that the JSP container makes available to developers on each page. A developer can use these objects directly without declaring them explicitly. JSP implicit objects are also called pre-defined variables. The objects considered implicit in a JSP page are:
- application
- page
- request
- response
- session
- exception
- out
- config
- pageContext
Q.69. State the difference between GenericServlet and HttpServlet?
Answer. GenericServlet is a protocol-independent and generalized servlet that implements the Servlet and ServletConfig interfaces. The servlets extending the GenericServlet class must override the service() method. Finally, if you need to develop an HTTP servlet for use on the Web that serves requests using the HTTP protocol, your servlet must extend the HttpServlet.
Q.70. State the difference between an Applet and a Servlet?
Answer. An Applet is a client-side Java program that runs on a client-side machine within a Web browser. Whereas, a Java servlet is a server-side component that runs on the webserver. An applet uses the user interface classes, while a servlet does not have a user interface. Instead, a servlet waits for HTTP requests from clients and generates a response in every request.
Q.71. Explain the life cycle of a Servlet.
Answer. The servlet engine loads the servlet class on the first client request and invokes its init() method once so that the servlet is initialized. The same servlet instance then handles all subsequent requests from clients, with the container invoking the service() method for each request separately. Finally, the servlet gets removed when the container calls its destroy() method.
The life cycle of the servlet is:
- Servlet class gets loaded.
- Creation of Servlet instance.
- init() method gets invoked.
- service() method is invoked.
- destroy() method is invoked.
Q.72. Differentiate between doGet() and doPost()?
Answer. doGet(): The doGet() method appends the name-value pairs on the URL of the request. Therefore, there is a restriction on the number of characters and subsequently on the number of values used in a client’s request. Also, it makes the values of the request visible, and thus, sensitive information must not be passed in that way.
doPost(): The doPost() method overcomes the limits of the GET request. It sends the values of the request inside its body. Furthermore, there are no limitations on the number of values to be sent across. Finally, sensitive information passed through a POST request is not visible to an external client.
Q.73. What is the difference between final, finalize, and finally?
Answer. Below is a list of differences between final, finally and finalize:
- final: A keyword used to apply restrictions — a final variable cannot be reassigned, a final method cannot be overridden, and a final class cannot be extended.
- finally: A block associated with try/catch that always executes, whether or not an exception is thrown, and is typically used to release resources.
- finalize(): A method that the garbage collector may invoke on an object just before reclaiming it, intended for cleanup (deprecated in recent Java versions).
Java Developer Interview Questions
These questions are frequently asked from Java developers during the interviews:
Q.74. What is a Server Side Include (SSI)?
Answer. Server Side Includes (SSI) is a simple and interpreted server-side scripting language. SSI is used almost exclusively for the Web. It is embedded with a servlet tag. Including the contents of one or more than one file into a Web page on a Web server is the most frequent use of SSI. When a browser accesses a Web page, the Web server replaces the servlet tag on that Web page with the hypertext generated by the corresponding servlet.
Q.75. What is Servlet Chaining ?
Answer. Servlet Chaining is the mechanism where the output of one servlet is sent to the second servlet. The output of the second servlet is sent to a third servlet, and so on. The last servlet in the servlet chain is responsible for sending the response to the client.
Q.76. How can you find out what client machine is making a request to your servlet ?
Answer. There is a ServletRequest class that has functions for finding out the IP address or hostname of the client machine. The getRemoteAddr() method gets the IP address of the client machine and getRemoteHost() method gets the hostname of the client machine.
Q.77. What is the structure of the HTTP response?
Answer. The HTTP response has three parts:
- Status Code: The status code describes the status of the response. We can use it to check if the request has been successfully completed or not. In case the request fails, we can use the status code to find out the reason behind the failure. If our servlet does not return a status code, then by default, the success status code, HttpServletResponse.SC_OK is returned.
- HTTP Headers: HTTP headers contain more information about the response. For example, they may specify the date or time after which the response is considered stale, or the type of encoding used to safely transfer the entity to the user.
- Body: The body contains the content of the HTTP response. The body contains HTML code, images, etc. The body also consists of the data bytes transmitted in an HTTP transaction message immediately following the headers.
Q.78. What is a cookie? Differentiate between session and cookie?
Answer. A cookie is a small piece of data that the Web server sends to the browser. The browser stores the cookies for each Web server in a local file. Cookies provide a reliable mechanism for websites to remember stateful information or to record the browsing activity of users.
The differences between the session and a cookie are:
- The session should work irrespective of the settings on the client’s browser. The client can choose to disable cookies. However, the sessions still work because the client has no ability to disable them on the server-side.
- The session and cookies are also different in the amount of information they can store. The HTTP session can store any Java object, while a cookie can only store String objects.
Q.79. Which protocol can be used by browser and servlet to communicate with each other?
Answer. The browser uses the HTTP protocol to communicate with a servlet.
Q.80. What is HTTP Tunneling?
Answer. HTTP Tunneling is a mechanism that encapsulates the communications performed using various networks using the HTTP or HTTPS protocols. Therefore, the HTTP protocol acts as a wrapper for a channel that the network protocol being tunneled uses to communicate. HTTP Tunneling is the masking of other protocol requests as HTTP requests.
Q.81. What are the differences between sendRedirect and forward methods?
Answer. The sendRedirect() method creates a new request, whereas the forward() method forwards a request to a new target. After using a redirect, the previous request scope objects are not available because it results in a new request. While, after using the forwarding, the previous request scope objects are available. Generally, the sendRedirect method is considered to be slower compared to the forward method.
Q.82. What is URL Encoding and URL Decoding?
Answer. URL encoding is the procedure of replacing the spaces and other special characters in a URL with their corresponding hex representations. URL decoding is the exact opposite procedure.
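A short sketch using the JDK's URLEncoder and URLDecoder (note that these implement the application/x-www-form-urlencoded scheme, which encodes a space as "+" rather than "%20"):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlCodecDemo {
    public static void main(String[] args) {
        String raw = "a b&c";
        // Space becomes '+', '&' becomes its hex escape %26.
        String encoded = URLEncoder.encode(raw, StandardCharsets.UTF_8);
        System.out.println(encoded); // a+b%26c
        // Decoding reverses the transformation exactly.
        System.out.println(URLDecoder.decode(encoded, StandardCharsets.UTF_8)); // a b&c
    }
}
```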
Q.83. What is a JavaBean?
Answer. A Bean in Java is a software component that was designed to be reusable in a variety of different environments. Java beans can be visually manipulated in the builder tool. Java Beans can perform simple functions, such as checking the spelling of a document or complex functions such as forecasting the performance of a stock portfolio.
Q.84. What are the advantages of Java Beans?
Answer. Advantages of using Java Beans are
- Java Beans are portable, platform-independent, and stand for the “write-once, run-anywhere” paradigm.
- The properties, methods, and events of Java beans are controlled when exposed to an application builder tool.
- A Java Bean may register to receive events from other objects. It can also generate events that are sent to other objects.
- Beans use object serialization capabilities for gaining persistence.
Q.85. What are the different properties of a Java Bean?
Answer. There are five types of properties of a Java bean:
- Simple property: A simple property is read and written through a pair of methods: an accessor, getXXX(), and a mutator, setXXX().
- Boolean property: A simple property whose value is true or false; it is read through isXXX() or getXXX() and written through setXXX().
- Indexed property: An indexed property holds an array of values, set through a method of the form setPropertyName(PropertyType[] list).
- Bound property: A bound property generates an event when the property is changed.
- Constrained property: A constrained property generates an event when an attempt is made to change its value, and a listener can veto the change.
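A minimal sketch of a bean exposing a simple property and a boolean property (the class name and fields are illustrative):

```java
import java.io.Serializable;

// A minimal JavaBean: private state, a public no-arg constructor, and
// accessor/mutator methods following the getXXX()/setXXX() convention.
public class PersonBean implements Serializable {
    private String name;
    private boolean active;

    public PersonBean() { } // no-arg constructor required by the spec

    public String getName() { return name; }              // simple property
    public void setName(String name) { this.name = name; }

    public boolean isActive() { return active; }          // boolean property
    public void setActive(boolean active) { this.active = active; }
}
```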
Q.86. What are the steps to be followed while creating a new Bean?
Answer. The steps that must be followed to create a new Bean are:
- Create a directory for the new Bean.
- Create the Java source file(s).
- Compile the source file(s).
- Create a manifest file.
- Generate a JAR file.
- Start the BDK.
- Test
Java Interview Questions and Answers for Experienced
As an experienced Java professional, the expectations from you will be a bit higher, so you have to prepare well. The interview questions below will give you an edge over other candidates.
Q.87. Differentiate between Java Bean and ActiveX controls?
Answer.
- Java Beans is a framework used to build applications out of Java components or Beans. ActiveX is a framework for building component documents with ActiveX controls.
- A Bean is written in Java and therefore it has security and cross-platform features of Java. On the other hand, ActiveX controls require a port of Microsoft’s Common Object Model (COM) to be used outside Microsoft windows.
Q.88. What is the difference between fail-fast and fail-safe?
Answer. A fail-safe iterator works on a clone (or snapshot) of the underlying collection and therefore is not affected by modifications to the collection; it never throws a ConcurrentModificationException. A fail-fast iterator works directly on the collection and throws a ConcurrentModificationException if the collection is structurally modified while iterating. The collection classes in the java.util.concurrent package provide fail-safe iterators, while the collection classes in java.util are fail-fast.
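Both behaviors can be demonstrated in a few lines (the values are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class IteratorDemo {
    public static void main(String[] args) {
        // Fail-fast: ArrayList's iterator detects the structural
        // modification and throws ConcurrentModificationException.
        List<Integer> failFast = new ArrayList<>(Arrays.asList(1, 2, 3));
        try {
            for (Integer i : failFast) {
                failFast.add(4);
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast threw " + e.getClass().getSimpleName());
        }

        // Fail-safe: CopyOnWriteArrayList iterates over a snapshot,
        // so adding while iterating does not throw.
        List<Integer> failSafe = new CopyOnWriteArrayList<>(Arrays.asList(1, 2, 3));
        for (Integer i : failSafe) {
            failSafe.add(4);
        }
        System.out.println("fail-safe finished with size " + failSafe.size()); // 6
    }
}
```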
Q.89. What are some of the best practices related to the Java Collection framework?
Answer. Some best practices related to Java collection framework are:
- Selecting the right type of collection, based on the needs of the application, is very important for performance. For example, if we know the number of elements and it is fixed, we should use an array instead of an ArrayList.
- There are some collection classes that enable us to specify their initial capacity. Thus, if we have an estimated number of elements that will be stored, then we can use it to avoid rehashing or resizing.
- We should always use Generics for type-safety, readability, and robustness. Also, we use Generics to avoid the ClassCastException during runtime.
- To avoid having to implement the hashCode and equals methods for a custom class, prefer the immutable classes provided by the Java Development Kit (JDK), such as String or Integer, as keys in a Map.
- Try to write the program in terms of interfaces, not implementations — for example, declare a variable as List rather than ArrayList.
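The generics point above can be sketched as follows — a raw list defers the type error to runtime, while a generic list rejects the same mistake at compile time (class name and values are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    public static void main(String[] args) {
        // Without generics, a wrong element type is only caught at runtime.
        List raw = new ArrayList();
        raw.add("not a number");
        try {
            Integer n = (Integer) raw.get(0); // ClassCastException here
        } catch (ClassCastException e) {
            System.out.println("raw list failed at runtime");
        }

        // With generics, the same mistake is a compile-time error:
        // List<Integer> typed = new ArrayList<>();
        // typed.add("not a number"); // does not compile
        List<Integer> typed = new ArrayList<>();
        typed.add(42);
        System.out.println("typed list holds " + typed.get(0));
    }
}
```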
Q.90. What is DGC? And how does it work?
Answer. DGC in Java stands for Distributed Garbage Collection. DGC is used by Remote Method Invocation (RMI) for automatic garbage collection. As RMI involves remote object references across Java Virtual Machine, the garbage collection process can be quite difficult. The Distributed garbage Collector uses a reference counting algorithm to provide automatic memory management for remote objects.
Q.91. State the role of stub in RMI?
Answer. A stub in RMI (Remote Method Invocation) acts as a local representative for the client, i.e., a proxy for the remote object. The caller invokes a method on the local stub, which carries out the call on the remote object. When a stub's method is invoked, it goes through the steps below:
- It starts a connection with the remote JVM that contains the remote object.
- It then marshals the parameters to the remote JVM.
- It waits till it gets the result of the method invocation and execution.
- It unmarshals the returned value or an exception if the method has not been successfully executed.
- It returns the value to the caller.
Q.92. What is the reflection in Java, and why is it useful?
Answer. Reflection in Java is an API that we can use to examine or modify the behavior of methods, classes, interfaces of the program during the runtime. The required classes for reflection are present under the java.lang.reflect package. We can use reflection to get information about Class, Constructors, and Methods, etc.
Java Reflection is powerful, and it can be advantageous. Java Reflection enables us to inspect classes, interfaces, fields, and methods at runtime. We can do it without knowing the names of the classes, methods, at compile time.
Q.93. What is the difference between multitasking and multithreading?
Answer.
Q.94. What is the tradeoff between using an unordered array versus an ordered array?
Answer. The significant advantage of using an ordered array is that the search time in the ordered array has a time complexity of O(log n). The time complexity of searching in an unordered array is O(n). The drawback of using an ordered array is that the time complexity of insertion operation is O(n). On the other hand, the time complexity of an insertion operation for an unordered array is constant: O(1).
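The search side of the tradeoff can be sketched as follows: linear search is the only option for an unordered array, while a sorted array admits Arrays.binarySearch (values are illustrative):

```java
import java.util.Arrays;

public class SearchDemo {
    // Linear search: the only option for an unordered array, O(n).
    static int linearSearch(int[] a, int key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] unordered = {7, 1, 9, 4};
        int[] ordered   = {1, 4, 7, 9};
        System.out.println(linearSearch(unordered, 9));      // 2
        // Binary search requires a sorted array, O(log n).
        System.out.println(Arrays.binarySearch(ordered, 9)); // 3
    }
}
```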
Q.95. Is Java “pass-by-reference” or “pass-by-value”?
Answer. Java is always pass-by-value. When we pass an object to a method, we are actually passing a copy of the reference to that object: the reference itself is passed by value. A copy of the reference is handed to the method, so reassigning the parameter inside the method does not affect the caller's variable, although the method can still mutate the object that the reference points to.
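A small sketch of the distinction (class name and strings are illustrative):

```java
public class PassDemo {
    static void reassign(StringBuilder sb) {
        sb.append(" world");          // mutates the object the copy points to
        sb = new StringBuilder("x");  // reassigns only the local copy
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hello");
        reassign(sb);
        // The append is visible, the reassignment is not:
        // the reference itself was passed by value.
        System.out.println(sb); // hello world
    }
}
```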
Q.96. How can you print the content of a multidimensional array in Java?
Answer. We use the java.util.Arrays.deepToString(Object[]) method to get a string representation of the contents of a multidimensional array.
The below example shows how the deepToString() method can print the content of a multidimensional array:
import java.util.Arrays;

public class DeepToStringDemo {
    public static void main(String[] args) {
        // initializing an object array
        Object[][] obj = { { "Welcome ", " to " }, { "techvidvan", ".net" } };
        System.out.println("The string content of the array is:");
        System.out.println(Arrays.deepToString(obj));
    }
}
Output:
The string content of the array is:
[[Welcome , to ], [techvidvan, .net]]
Project-related Interview Questions for Experienced
- Explain your project along with all the components
- Explain the Architecture of your Java Project
- Versions of different components used
- What are the biggest challenges you have faced while working on a Java project?
- What is your biggest achievement in the mentioned Java project?
- Were you ever stuck in a situation with no path ahead? How did you handle that case?
- Which is your favorite forum to get help while facing issues?
- How do you coordinate with the client in case of any issues?
- How do you educate your client about problems they are not aware of?
- Do you have any experience in pre-sales?
- What were your roles and responsibilities in your last Java project?
- Which design pattern did you follow and why?
- What best practices in Java development did you follow?
Conclusion
In this tutorial of Java interview questions for experienced professionals, we covered the advanced interview questions and answers that interviewers frequently ask. These questions and answers will help you quickly crack your next Java interview.
Forums MaxMSP
Hi out there,
Wonder if Max can do this? Can I create some regions in an audio file in BIAS Peak or another editor and have Max read them for eventual use in [coll], [sflist], [sfplay], etc.? I believe AIFFs need a kludge of some sort on the part of the application to read/write region definitions, but is there an agreed-upon way, or can (B)WAV work?
Thanks,
Brian
I have been hoping for something similar as well (along with writing BWAV/other metadata) but I doubt it’s possible without someone interested enough writing an external.
I’ve researched it a bit; reading/writing that stuff seems excruciatingly difficult – though it might be possible with libsndfile, or using the actual JUCE framework (JUCE does have some built-in functions for reading/writing BWF metadata).
Sadly, the region metadata is, I believe, a proprietary digidesign chunk – I don’t know if BIAS uses the same format as Pro Tools but it seems unlikely.
Glad there’s at least one other person that wants to do things with metadata in Max, though.
Hi,
Yeah, glad to see I’m not the only one wondering where this is. I assume you mean *AIFF* metadata is proprietary? I would think the BWAV stuff would be more standard & accessible, but I recall a support thread where even 2 very popular apps (Pro Tools & Peak) were not able to agree about where to put it.
Kind of surprising there isn’t anything out there to do it in Max, since this is the one of the basic building-blocks of editing.
But I’d be glad (not being capable of writing an external at this point) if someone can point us to one, or share some insight into the issues involved.
I haven’t looked much into AIFF metadata, actually, though the format is actually based on the same standard as .wav files:
IFF is the base format (I think it’s “Interchange File Format”). .aif is an ‘AIFF’ – audio or Apple interchange file format, and .wav is ‘RIFF’ – resource interchange file format. RIFF is a microsoft format and is actually the same basic file format used for .avi’s as well.
Wikipedia can explain it better than I probably can:
Anyway, each one is comprised of ‘chunks’ which contain various bits of information. Each chunk starts with a four character code defining what the chunk contains – for instance RIFF has a ‘fmt ‘ chunk defining encoding and whatnot, while the actual audio data is stored in a ‘data’ chunk.
Pro Tools stores its region definitions in a proprietary chunk (IE there is no documentation for it, though I’ve found that people who’ve figured it out are willing to share). This chunk is labelled ‘regn’ and contains things like the region name, where the region starts and stops, etc. But it’s much much more complex than that.
Really the only major difference between AIFF and RIFF is endianness.
One plus to the way IFF files deal with chunking, though – a well written app will ignore chunks it doesn’t understand, and many chunks (but not all) don’t have to be in any particular order — most programs are perfectly happy with having a BWAV (properly known as ‘bext’) chunk come after the audio data – less of a header and more of a footer.
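To make the chunk layout concrete, here is a small sketch — written in Java rather than Max's JS purely so it runs standalone — that walks the chunk list of an in-memory RIFF/WAV image (4-character ID, 32-bit little-endian size, word-aligned payload). An AIFF walker would be identical apart from big-endian sizes:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class RiffWalk {
    // Walk the chunks of a RIFF image and return their four-character IDs.
    static List<String> chunkIds(byte[] riff) {
        ByteBuffer buf = ByteBuffer.wrap(riff).order(ByteOrder.LITTLE_ENDIAN);
        buf.position(12); // skip "RIFF", the overall size, and "WAVE"
        List<String> ids = new ArrayList<>();
        while (buf.remaining() >= 8) {
            byte[] id = new byte[4];
            buf.get(id);                 // four-character chunk ID
            int size = buf.getInt();     // payload size, little-endian
            ids.add(new String(id, StandardCharsets.US_ASCII));
            buf.position(buf.position() + size + (size & 1)); // payload + pad byte
        }
        return ids;
    }

    // A minimal fake WAV image: RIFF header plus tiny "fmt " and "data" chunks.
    static byte[] minimalWav() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.writeBytes("RIFF".getBytes(StandardCharsets.US_ASCII));
        out.writeBytes(new byte[] {26, 0, 0, 0}); // overall size (unchecked here)
        out.writeBytes("WAVE".getBytes(StandardCharsets.US_ASCII));
        out.writeBytes("fmt ".getBytes(StandardCharsets.US_ASCII));
        out.writeBytes(new byte[] {4, 0, 0, 0}); // fmt payload: 4 bytes
        out.writeBytes(new byte[4]);
        out.writeBytes("data".getBytes(StandardCharsets.US_ASCII));
        out.writeBytes(new byte[] {2, 0, 0, 0}); // data payload: 2 bytes
        out.writeBytes(new byte[2]);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        System.out.println(chunkIds(minimalWav())); // [fmt , data]
    }
}
```

A well-behaved parser built this way naturally skips chunks it does not understand — exactly the tolerance described above.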
On 26 nov. 08, at 07:50, Brian Heller wrote:
>
> Hi out there,
> Wonder if Max can do this? Can I create some regions in an audio
> file in BIAS Peak or another editor and have Max read them for
> eventual use in [coll], [sflist], [sfplay], etc.?
You should have a look at my [sfmarkers~] external. Mac only, and for
markers only (although I could add regions in the future).
->
_____________________________
Patrick Delges
Centre de Recherches et de Formation Musicales de Wallonie asbl.
p
Quote: Patrick Delges wrote on Wed, 26 November 2008 04:31
—————————————————-
>
> I’m not particularly surprised that the chunks are all different – I hadn’t looked at AIFF much at all.
> > The EBU specs explains very precisely the structure of the bext chunk.
True but overall I’m still confused about writing/reading chunks in general.
>
> p
>
> _____________________________
> Patrick Delges
>
> Centre de Recherches et de Formation Musicales de Wallonie asbl
>
>
>
—————————————————-
On 26 nov. 08, at 17:22, mushoo wrote:
>> The EBU specs explains very precisely the structure of the bext
>> chunk.
>
> True but overall I’m still confused about writing/reading chunks in
> general.
That’s another problem.
My [sfmarkers~] is not open source (the C code is too disgusting and
badly commented), but here is some JavaScript I coded a couple of
years ago. It may give you some hints. I stopped using Js as soon as I
noticed how slow it was to deal with file i/o, so this code is just a
test and is probably very buggy…
###save as addmarker.js
// sfmarkers goes JS
// pdelges@radiantslab.com
// users.skynet.be/crfmw/max
// 2005
autowatch = 1;

var theSourceFile;
var theDestinationFile;
var theMarkers;
var lastMarkerID = -1; // -1 means there is no marker

function bang()
{
  display();
}

function read (filename)
{
  theMarkers = new Array; // fresh array
  var chunkTag = new Array (4);
  var chunkSize = new Array (4);
  theSourceFile = new File (filename, "read", "AIFF");
  post ("\nFile:", theSourceFile.filename, theSourceFile.isopen, "\n");
  theSourceFile.position += 12; // skip start of header, should check file type
  if (searchMarkerChunk (theSourceFile))
  {
    readMarkerChunk (theSourceFile);
    // displayMarkers ();
  }
  else
    post ("\nopen");
  theSourceFile.close ();
}
searchMarkerChunk.local = 1;
function searchMarkerChunk (theSourceFile)
{
  var chunkTag = new Array (4);
  var chunkSize = new Array (4);
  var chunkTagString = new String;
  var chunckSizeInBytes = 0;
  var theEof = theSourceFile.eof;
  do { // let's jump from chunk to chunk
    // theSourceFile.position += parseInt(chunckSizeInBytes);
    chunkTag = theSourceFile.readbytes (4);
    chunkSize = theSourceFile.readbytes (4);
    chunckSizeInBytes = add4Bytes(chunkSize);
    chunkTagString = String.fromCharCode (chunkTag[0], chunkTag[1], chunkTag[2], chunkTag[3]);
    post ("Chunk", chunkTagString, "Size", chunckSizeInBytes, "pos:", theSourceFile.position, "\n");
  }
  while (chunkTagString != "MARK" && (theSourceFile.position += parseInt(chunckSizeInBytes)) < theEof);
  return (chunkTagString == "MARK");
}
readMarkerChunk.local = 1;
function readMarkerChunk (theSourceFile)
{
  var numberOfMarkers;
  lastMarkerID = 0;
  numberOfMarkers = theSourceFile.readint16 (1);
  post ("number of markers:", numberOfMarkers, "\n");
  for (var i = 0; i < numberOfMarkers; i++)
  {
    var aMarker = new Object;
    aMarker.id = theSourceFile.readint16 (1);
    lastMarkerID = Math.max (lastMarkerID, aMarker.id);
    aMarker.position = theSourceFile.readint32 (1);
    aMarker.nameLength = parseInt(theSourceFile.readbytes (1));
    aMarker.name = (theSourceFile.readchars (aMarker.nameLength)).join("");
    if (!(aMarker.nameLength % 2)) // beware useless "space padding" in Peak
      theSourceFile.position++;
    theMarkers.push (aMarker);
  }
  post ("lastID", lastMarkerID, "\n");
}
function display ()
{
  for (var i = 0; i < theMarkers.length; i++)
  {
    var aMarker = theMarkers[i];
    for (var j in aMarker)
      post (j, aMarker[j], "-");
    post ("\n");
  }
}
function addMarker (name, position)
{
  var newMarker = new Object;
  newMarker.id = ++lastMarkerID;
  newMarker.position = position;
  newMarker.nameLength = name.length;
  newMarker.name = name;
  theMarkers.push (newMarker);
}
function saveFile (newFilename)
{
  var chunkTag, chunkSize, chunkTagString, chunckSizeInBytes;
  theSourceFile.open (); // open the last opened file
  var theEof = theSourceFile.eof;
  theDestinationFile = new File (newFilename, "write", "AIFF");
  if (!theDestinationFile.isopen)
  {
    post ("Cannot create file", newFilename, "\n");
    return;
  }
  theDestinationFile.writestring ("FORM");
  var totalSizePosition = theDestinationFile.position;
  theDestinationFile.writeint32 (0); // to be updated later
  theDestinationFile.writestring ("AIFF");
  // now, let's copy the chunks.
  theSourceFile.position = 12; // go back at start...
  do
  { // let's jump from chunk to chunk
    chunkTag = theSourceFile.readbytes (4);
    chunkSize = theSourceFile.readbytes (4);
    chunckSizeInBytes = add4Bytes(chunkSize);
    chunkTagString = String.fromCharCode (chunkTag[0], chunkTag[1], chunkTag[2], chunkTag[3]);
    if (chunkTagString == "MARK")
      theSourceFile.position += parseInt(chunckSizeInBytes); // jump at the end of the chunk
    else {
      if (chunkTagString == "COMM")
      {
        copyChunk ("COMM", chunckSizeInBytes); // first we copy COMM
        post ("COMM ", chunckSizeInBytes, " ", theSourceFile.position, ".");
        writeNewMarkerChunk (); // then MARK
      }
      else {
        if (chunkTagString != "MARK") // MARK is already done
        {
          copyChunk (chunkTagString, chunckSizeInBytes);
          post (chunkTagString, chunckSizeInBytes, " ", theSourceFile.position, ".");
        }
      }
    }
  }
  while (theSourceFile.position < theEof);
  // update size of FORM
  theDestinationFile.position = 4;
  theDestinationFile.writeint32 (theDestinationFile.eof - 8);
  theDestinationFile.close();
  theSourceFile.close();
}
writeNewMarkerChunk.local = 1;
function writeNewMarkerChunk ()
{
  var initialPositionInFile = theDestinationFile.position;
  theDestinationFile.writestring ("MARK");
  theDestinationFile.writeint32 (0); // will be changed later
  theDestinationFile.writeint16 (theMarkers.length);
  for (var i = 0; i < theMarkers.length; i++)
  {
    theDestinationFile.writeint16 (theMarkers[i].id);
    theDestinationFile.writeint32 (theMarkers[i].position);
    theDestinationFile.writebytes (theMarkers[i].nameLength);
    theDestinationFile.writestring (theMarkers[i].name);
    if (theDestinationFile.position % 2)
      theDestinationFile.writebytes (0); // padding
  }
  // write size of chunk
  var endPosition = theDestinationFile.position;
  var chunkSize = endPosition - initialPositionInFile;
  theDestinationFile.position = initialPositionInFile + 4; // jump back
  theDestinationFile.writeint32 (chunkSize - 8);
  theDestinationFile.position = endPosition; // jump to end of chunk
}
copyChunk.local = 1;
function copyChunk(tag, size)
{
  var i;
  var buffer;
  theDestinationFile.writestring (tag);
  theDestinationFile.writeint32 (size);
  if (size < 32)
    theDestinationFile.writebytes (theSourceFile.readbytes (size));
  else
    for (i = 0; i < size; )
    {
      buffer = theSourceFile.readbytes (32);
      if (buffer.length)
      {
        i += buffer.length;
        theDestinationFile.writebytes (buffer);
      }
      else
      {
        post ("weird!\n");
        break;
      }
    }
}
add4Bytes.local = 1;
function add4Bytes (anArray)
{
  var sum = 0;
  for (var i = 0, j = 3; i < 4; i++, j--) {
    sum += anArray[i] * Math.pow(256, j);
    // post (sum, anArray[i], Math.pow(256, j), "\n"); // remove later!
  }
  return sum.toFixed(0); // huge difference with float !!!
}
#### max patch
max v2;
#N vpatcher 683 526 1224 836;
#P window setfont “Sans Serif” 9.;
#P newex 165 273 31 196617 print;
#P number 243 269 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 180 225 29 196617 open;
#P newex 185 246 62 196617 sfmarkers~;
#P message 81 68 313 196617 saveFile ti_HD_X:/Users/pdelges/Projects/sfmarkers+~/tmp.aiff;
#P message 53 129 105 196617 addMarker acso 6666;
#P message 28 111 79 196617 displayMarkers;
#P message 83 162 83 196617 read nothing.aiff;
#P newex 19 37 99 196617 bgcolor 255 227 68;
#P message 99 204 294 196617 read ti_HD_X:/Users/pdelges/Projects/sfmarkers+~/file.aiff;
#P message 211 161 310 196617 ti_HD_X:/Users/pdelges/Projects/sfmarkers+~/file.aiff;
#P message 93 184 65 196617 read file.aiff;
#P newex 59 244 69 196617 js addmarker;
#P connect 3 0 0 0;
#P connect 1 0 0 0;
#P connect 5 0 0 0;
#P connect 6 0 0 0;
#P connect 7 0 0 0;
#P connect 8 0 0 0;
#P connect 9 0 12 0;
#P connect 10 0 9 0;
#P connect 9 1 11 0;
#P pop;
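For reference, the 'MARK' payload the JavaScript above reads (a big-endian marker count, then per marker an int16 id, an int32 position, and a length-prefixed name padded to an even total) can be sanity-checked outside Max with a few lines of Python:

```python
import struct

def parse_mark_chunk(payload):
    """Parse the body of an AIFF 'MARK' chunk (big-endian throughout)."""
    (count,) = struct.unpack_from(">H", payload, 0)
    offset, markers = 2, []
    for _ in range(count):
        marker_id, position, name_len = struct.unpack_from(">hIB", payload, offset)
        offset += 7
        name = payload[offset:offset + name_len].decode("ascii")
        offset += name_len
        if name_len % 2 == 0:   # pstring is padded to an even total length
            offset += 1
        markers.append((marker_id, position, name))
    return markers
```

Note the padding rule matches the `if (!(aMarker.nameLength % 2))` test in the JS: one length byte plus an even number of name characters needs a pad byte to keep the total even.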
_____________________________
Patrick Delges
I’m too tired to read the whole thing right now, and I’m trying to avoid thinking about this project for the weekend, but I can tell you this:
That code will be fantastically useful, and thank you. Just having something that describes the broad strokes of the process is awesome. (I’ve never dealt with Js, but I’m sure it will make sense; it seems very readable and well commented.)
C74 RSS Feed | © Copyright Cycling '74 | http://cycling74.com/forums/topic/reading-region-data-from-aiffbwav/
Hey guys!
I have coded an applet that will show something like this (mine's a different color):
The code below does WORK. The mouseListener part doesn't.
import java.awt.Color;
import java.awt.Graphics;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import java.util.Random;
import javax.swing.JApplet;

public class ColoredSquares extends JApplet implements MouseListener {

    Random rand = new Random();
    private MouseEvent evt = null;
    private boolean isClicked;

    // declare instance variables
    // initialize instance variable with an array of Color objects
    Color [][] a;
    Color randC;
    int x, y;
    int myClickX;
    int myClickY;

    // generates a color
    public Color Color(Color randC) {
        // huge method, omitted to make code shorter
    }

    public void init () {
        // sets window size of the applet viewer
        setSize(800, 800);
        // fill entire applet with background color
        setBackground(Color.BLACK);
        // fill array with 8x8 color objects
        a = new Color[8][8];
        for (int x = 0; x < a.length; x++)
            for (int y = 0; y < a.length; y++)
                a[x][y] = Color(randC);
        // implement MouseListener interface
        this.addMouseListener(this); // <--- I think this line is WRONG
    } // end init method

    public void paint(Graphics g) {
        // rectangle size is determined by applet size
        // leave space at the bottom to draw a string
        int rowHeight = (getHeight() / 8) - 5;
        int colWidth = (getWidth() / 8);
        // loop through each array object and paint a rectangle
        for (int row = 0; row < a.length; row++) {
            for (int col = 0; col < a.length; col++) {
                g.setColor(a[row][col]);
                g.fillRect(col*colWidth, row*rowHeight, colWidth, rowHeight);
            }
        }
        // if mouseClicked is true, then change the color of the square that was clicked on
        if (evt != null && isClicked == true) { // if a square is clicked on
            // the square turns red
            g.setColor(Color.red);
            int col = 0;
            int row = 0;
            g.fillRect(col*colWidth, row*rowHeight, colWidth, rowHeight);
        }
    } // end paint method

    public void mouseClicked(MouseEvent evt) {
        // save the evt pointer in my global variable
        this.evt = evt;
        // stores the current click of the mouse, X and Y position
        myClickX = evt.getX();
        myClickY = evt.getY();
        // NOT sure what to do HERE V V below V V
        // if an array element is clicked
        //     return isClicked = true
        // else
        //     return isClicked = false
        repaint();
    } // end mouseClicked method

    public void mouseExited(MouseEvent evt) {}
    public void mouseEntered(MouseEvent evt) {}
    public void mouseReleased(MouseEvent evt) {}
    public void mousePressed(MouseEvent evt) {}
} // end class
What I want to happen is when I click on one of the squares in the applet, I want to change the color of the array element corresponding to that square to RED.
I don't believe any other method is to be added-- use the methods already stated. No JPanel recommendations please. I'm not supposed to use that.
I know that in my mouseListener I want to find WHICH index of the array was clicked. Then return that the click is true. In my paint, I want "if click is true, then set color to red, and paint that index". Or something like that...
Please at least give me an example of a mouseListener with a multidimensional array. >_<
Any help would be much appreciated. | http://www.javaprogrammingforums.com/java-applets/25191-mouseclicked-color-objects-arrays.html | CC-MAIN-2014-15 | refinedweb | 481 | 70.63 |
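For what it's worth, the arithmetic being asked about (mapping a click to a grid cell) is just integer division of the click coordinates by the cell size. A Python sketch of that logic, with the 8x8 grid and 800x800 size mirroring the applet (the function name is made up, and the 5px label strip from the applet is ignored here):

```python
def hit_test(click_x, click_y, width, height, rows=8, cols=8):
    """Map a pixel click to (row, col) in a rows x cols grid, or None if outside."""
    col_width = width // cols
    row_height = height // rows
    row, col = click_y // row_height, click_x // col_width
    if 0 <= row < rows and 0 <= col < cols:
        return row, col
    return None
```

In the applet the resulting pair would index `a[row][col]`, which is the element to recolor before calling `repaint()`.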
Question: Refer to Problem 12.11. If OhYes was a load fund, what would be the HPR?
Refer to Problem 12.11. If OhYes was a load fund with a 2% front-end load, what would be the HPR?
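Problem 12.11's figures are not reproduced in this excerpt, but the adjustment the question asks for is generic: with a front-end load, each invested dollar buys only (1 - load) dollars of NAV, so the whole-period payoff is scaled down by that factor. One common textbook convention, sketched in Python with made-up numbers (some texts instead apply the load to the offer price; conventions vary):

```python
def hpr(nav_begin, nav_end, distributions=0.0, front_load=0.0):
    """Holding-period return on a fund purchase.

    With a front-end load, each dollar invested buys (1 - front_load)
    dollars of NAV, scaling the entire payoff by that factor.
    """
    return (1.0 - front_load) * (nav_end + distributions) / nav_begin - 1.0

# Illustrative numbers only (not Problem 12.11's): a no-load HPR of 15%
# on NAV 20 -> 22 plus 1.00 distributed drops to about 12.7% with a 2% load.
```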
TextAPI is a library and accompanying tool that allows conversion between binary shared object stubs and textual counterparts. The motivations and use cases for this are explained thoroughly in the llvm-dev proposal [1]. This initial commit proposes a potential structure for the TAPI library, including support for reading/writing text-based ELF stubs (.tbe) and preliminary support for reading binary ELF files. The goal for this patch is to ensure the project architecture appropriately welcomes integration of Mach-O stubbing from Apple's TAPI [2].
Added:
[1]
[2]
Not looked at common practice, but I noticed that according to the docs "*- C++ -*" is unnecessary in .cpp files.
If you use llvm::errc, you don't need the std::make_error_code here.
(uint64_t)0u looks weird to me. Do you need to be explicit about the type here?
I'm guessing this is some magic in the YAML code? I don't really understand it, if I'm honest.
Maybe to make this look slightly less hideous this should be compressed into one line:
EXPECT_STREQ(Line1.str().data(), Line2.str().data())
Move this inline.
Okay, makes sense.
Aside: I reckon StringRef should be null terminated - it's a reference to a string, either a C-style literal one (complete with '\0') or a std::string (which is null-terminated in modern C++).
Thanks for being patient with me, I'm really appreciating your feedback!
Yes, otherwise it's ambiguous. I believe the the u can be safely dropped.
Yes. YAML can't tell the difference between ELFArch and uint16_t since ELFArch is just a typedef based on uint16_t. uint16_t has a default YAML implementation, so there must be a LLVM_YAML_STRONG_TYPEDEF to define a distinct type with overridden implementation. I avoided using the strong YAML typedef in ELFStub because things get rather messy if you try to use it for things other than YAML. Hence, the need for the reference cast here. It forces our standard type to be treated as a distinct YAML type.
Sorry, that slipped through.
Normally it would be, but as I understand it's not guaranteed to be. Since it's just a reference to a character array, it's not null terminated if the reference is to a substring that isn't null terminated. Since I'm splitting on '\n', the last character of the substring isn't a null terminator. I updated the code comment to correct my wording.
I've got a few more nits, but otherwise I'm happy with this. Somebody else should definitely take a look too though.
Don't use else after an if that returns. See.
Don't use else after an if that always returns.
createStringError is also overloaded to work with printf-style formatting, so you could do all of this simply in one line, with something like the following:
return createStringError(errc::invalid_argument,
"ELF incompatible with file type `%s`\n",
fileTypeToString(NewType).c_str());
Don't use else after return.
No need for the braces here.
Trailing full-stop needed on this comment.
Great, LGTM, but please get somebody else to sign off too.
Some drive by comments again. Be good to get @ributzka to take a final look.
This file could use some more comments in general.
Elaborate on what needs to be done to fix the TODO please.
I'd like to try to avoid "v1" style naming for things. Any hope of an explicit version field instead?
Can you give a clear outline as to what the ELFInterfaceFile class does that ELFStub does not? At the moment it seems to merely be an extremely verbose way of keeping track of FileType manually. In llvm-objcopy we have to keep track of this because there is an expectation that the output type match the input type but I don't think you have that constraint here. One output type is assumed if none is given, .tbe.
I think a better way to handle the different formats is to have a ReaderWriter class and methods for each format. That can then provide the description, and those can be registered in the command line tool in an appropriate fashion. The user should be able to know which type of object they read in by checking which reader/writer they used successfully.
What is the intended semantics of an interface without a stub?
Can this be an enum class or "scoped enum"? Same for all enums.
We decided offline that we needed to keep track of weakness, correct?
offline we decided we needed to keep track of dt_needed as well, right? I'll accept a TODO here for that.
Where ever you use this, you can most likely replace it with a lambda or a direct comparison.
I don't think using a raw_ostream is what we want here. It's probably fine but it would be better to use some kind of abstraction over Buffers that allowed both a raw_ostream to be used under the hood *and* a MemoryBuffer or FileBuffer. Importantly you avoid a copy of the file by using a FileBuffer on most systems.
The reality is that you can't set to any non-single value.
An Invalid FileType doesn't make sense. Why do you need this?
All should probably not be an enum. The FileType enum is currently mixing up the notion of a value with a notion of a set IMO.
Define move constructor as well.
Use '=' instead of loop and push_back.
Removed:
Changed:
Discussed offline, removed this since it provides abstraction that isn't clear will be necessary.
Noted. It's unclear what that will look like right now, so I'll keep this as-is until we reach the point that this becomes a concern.
I don't think the tapi namespace is required in general for this library, but that is just a personal preference.
Thanks for changing the name. This will make it easier to rebase my patch.
Why do you use a DenseMap here? Wouldn't a switch be enough?
StringSwitch from ADT would be an alternative here too.
Do you really depend on Object? This will cause a circular dependency when libObject starts using TextAPI to read TBD/TBE files.
This is not Support anymore ;-)
LGTM with nits. Please make sure Jurgen and you agree on the namespace and folder scheme first but other than than this LGTM.
Please add a TODO for symbol versioning.
Oh and you'll need to remove the dependency on libObject as discussed. I pointed out that only changes I think you'll need to make but you might find a few more.
Use BinaryFormat
Make sure to remove this as well.
And this.
Instead of providing this function, you could make the type of Symbols std::set, which is a sorted set.
const VersionTuple TBEVersionCurrent(1,0);
is more conventional.
Let's use fully qualified name so that it doesn't look like we have ::tapi namespace.
using namespace llvm::tapi::elf;
nit: I believe you can omit -> bool.
StringRef Buf(Data);
Unfortunately std::set doesn't allow in-place modification of elements, so it won't work with YAML. I'll look into alternative options.
If a set is sorted by Name, and if you don't mutate that string, you are fine, no? You can safely do in-place modification to other members.
Elements of a std::set are const by default. This requires either using const_cast or declaring some members as mutable. I feel there's a cleaner way to do this.
I realized that since I'm using CustomMappingTraits I can populate an ELFSymbol before inserting it into the set.
I do not have any further comments. LGTM
We are seeing a failure of YAMLWritesNoTBESyms on the WebAssembly waterfall due to this change:
FAIL: LLVM-Unit :: TextAPI/./TapiTests/ElfYamlTextAPI.YAMLWritesNoTBESyms (3237 of 44396)
******************** TEST 'LLVM-Unit :: TextAPI/./TapiTests/ElfYamlTextAPI.YAMLWritesNoTBESyms' FAILED ********************
Note: Google Test filter = ElfYamlTextAPI.YAMLWritesNoTBESyms
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from ElfYamlTextAPI
[ RUN ] ElfYamlTextAPI.YAMLWritesNoTBESyms
/b/build/slave/linux/build/src/src/work/llvm/unittests/TextAPI/ELFYAMLTest.cpp:42: Failure
Expected: Line1.str().data()
Which is: "NeededLibs: "
To be equal to: Line2.str().data()
Which is: "NeededLibs: [ libc.so, libfoo.so, libbar.so ]"
/b/build/slave/linux/build/src/src/work/llvm/unittests/TextAPI/ELFYAMLTest.cpp:42: Failure
Expected: Line1.str().data()
Which is: " - libc.so"
To be equal to: Line2.str().data()
Which is: "Symbols: {}"
/b/build/slave/linux/build/src/src/work/llvm/unittests/TextAPI/ELFYAMLTest.cpp:42: Failure
Expected: Line1.str().data()
Which is: " - libfoo.so"
To be equal to: Line2.str().data()
Which is: "..."
We suspect that it's due to the fact that LLVM_BUILD_LLVM_DYLIB and LLVM_LINK_LLVM_DYLIB are set on this builder.
It seems that with the dylib use case the SequenceElementTraits from include/llvm/Support/YAMLTraits.h are in effect and we get:
NeededLibs:
- libc.so
- libfoo.so
- libbar.so
Whereas most builds get (and the test expects):
NeededLibs: [ libc.so, libfoo.so, libbar.so ]
Attempted fix in | https://reviews.llvm.org/D53051?id=175727 | CC-MAIN-2020-34 | refinedweb | 1,510 | 68.26 |
Levenshtein as a Fuzzy Match in Python
Previously, I have written about using the Levenshtein algorithm as a way to produce a fuzzy match between two strings in Java, and posted a Ruby implementation of Levenshtein to Guthub. Now it's Python's turn, but thankfully there is an excellent library for Levenshtein already written for Python. The only missing piece was that I wanted to have the result as a percentage instead of a count of edits between the two strings.
Before getting started using the python-Levenshtein, you'll need to install the dependency. For Ubuntu, this is accomplished with
sudo apt-get install python-Levenshtein. After the dependency is resolved, add
import Levenshtein to your Python code and then it's off to the races matching strings with Levenshtein.
The translation from number of edits to a percentage is the same I used in the Java version () and appears on line six of the full listing below; in pseudo-code:
1.0 - (distance(str1, str2) / max(len(str1), len(str2)))
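To sanity-check that formula without installing anything, here is a dependency-free edit distance plus the percentage conversion (pure Python, far slower than python-Levenshtein, but handy for verifying a value or two):

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    # Same percentage conversion as the pseudo-code above.
    return 1.0 - float(edit_distance(a, b)) / max(len(a), len(b))

# edit_distance("kitten", "sitting") is 3, so the match is 1 - 3/7, about 0.57.
```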
The data set used is the same as the Distribution of JSON Values With Python from the GunStockMarket and is available here.
Saving the code below into a file named levenshtein_example.py and the sample data into a file named GunStockMarket.json, it may be invoked with:
python levenshtein_example.py < GunStockMarket.json | more
The output should match (of course there are many more lines of output if the program is allowed to execute for the entire data set):
0.82: FS: Springfield Armory M1A 308::FS: Springfield Armory M1A Loaded
0.81: FS: Springfield Armory M1A 308::FS: Springfield Armory M1 GARAND
0.83: FS: Springfield Armory M1A 308::SPringfield Armory M1A 308
0.88: FS: Springfield Armory M1A 308::FS: Springfield Armory M1A #9826
0.84: FS: Springfield Armory M1A 308::FS: Springfield Armory M1A #9222
0.81: FS/FT: M1 Garand::FS: M1 Garand
0.81: FS/FT: M1 Garand::FS: M1 Garand
0.81: FS/FT: M1 Garand::FS: M1 Garand
0.80: FS/FT: M1 Garand::FS/FT: H&R M1 Garand
0.81: FS/FT: M1 Garand::FS: M1 Garand
0.75: FS/FT: M1 Garand::FS: M1 garand
0.75: FS/FT: M1 Garand::FS: M1 garand
0.77: FS: Mosin Nagant M91/30::FS: Mosin Nagant M91/30 Soviet
0.81: FS: Mosin Nagant M91/30::FS/FT: mosin nagant M91/30
0.96: FS: Mosin Nagant M91/30::FS: Mosin nagant M91/30
0.96: FS: Mosin Nagant M91/30::FS: Mosin nagant M91/30
0.76: Soviet Mosin Nagant M91/30 762x54R::Mosin Nagant M91/30 7.62x54R
0.76: Soviet Mosin Nagant M91/30 762x54R::Mosin Nagant M91/30 7.62x54R
0.77: FS/FT: Mosin Nagant M38::FS/FT: mosin nagant M91/30
The Python code is:
import sys
import json
import Levenshtein

def compare(first, second):
    return 1.0 - (float(Levenshtein.distance(first, second)) / float(max(len(first), len(second))))

titles = []
for line in sys.stdin:
    event = json.loads(line)
    titles.append(event["title"].encode('utf-8'))

for title1 in titles:
    for title2 in titles:
        result = compare(title1, title2)
        if result >= 0.75 and title1 <> title2:
            print ("%0.2f" % result) + ": " + title1 + "::" + title2
Agenda
See also: IRC log
<trackbot> Date: 17 June 2010
<scribe> meeting: TAG
<scribe> scribenick: Ashok
<scribe> scribe: Ashok Malhotra
Noah: I will be available next week but not the following 2 weeks. So, I want to cancel the mtgs for July 1 and July 8
... I will send mail re. summer calls
... HT can you scribe next week?
ht: Yes, subject to usual constraints
<noah> Henry will scribe next week
Noah: Larry, we are still waiting for minutes from May 27, correct?
Larry: Yes
Noah: f2f minutes need to be approved. Let's do it next week.
<jar> Still looking at f2f minutes
Noah: I'm working with Raman on hosting October mtg
<noah> NM: Summarizes Raman's concerns about F2F scheduling
<noah> HT: Endorses sense of Noah's reply... F2F meetings are useful, this one is good
Larry: I updated the note a bit.
... The docs we reference are a bit incomplete
... I got a positive review from Ned Freed and Graham Klyne.
<noah> HT: The one we reviewed at the F2F was the blog post ()
Larry: more to be said on registering MIME types
... affects a lot of issues
... version indicators, sniffing and whether metadata is authoritative
... language of this doc does not talk about licences or authority
... need a clearer statement of what MIME means and how to use it on the Web
... 2 opportunities
... 1) work with Ned and Graham to update MIME doc and MIME registration procedures
... would be really helpful
... 2) proposal for new IETF WG -- HTTP Application Security to be discussed in mtg late July
<ht> HASMAT
<Yves> I gave a pointer to the HASMAT BOF at next IETF during the f2f
Noah: Thank you, Larry ... very helpful
<Zakim> ht, you wanted to mention override
HT: Very valuable... uptake from IETF very encouraging
... need to add section on overriding media-types
<timbl__> Examples?
<timbl__> VXML? HTML5?
<jar> Link: header has 'type' feature relating to Content-type... but it means "you might expect to get something of this type" I think (should check)
<jar> so maybe not a problem
HT: regardless of response you get use this media-type or if there is no media-type specified use this media-type
<ht> XInclude, xml-stylesheet PI, XProc
<noah> From RFC 2616:
<noah> If and only if the media type is not given by a Content-Type field, the recipient MAY attempt to guess the media type via inspection of its content and/or the name extension(s) of the URI used to identify the resource.
Larry: There were some concrete suggestions for spec to be updated at the end of the BOF
<Yves> see also
Larry: If you have out-of-band info about media-type you shd use that in an Accept Header
Noah: Would you update 2616 ?
<ht> Guys, remember that HTTP-bis is underway and it _is_ updating 2616
<Zakim> ht, you wanted to explain the XInclude example
HT: I will explain the XInclude case and I hope you will agree it does the right thing
<jar> ( sec. 5.4 "this is only a hint" )
HT: I'm ignoring media type and doing something different because that's what the includer wants
Tim: When you break the level of abstraction you are on dangerous ground and user shd be warned
Larry: If you say it says it's XML but I will treat as plain text
<Zakim> ht, you wanted to say I hear Tim agreeing with me that doing this has to be called out for special discussion
ht: I agree with Tim ... a new doc about media-types shd say something about this kind of behaviour and it shd say what Tim said ... be careful when breaking levels of abstraction
... The efficient XML encoding is not an encoding ... tells you how to build an infoset but not a content encoding
<Yves> that's why EXI can't be used as a Transfer-Encoding (and it is marked that way in the IANA registry)
<Zakim> noah, you wanted to say: I think Larry's example exposes why we treated doing EXI as a content-encoding
Larry: You also want to think abt overriding the character encoding
<ht>
<Zakim> timbl__, you wanted to talk about breaking levels for example accessing the character sequence
Noah: It would be very different for XInclude to say: ignore the content type, use this, as opposed to checking whether the media type is within this bounded range and then playing the level-breaking tricks
<noah> Noah: specifically, the XML media type says that what you get >is< characters, so treating your information as such isn't ignoring the media type
Tim: Gives examples of level-breaking
Noah: Let's talk abt how to cooperate with IETF
Larry: My time is limited... can someone else do this?
Noah: If we pick this back up in Sept will we have lost an opportunity?
<noah> Maastricht is late July?
Larry: Would be good to weigh in on HASMAT
<noah> YL: Likely scope of HASMAT work is much broader than media types & sniffing. I will be there and can convey messages.
Larry: Please make suggestion
YL: I don't see a lot of interest in IETF to work on the specs ... I think it's very important
Noah: Shd Yves take an action to gather a TAG position on how to handle this and then represent us in Maastricht?
... Larry can you work on this the next week?
<masinter> yes
<noah> . ACTION Yves to coordinate TAG positions on media type related work with IETF, and to represent TAG at IETF meetings in Mastricht
<noah> ACTION Yves to coordinate TAG positions on media type related work with IETF, and to represent TAG at IETF meetings in Mastricht Due: 2010-07-20
<trackbot> Created ACTION-447 - Coordinate TAG positions on media type related work with IETF, and to represent TAG at IETF meetings in Mastricht Due: 2010-07-20 [on Yves Lafon - due 2010-06-24].
<noah> ACTION-447 Due 2010-07-20
<trackbot> ACTION-447 Coordinate TAG positions on media type related work with IETF, and to represent TAG at IETF meetings in Mastricht Due: 2010-07-20 due date now 2010-07-20
<noah> From 9 June 2010 F2F discussion: RESOLUTION: HST to send the contents of to ietf-http-wg, cced to public-html and www-tag, with minor fixes given above
Noah: Originates from Action-387
<noah> Response from Mark Nottingham:
<noah> HT: I still prefer what Yves wrote, but they've responded, and we should thank them and move on.
HT: ... we got some of what we wanted
<noah> From the response from Mark:
<noah> In practice, currently-deployed servers sometime provide a Content-Type header which does not correctly convey the intended interpretation of the content sent, with the result that some clients will examine the response body's content and override the specified type.
Yves: The new text was more in line with ... explains motivation for changes ... security breaches
Noah: I have trouble with -- with the result that some clients will examine the response body's content and override the specified type.
... perhaps this is minor
HT: Not worth bothering with this
Noah: Henry can you send thanks?
HT: Yes
<noah> close ACTION-387
<trackbot> ACTION-387 Review JK/NM's stuff on sniffing, authoritative metadata, self-describing web, incl. closed
Noah: Do you want to discuss this? Next step in announcing IRI everywhere
Larry: Some movement in IETF in unifying IRI
... I cannot separate the politics from the technical substance
<noah> I want to be sure we're getting the technical details of what Larry is saying...do we need to pause for the scribe?
<masinter> there are several technical issues that need to be worked somewhere
larry: Several technical issues
<masinter> the most important is around IDNA and domain names
<masinter> because IDNA doesn't use hex-encoded UTF-8 but rather Punycode
<masinter> Internationalization of Domain Names for Applications
<masinter> second around BIDI
Noah: What should be TAG's role?
... Shall we close the action?
<masinter> suggest closing this action, leaving issue open
<noah> OK
<jar> Larry's plan for closing the issue:
Noah: I will not schedule discussion unless someone asks?
<ht> close ACTION-370
<trackbot> ACTION-370 HST to send a revised-as-amended version of to the HTTP bis list on behalf of the TAG closed
Raman: Has anyone else read the message from Adam Barth?
... it's a question of layering of specs
... URI spec tells you how a URI is written and interpreted
... if you think about which spec goes first
... how the byte stream is interpreted
<masinter> hard to separate logistics issues from technical issues, but the technical issues need to be worked, independently of where. IDNA, BIDI presentation, and several others.
Noah: Closing the action is consistent with your advice
Raman: I suggest the TAG read Adam Barth's note and then we can discuss
<noah> Adam Barth's message:
Noah: I will not schedule discussion unless someone asks
<masinter> Adam's message focuses on the logistical approach, but I'm afraid that the layering isn't actually appropriate, doesn't match how anyone implements IRIs today
HT: Please read Adam's message and let's discuss next week
<noah> . ACTION: Noah to schedule discussion of on 24 June
<noah> ACTION: Noah to schedule discussion of on 24 June [recorded in]
<trackbot> Created ACTION-448 - Schedule discussion of on 24 June [on Noah Mendelsohn - due 2010-06-24].
<noah> close ACTION-411
<trackbot> ACTION-411 Take the next step on announcing IRIEverywhere closed
Raman: I will send confirmation of f2f to list
Noah: I would like to close ACTION 403
<noah> I think Dan's note resolves it.
<noah> close ACTION-403
<trackbot> ACTION-403 Ensure that TAG responds to Murata Makoto's request for RelaxNG Schemas for XHTML (self-assigned) closed
<masinter> action-425?
<trackbot> ACTION-425 -- Larry Masinter to draft updated MIME finding(s), with help from DanA, based on www-tag discussion -- due 2010-05-31 -- PENDINGREVIEW
Noah: Larry, should we close 425?
<noah> close ACTION-425
<trackbot> ACTION-425 Draft updated MIME finding(s), with help from DanA, based on www-tag discussion closed
<masinter> the MIME update should cover more about fragment identifiers
<noah> Noah has permission to close 441 after sending note.
Noah: Sent note re. frag-id. Got supportive reply from Jonathan. If I don't hear I will send note in a couple of days.
<Yves> I agree with Larry that mime reg form is missing fragment handling
<jar> issue-1?
<trackbot> ISSUE-1 -- Should W3C WGs define their own media types? -- pending review
Larry: Wondering if MIME-type stuff should be opened as an issue
<noah> ISSUE-9?
<trackbot> ISSUE-9 -- Why does the Web use mime types and not URIs? -- closed
<noah> ISSUE-3?
<trackbot> ISSUE-3 -- Relationship between media types and namespaces? -- closed
<noah> w3cMediaType-1 : Should W3C WGs define their own media types?
Noah: We have a couple of closed issues that are related ... see above
<noah> ISSUE-1?
<trackbot> ISSUE-1 -- Should W3C WGs define their own media types? -- pending review
Noah: Larry do you still think we need a new issue?
jar: Looks like a new issue to me
<jar> . OK, but *how* should W3C WGs define their own media types?
Noah: Text for new issue?
<noah> Mime & Web?
Larry: MIME and Web
Noah: Actually, MIME-types?
<noah> Use of MIME with the Web?
<noah> for the Web?
<noah> ISSUE: The role of MIME in the Web Architecture
<trackbot> Created ISSUE-66 - The role of MIME in the Web Architecture ; please complete additional details at .
<noah> Shortname: mimeAndWeb-55
<noah> Make sure use of MIME capabilities, typing, etc. in Web architecture is appropriate, and coordinate with IETF.
<noah> ISSUE-66?
<trackbot> ISSUE-66 -- The role of MIME in the Web Architecture -- open
Noah: We will have a call next week
... and after that not for two weeks | http://www.w3.org/2001/tag/2010/06/17-minutes | CC-MAIN-2014-42 | refinedweb | 2,002 | 59.84 |
Microsoft Open Technologies, Inc. (MS Open Tech) has released a new update to the Azure Toolkit for Eclipse with Java. Have a look at the post on msopentech.com for full details.
In January of last year, Microsoft Open Technologies, Inc. (MS Open Tech) launched VM Depot, a community-managed repository of open source virtual machine images for deployment on Microsoft Azure. The vision we articulated for this repository is that it be a place where "the community can build, deploy and share their favorite Linux configuration, create custom open source stacks, work with others and build new architectures for the cloud that leverage the openness and flexibility of the Windows Azure platform."
Today, I am happy to report that VM Depot crossed the threshold of 1,000 images!
MS Open Tech's success in attracting image publishers to VM Depot has encouraged us to turn our attention to making the site and its images even easier to use. Whatever it is you are looking for, there is a good chance that you will find it on VM Depot and thus can deploy it to Azure. Earlier this week, we announced a new search feature to help you more quickly home in on the right image for your needs. We will continue to work on improving integration with the Azure Management Portal to make it easier to deploy VM Depot images using a web browser.
For those who prefer to use the command line for deployment and management, we support that, too. We continue to make improvements to the Microsoft Azure Node.js SDK, as we wish to ensure that developers will continue to have full access to VM Depot and Azure regardless of which operating system they are working on.
For those just getting started with VM Depot, we have created a Site Walkthrough. Keep an eye out for a set of materials and documentation in the coming weeks which will help you make the most of this repository as it continues to expand.
And let us know if there are specific areas that need more attention or support.
Oracle Self Service Kit
How to use Oracle images on Windows Azure
Pricing Details
We're excited to be providing this important service in partnership with Oracle. We'd like to hear what you have to say! Please share comments here.
Basic search functionality has not changed. You can still type into the search box on VM Depot and hit return. The results will include all virtual machines that contain your search term in their description, title, tags, or other common fields.
As stated in the Blink 2014 goals, the Blink team considers Pointer Events one of its priorities for improving the mobile Web platform experience. It has recently shown evidence of its commitment to Pointer Events by checking touch-action functionality into the code base and making it available through an experimental flag.
Likewise, the Mozilla Firefox team has approved a patch submitted by Nick Lebedev from Akvelon that implements the same functionality. This is the result of months of great work in the OSS community, where multiple engineers from different companies contributed their design ideas and engineering insights.
Jaspersoft is an open source Java-based Business Intelligence suite that has experience in the cloud – to date, they have over 500 customers running Cloud solutions. Jaspersoft has put together this great video of the JasperReports server visualizing a million rows of GitHub archive data to show how Windows Azure, NoSQL, and open source Java BI solutions can work together.
This video shows Jaspersoft accessing MongoDB via their native connector on Windows Azure. The Jaspersoft native connector provides near real-time connectivity, providing great performance for even complex visualizations, as you’ll see in this video:
Jaspersoft runs on both Linux and Windows, meaning it can run on either the Windows or Linux platform on Windows Azure as well. Jaspersoft also has native connectors for several other data sources, including Microsoft SQL Server as well as a variety of other relational data platforms and formats, and Hadoop. Because Jaspersoft is written in Java, it can be a key part of any solution that you create using our other Windows Azure-enabled Java tools as well.
Do you use Jaspersoft on Windows Azure, or have a Java solution running on Windows Azure that you’d like to share? Let us know in the comments and maybe we’ll highlight your solution in a future post!
From the ActorFx team:
Brian Grunkemeyer, Senior Software Engineer, Microsoft Open Technologies Hub
Joe Hoag, Senior Software Engineer, Microsoft Open Technologies Hub
Today we’d like to talk about yet another ActorFx example that shows the flexibility and tests the scalability of the ActorFx Framework.
ActorFx provides an open source, non-prescriptive, language-independent model of dynamic distributed objects for building highly available data structures and other logical entities via a standardized framework and infrastructure. ActorFx is based on the idea of the mathematical Actor Model for cloud computing.
The example is a social media example called Fakebook that demonstrates a simple way to apply actors to large-scale problems by deconstructing them into small units. This demo highlights ActorFx’s ability to dramatically simplify cloud computing by showing you that surprisingly few concepts are necessary to build a scalable and interesting application with a relatively simple programming model. The example is part of the latest build on the ActorFx CodePlex site.
The ActorFx Framework lets developers easily build objects for the cloud, called actors. These actors allow developers to send messages, respond to messages, and create other actors. Our Actor Framework adds in a publish/subscribe mechanism for events as well. Using these, we have built the start of a Base Class Library for the Cloud, including CloudList<T> and CloudDictionary<TKey, TValue> modelled after .NET collections. We’ve harnessed the ActorFx Framework’s pub/sub mechanism to build an ObservableCloudList<T> that is useful for data binding collections to UI controls. These primitives are sufficient to build a useful system we call Fakebook, a sample social media application.
Social media apps have many properties that map quite well to actors. Most data storage is essentially embarrassingly parallel – users have their own profile, their own photos, their own list of friends, etc. Whenever operations span multiple users, such as making friends or posting news items, this requires calling into other people’s objects or publishing messages to alert them to new information. All of these map well to the Actor Framework’s concept of actors, including message passing between actors and our publish/subscribe event pattern.
Let’s decompose a social network into its constituent parts. At the core, a social network provides a representation of a person’s profile, their friends, their photos, and a newsfeed. All of this is tied together using a user name as an identifier. Other features could be added on top of this, such as graph search, a nice timeline view, the ability to play games, or even advertising. However, those are out of scope for our Fakebook sample, which is demonstrating that the core parts could be built using actors.
See Figure 1 for a clear breakdown of the UI into constituent parts.
Figure 1 - Fakebook UI Design
The core idea of Fakebook is to model a user as a set of actors. In our model, a Fakebook user is represented by the following actors:
· A person actor that tracks their name and all basic profile information (like gender, birthday, residence, etc.).
· A friends list, which is a CloudList<String> containing the account name of all friends.
· A photos dictionary, which is a CloudStringDictionary<PictureInfo>, mapping image name to a type representing a picture and its interesting characteristics (such as author, licensing information, etc).
· A set of news posts made by the author, as a CloudList<String>.
· A newsfeed actor that aggregates posts from all of a person’s friend’s news feeds.
Interaction between actors can be done in one of two ways: actor-to-actor method calls (either via IActorProxy’s Request or Command methods), or via events sent via our publish/subscribe mechanism. For Fakebook, we use a mix of approaches. When constructing other actors or adding friends, we use IActorProxy’s Request.
The newsfeed actor works entirely based on publish/subscribe events. The newsfeed actor is subscribed to all of the person’s friends’ news posts. Similarly, the newsfeed actor is also subscribed to the person’s friends list, so whenever a friend is added or removed (unfriended), the newsfeed actor knows to subscribe or unsubscribe as appropriate. For interacting with the client, we also use publish/subscribe messages. The client can data-bind an ObservableCloudList<T> to a UI control like a ListBox. This allows users to seamlessly push updates from the actor in the cloud to the client-side UI. See Figure 2 for an example showing how publish/subscribe messages are sent between actors as well as from actors to the client UI.
Figure 2 - Fakebook Publish/Subscribe messages: Actor-to-Actor and Actor-to-Client
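To make the subscribe-and-notify flow in Figure 2 concrete, here is a minimal sketch of the pattern, written in Python purely for brevity. The class and method names here are invented for illustration and are not part of the ActorFx API: an observable list publishes change events, and a newsfeed aggregates by subscribing to a friend's post list.

```python
class ObservableList:
    """A list that publishes change events to its subscribers."""
    def __init__(self):
        self._items = []
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def add(self, item):
        self._items.append(item)
        # publish the change event to every subscriber
        for notify in self._subscribers:
            notify(item)


class NewsFeed:
    """Aggregates posts by subscribing to each friend's post list."""
    def __init__(self):
        self.entries = []

    def follow(self, posts):
        # new posts flow into this feed as they are published
        posts.subscribe(self.entries.append)


bobs_posts = ObservableList()
feed = NewsFeed()
feed.follow(bobs_posts)
bobs_posts.add("Hello from Bob")
print(feed.entries)  # ['Hello from Bob']
```

In the real system the publisher and subscriber live in different actors (and the client UI is just another subscriber), but the shape of the interaction is the same.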
We’ve extended support for LINQ queries to allow transforming observable collections from one type to another. This is very useful for bridging from cloud-based data types (such as account names stored as strings) to rich client-side view model types (such as a FakebookPerson object). LINQ’s Select can be used to change an ObservableCloudList<String> into an IEnumerable<FakebookPerson>, and we’ve done the engineering work to preserve the event stream of updates as well. Consider the following code:
private ObservableCloudList<String> _friends = …;
// Here, we need an ObservableCloudList<String> converted to an observable sequence
// that also passes through INotifyCollectionChanged events. When exploring how to
// build this, we settled on using a Select LINQ operator for our type. Due to
// some problems with C# method binding rules, we had to add an interface
// to express the right type information. But, this works.
var people = _friends.Select(accountName =>
    new FakebookPerson(App.FabricAddress, accountName, App.ConnectThroughGateway));
Dispatcher.Invoke(() => FriendsListBox.ItemsSource = people);
By using client-side view model types, a client application’s data binding code can easily access properties such as a person’s first & last names as well as profile picture. This information is not as easily obtainable via just the user’s account name, so a transformation like this helps keep the level of abstraction high when writing client apps while preserving the dynamic nature of the underlying data.
Fakebook’s NewsFeed and Person actors were implemented in a curious way. For our collection actors, we built the application then used a complex, actor-specific deployment directory structure with many supporting files in place to describe the actor. This has the advantage that our infrastructure knows what a list actor is, in great detail. However, for Fakebook a more flexible approach was employed. The NewsFeed and Person actors both define .NET assemblies with a type each that contains a number of actor methods. But we use our empty actor implementation to deploy these assemblies to each of them. This makes it easy to update the implementation, and is a bit easier to manage.
For the Actor Runtime’s infrastructure, we create a service consisting of the empty actor app, called “fabric:/actor/fakebook”. We then create instances of that service with names like “fabric:/actor/fakebook/People” and “fabric:/actor/fakebook/Newsfeed”. Using a partitioning scheme, we then create individual IActorStates for individual accounts. So an account name like “Bob.Smith” is used as a partition key for the People and Newsfeed actors. Fakebook makes use of similarly partitioned list and dictionary actors as well.
The Actor Framework provides for high availability by allowing multiple replicas of an actor to run within the cloud. If a machine holding one replica crashes, then we already have other replicas to choose from to continue running the service. The number of replicas is configurable, and for Fakebook we are using a total of two replicas per service. The Actor Runtime will designate one as the primary and a second as the secondary. Additional replicas all become secondaries.
As you may have read in the Actor Framework documentation, actors achieve a limited transaction-like set of semantics via a quorum commit. Changes are not committed to the primary until they are first written to a quorum of secondaries. After that, the changes are committed to the primary and acknowledged to the client.
Meanwhile, requests from the client to an actor are auto-idempotent. Consider a client that makes a request and never receives an acknowledgement. This could happen for several reasons: the server could have crashed before completing the operation, or after completing it but before acknowledging it. Similarly, network connectivity could have been lost. The Actor Framework solves this by attaching a unique sequence number to every request and storing the result of the last request in the actor's replicated state. By doing this, if a client issues request #5 and loses its network connection, it can reconnect and then safely re-issue request #5. If the operation already ran, the client will get the previously cached result from actor state. If the request didn't successfully complete the first time, the request will now execute. Importantly, the complexity of handling this is built into the client-side and server-side logic of the Actor Framework, so users do not need to think about it. They simply call a method.
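The idea can be sketched in a few lines (Python for illustration only; the real Actor Framework implements this inside its client and server runtime, and the names below are invented): the actor remembers the last sequence number and result per client, so a retried request replays the cached result instead of executing twice.

```python
class IdempotentActor:
    """Toy actor that makes retried requests safe to re-issue."""
    def __init__(self):
        self.balance = 0
        # conceptually part of replicated actor state:
        # last (sequence number, result) seen per client
        self._last = {}

    def handle(self, client_id, seq, amount):
        last = self._last.get(client_id)
        if last is not None and last[0] == seq:
            return last[1]          # duplicate request: replay cached result
        self.balance += amount      # execute the operation exactly once
        result = self.balance
        self._last[client_id] = (seq, result)
        return result


actor = IdempotentActor()
print(actor.handle("client-a", 1, 10))  # 10
print(actor.handle("client-a", 1, 10))  # retried request: still 10, not 20
print(actor.handle("client-a", 2, 5))   # 15
```

A production version would track more than the single last request and replicate this bookkeeping with the rest of actor state, but the retry-safety argument is the same.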
One not-yet-implemented feature is persistence. All data in actors is made highly available via replication across machines, but all that data is stored in memory. Actor state is not currently persisted to disk or any other network storage mechanism like an Azure blob or table. One consequence is that if the power goes out to the entire cluster, you lose everything. This is an active area for future development.
A traditional database system might model a user using several tables, with one row for profile information and multiple rows representing friends and images for each person. Scalability challenges would be hit once you exceed the number of operations possible on an individual database (for example, SQL Server's new Hekaton engine may max out around 60,000 operations/second). While databases can go a long way, enterprises need systems capable of scaling to a much wider scale. Fakebook is an attempt at building a competing vision for data storage. While scale-up is important, an actor model with relatively easily separable data like Fakebook should be ideal for scale-out. Each actor can maintain its own state in memory, local to that actor. The hope is that a database acting as a single point of failure is not necessary.
One common criticism of actor models is that while they are easy to write, they require a significant amount of optimization. We found the same problem, and needed to invest in multi-tenant actors to improve the scalability. Performance is still very much a work in progress for the Actor Framework, but we hope to show some of the optimizations we found particularly valuable, as well as some we hope to make in the future.
One of the challenges is to get the most use out of each machine. There are a finite number of sockets on each machine. Similarly, new actors require allocating state and entries in a naming table that can slow things down. The creation of service instances on the Actor Runtime is unfortunately slow and can gate our performance. Mapping one actor to one process is a horrible idea, and mapping one actor to shared state within a process can often be insufficient. To solve this, we looked into partitioned service instances for hosting multi-tenant actors.
Think of partitioning as creating buckets for actors on various machines. Those buckets can be empty, or you can fill them with multiple actors. By doing this, the cost of creating the buckets is amortized over the number of partitions. Each individual actor is created within its appropriate bucket, and receives its own isolated version of actor state. See Figure 3.
Figure 3 - Partitioned List Actors
This allows for actor creation times to be vastly faster, by about 3 orders of magnitude.
Now let’s draw a more complete picture. The Actor Runtime will use processes on various machines for each service type (like a list service, a dictionary service, our empty actor service, etc). Each process hosts zero or more service instances (both primaries and secondaries, though let’s ignore secondaries for now). Processes take a while to spin up and service instances are expensive to create. In a naïve hosting scenario without partitioning, consider a list service and a dictionary service with many instances, mapping to one collection each. Here is what will be running on a cluster. Service instances are in blue below.
Figure 4 - Non-Partitioned Hosting
In the picture above, creating new individual collections requires creating new service instances, which is an expensive operation. With partitioning in the picture, we create fewer service instances.
Figure 5 - Partitioned Hosting
Replication is done in the Actor Framework at the service instance level. So in this picture, “List A” and “List B” both share the same IActorState in terms of replication. However, this could lead to conflicts if both lists used a field of the same name, allowing one list to scribble over values from a separate list. The Actor Framework provides a further level of state isolation (an IsolatedActorState) that is a sub-space within an IActorState. So partitioned actors share the same replication mechanism & characteristics, but get their own isolated view of their state.
The mapping from the name of an individual list to its location is affected by partitioning. Without partitioning, the Actor Runtime will load balance at the service instance level and allocates instances to machines in a reasonable way. However, when using partitioning, the mapping from the name of a list to a list partition is done by hashing the name and then mapping the hash onto a range. For example, a name like "List A" may hash to a value between 1 and 100, and all hash values in the range 1-25 are mapped to list partition 1, 26-50 to list partition 2, etc.
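A hedged sketch of such a hash-and-range scheme follows (illustrative only: the hash function, key space, and bucket count here are assumptions for the example, not the Actor Runtime's actual implementation). The important property is that the same name always lands in the same bucket.

```python
import zlib

NUM_PARTITIONS = 4
KEY_SPACE = 100  # hash values fall in 1..100, as in the example above


def partition_for(name: str) -> int:
    """Map an actor name to a partition bucket via hash ranges."""
    h = zlib.crc32(name.encode()) % KEY_SPACE + 1      # deterministic, 1..100
    width = KEY_SPACE // NUM_PARTITIONS                # 25 keys per bucket
    return min((h - 1) // width + 1, NUM_PARTITIONS)   # buckets 1..4


# The same account name always resolves to the same bucket:
assert partition_for("Bob.Smith") == partition_for("Bob.Smith")
print(partition_for("Bob.Smith"), partition_for("List A"))
```

Because the mapping is a pure function of the name, any client or node can locate a partition without consulting a central directory.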
Fakebook employs partitioning for each of the actors it uses – lists, dictionaries, person actors & news feed actors. This substantially reduced the cost to create new people.
Another challenge involves actor to actor communication, where we use IActorProxy to represent a communications channel between two actors. Our initial design required a unique proxy object for every actor-to-actor communication, and cached proxies for quick reuse later. In a simple example, I envisioned 256 people on Fakebook, and each of those 256 people would be a friend with all 255 other people. However, this approach was flawed. Keeping those IActorProxy objects cached required keeping just under 64K sockets open. We tried developing this on one machine and ran out of sockets!
To fix this, we are exploring two approaches – sharing sockets when talking between actor partition buckets, as well as ditching the caching & adding in async support for establishing new actor proxies. We anticipate this will contribute substantial wins.
One of the most significant ways to improve performance is to reduce the chattiness of your protocols over the network. As a simple example, one of our Actor Framework performance tests adds one million integers to a CloudList<T> and sorts them. Adding one million elements one at a time was horribly inefficient. Instead, we needed to change the protocol to support batching to get a higher level of performance. Additionally, readers must note that method calls that turn into network calls are significantly less reliable than normal method calls. These considerations led to a new interface, IListAsync<T>:
namespace System.Cloud.Collections
{
public interface IListAsync<T> : ICollectionAsync<T>
{
Task<T> GetItemAsync(int index);
Task SetItemAsync(int index, T value);
Task<int> IndexOfAsync(T item);
Task InsertAsync(int index, T item);
Task RemoveAtAsync(int index);
// Less chatty versions
Task AddAsync(IEnumerable<T> items);
Task RemoveRangeAsync(int index, int count);
}
}
The AddAsync and RemoveRangeAsync methods will perform much better. Additionally, since they are async methods, they allow clients to more easily adopt async method calls as a common design pattern, then use continuations to resume execution when those operations have completed.
A similar change was made to actor methods. In the example of setting up friendships between multiple people, changing the ForceMakeFriends method to take an array of friends led to a substantial improvement.
More details on how to run the Fakebook example can be found in the CodePlex site, and we are continuing work on the v0.60 release. Subscribe to the CodePlex feed and our Blog to be kept up to date on the latest developments.
For those of you who are using ActorFx in implementations, we'd always like to hear more about how it's useful to you and how it can be improved. We are looking forward to your comments/suggestions, and stay tuned for more cool stuff coming in our next release!

The W3C Pointer Events emerging standard continues to gain traction, advancing support for interoperable mouse, touch, and pen interactions across the web. Further to our previous blog post where we highlighted the work the Dojo team is doing with Pointer Events, we can now confirm an implementation of Pointer Events has been added to the patch list for Dojo Toolkit 2.0.
Pointer Events makes it easier to support a variety of browsers and devices by saving Web developers from writing unique code for each input type. The specification has earned positive feedback from the developer community -- many are already embracing it as a unified model for cross-browser multi-modal input.
In our previous blog post on W3C Pointer Events, we highlighted feedback shared by members of the jQuery, Cordova, and Dojo communities. The team at Nokia is also excited about progress with the Pointer Events standardization work, as Nokia's Art Barstow, Chair of the W3C's Pointer Events Working Group, noted:
Google, Microsoft, Mozilla, jQuery, Opera and Nokia are among the industry members working on the Pointer Events standard in the W3C's Pointer Events Working Group. Pointer Events is designed to handle hardware-agnostic, multi-user input from devices like a mouse, pen, or touchscreen, and we are pleased to see it achieve Candidate Recommendation status in W3C. Pointer Events is a great way for developers to enable better user interaction with the mobile Web and we are excited to see the various implementations around the Web that are already underway. Web developers can start coding with Pointer Events today and we look forward to further progress with the standard and adoption within the Web community.
Pointer Events at //Build 2013
During the recent //Build 2013 event, Jacob Rossi of the Internet Explorer (IE) team presented Lighting Your Site Up on Windows 8.1 which included guidance on how Web developers can use the capabilities of Pointer Events to make web sites ‘shine’ across many devices such as touch/mouse/pen, high resolution screens, and screen sizes from phones to desktops, taking advantage of sensors and other hardware innovations. The Internet Explorer 11 Preview implementation has been updated from Internet Explorer 10 to include the latest Candidate Recommendation specification for W3C Pointer Events - see Pointer Events updates in the IE11 Developer Guide for further details.
On Wed May 29th, Josh Holmes of the IE team presented Pointer Events in his session on touch.
· Custom Migrations Operations were enabled by a contribution from iceclow and this blog post provides an example of using this new feature.
· Pluggable Pluralization & Singularization Service was contributed by UnaiZorrilla.
MS Open Tech Announces Intent to Implement in Blink While Continuing WebKit Implementation
From: Asir Vedamuthu Selvasingh, Principal Program Manager, Microsoft Open Technologies, Inc.
‘Candidate Recommendation’ indicates that the W3C considers the specification widely reviewed and satisfying the Working Group’s technical requirements. It signals a call for additional implementation experience to inform the group.
Learn more about Pointer Events on Web Platform Docs
As you start building, migrating, or testing your apps using Pointer Events on various browser platforms, you should check out the resources available on the Pointer Events Wiki at Web Platform Docs.
As always, let us know how the latest release works for you and how you like the new features! To send feedback or questions, just use MSDN Forums or Stack Overflow.
I was fortunate to be invited to speak on behalf of MS Open Tech at last weekend’s LinuxFest Northwest in Bellingham, WA. This was a local event with a wide variety of developers and tech enthusiasts who gathered at Bellingham Technical College to participate in a broad spectrum of presentations, demonstrations, and labs.
My presentation Microsoft, Linux and the Open Source Community was part of the Developing a Community Track at LinuxFest so, given Microsoft Open Technologies, Inc. had just celebrated our One Year Anniversary, I took this opportunity to demonstrate some of the MS Open Tech projects that are enabling the open source community to benefit from new interoperability technology initiatives:
Linux on Windows Azure – Just prior to LinuxFest, the Azure team announced general availability of Windows Azure Infrastructure-as-a-Service. Windows Azure Infrastructure Services enable you to deploy and run durable VMs in the cloud. As well as Windows Server options, the built-in image gallery of VM templates includes Linux images for Ubuntu, CentOS, and SUSE Linux distributions. During my presentation session, I used Windows Azure Web Sites to create a new cloud-based WordPress site including a MySQL instance.
VM Depot – Built on the capabilities of Linux on Windows Azure, VM Depot is a cloud-based catalog of more than 200 open source Linux virtual machine images for Windows Azure contributed by the community. Developed by MS Open Tech, on VM Depot the community can build, deploy and share their favorite Linux configurations and other freely downloadable images, create custom open source stacks, and work with others to build new architectures for the cloud that leverage the openness and flexibility of the Windows Azure platform.
OData – OData is an open data protocol jointly developed by Microsoft, IBM, SAP, Citrix, and other industry partners and currently undergoing standardization via OASIS. We have recently revamped the website and encourage community contributions to develop consumer and producer services using OData as highlighted in the Ecosystem subsection. My demonstration showed how OData can be used by the community to publish and access open government data using the DataLab open source code on GitHub.
Pointer Events - Pointer Events is an emerging standard developed by the W3C to define a single device input model – mouse, pen and touch – across multiple browsers. Microsoft contributed the initial specification and is working to demonstrate cross-browser interoperability for Pointer Events. MS Open Tech developed an open source Pointer Events prototype for WebKit on HTML5 Labs and submitted the patch to the WebKit community. We encourage the developer community to learn more about Pointer Events on Web Platform Docs and join the #PointerEvents discussion.
I would like to thank the LinuxFest organizers for the opportunity to participate. Events like LinuxFest are an ideal way for us to share the work we do at MS Open Tech with the open source community and seek feedback on our efforts. I look forward to the next opportunity.
Thanks!! | http://blogs.msdn.com/b/interoperability/default.aspx?PageIndex=1&PostSortBy=MostRecent | CC-MAIN-2016-07 | refinedweb | 4,597 | 51.78 |
GIT-FETCH(1)                  Git Manual (Git 1.6.5.8)

NAME
       git-fetch - Download objects and refs from another repository

SYNOPSIS
       git fetch <options> <repository> <refspec>…

OPTIONS
       -v, --verbose
           Be verbose.

GIT URLS
       /path/to/repo.git/

       They are mostly equivalent, except when cloning. See git-clone(1).

       [url "<actual url base>"]
               pushInsteadOf = <other url base>

       For example, with this:

       [url "ssh://example.org/"]
               pushInsteadOf = git://example.org/

       See also git-config(1):

       [remote "<name>"]
               url = <url>
               pushurl = <pushurl>

EXAMPLES
       Update the remote-tracking branches:

           $ git fetch origin

       The above command copies all branches from the remote refs/heads/
       namespace and stores them to the local refs/remotes/origin/ namespace,
       unless the branch.<name>.fetch option is used to specify a non-default
       refspec.

       Using refspecs explicitly:

           $ git fetch origin +pu:pu maint:tmp

SEE ALSO
       git-pull(1)
We saw in the previous section how NumPy's universal functions can be used to vectorize operations and thereby remove slow Python loops. Another means of vectorizing operations is to use NumPy's broadcasting functionality. Broadcasting is simply a set of rules for applying binary ufuncs (e.g., addition, subtraction, multiplication, etc.) on arrays of different sizes.
import numpy as np
a = np.array([0, 1, 2])
b = np.array([5, 5, 5])
a + b
array([5, 6, 7])
Broadcasting allows these types of binary operations to be performed on arrays of different sizes–for example, we can just as easily add a scalar (think of it as a zero-dimensional array) to an array:
a + 5
array([5, 6, 7])

We can similarly extend this to arrays of higher dimension. Observe the result when we add a one-dimensional array to a two-dimensional array:
M = np.ones((3, 3))
M

array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.]])

M + a

array([[ 1.,  2.,  3.],
       [ 1.,  2.,  3.],
       [ 1.,  2.,  3.]])
Here the one-dimensional array a is stretched, or broadcast, across the second dimension in order to match the shape of M.
While these examples are relatively easy to understand, more complicated cases can involve broadcasting of both arrays. Consider the following example:
a = np.arange(3)
b = np.arange(3)[:, np.newaxis]
print(a)
print(b)

[0 1 2]
[[0]
 [1]
 [2]]

a + b

array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4]])
Just as before we stretched or broadcasted one value to match the shape of the other, here we've stretched both a and b to match a common shape, and the result is a two-dimensional array!
The geometry of these examples is visualized in the following figure (Code to produce this plot can be found in the appendix, and is adapted from source published in the astroML documentation. Used by permission).
The light boxes represent the broadcasted values: again, this extra memory is not actually allocated in the course of the operation, but it can be useful conceptually to imagine that it is.
Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays:

- Rule 1: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is padded with ones on its leading (left) side.
- Rule 2: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
- Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error is raised.

To make these rules clear, let's consider a few examples in detail.
M = np.ones((2, 3))
a = np.arange(3)
Let's consider an operation on these two arrays. The shapes of the arrays are:
M.shape = (2, 3)
a.shape = (3,)
We see by rule 1 that the array a has fewer dimensions, so we pad it on the left with ones:
M.shape -> (2, 3)
a.shape -> (1, 3)
By rule 2, we now see that the first dimension disagrees, so we stretch this dimension to match:
M.shape -> (2, 3)
a.shape -> (2, 3)
The shapes match, and we see that the final shape will be (2, 3):
M + a
array([[ 1.,  2.,  3.],
       [ 1.,  2.,  3.]])
a = np.arange(3).reshape((3, 1))
b = np.arange(3)
Again, we'll start by writing out the shape of the arrays:
a.shape = (3, 1)
b.shape = (3,)
Rule 1 says we must pad the shape of b with ones:
a.shape -> (3, 1)
b.shape -> (1, 3)
And rule 2 tells us that we upgrade each of these ones to match the corresponding size of the other array:
a.shape -> (3, 3)
b.shape -> (3, 3)
Because the result matches, these shapes are compatible. We can see this here:
a + b
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4]])
M = np.ones((3, 2))
a = np.arange(3)
This is just a slightly different situation than in the first example: the matrix M is transposed.
How does this affect the calculation? The shapes of the arrays are:
M.shape = (3, 2)
a.shape = (3,)
Again, rule 1 tells us that we must pad the shape of a with ones:
M.shape -> (3, 2)
a.shape -> (1, 3)
By rule 2, the first dimension of a is stretched to match that of M:
M.shape -> (3, 2)
a.shape -> (3, 3)
Now we hit rule 3–the final shapes do not match, so these two arrays are incompatible, as we can observe by attempting this operation:
M + a
--------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-13-9e16e9f98da6> in <module>() ----> 1 M + a ValueError: operands could not be broadcast together with shapes (3,2) (3,)
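The three rules can be written down directly in plain Python. The helper below (a hypothetical `broadcast_shape`, not part of NumPy's public API, though NumPy 1.20+ ships the equivalent `np.broadcast_shapes`) reproduces the behavior we saw in all three examples:

```python
def broadcast_shape(s1, s2):
    # Rule 1: pad the shorter shape with ones on its leading (left) side.
    if len(s1) < len(s2):
        s1 = (1,) * (len(s2) - len(s1)) + tuple(s1)
    else:
        s2 = (1,) * (len(s1) - len(s2)) + tuple(s2)
    out = []
    for d1, d2 in zip(s1, s2):
        if d1 == d2:           # dimensions already agree
            out.append(d1)
        elif d1 == 1:          # Rule 2: stretch the size-1 dimension
            out.append(d2)
        elif d2 == 1:
            out.append(d1)
        else:                  # Rule 3: neither is 1, so broadcasting fails
            raise ValueError(f"shapes {s1} and {s2} are not compatible")
    return tuple(out)

print(broadcast_shape((2, 3), (3,)))   # Example 1 -> (2, 3)
print(broadcast_shape((3, 1), (3,)))   # Example 2 -> (3, 3)
# broadcast_shape((3, 2), (3,))        # Example 3 raises ValueError
```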
Note the potential confusion here: you could imagine making a and M compatible by, say, padding a's shape with ones on the right rather than the left.
But this is not how the broadcasting rules work!
That sort of flexibility might be useful in some cases, but it would lead to potential areas of ambiguity.
If right-side padding is what you'd like, you can do this explicitly by reshaping the array (we'll use the np.newaxis keyword introduced in The Basics of NumPy Arrays):
a[:, np.newaxis].shape
(3, 1)
M + a[:, np.newaxis]
array([[ 1.,  1.],
       [ 2.,  2.],
       [ 3.,  3.]])
Also note that while we've been focusing on the + operator here, these broadcasting rules apply to any binary ufunc. For example, here is the logaddexp(a, b) function, which computes log(exp(a) + exp(b)) with more precision than the naive approach:
np.logaddexp(M, a[:, np.newaxis])
array([[ 1.31326169,  1.31326169],
       [ 1.69314718,  1.69314718],
       [ 2.31326169,  2.31326169]])
For more information on the many available universal functions, refer to Computation on NumPy Arrays: Universal Functions.
Broadcasting operations form the core of many examples we'll see throughout this book. We'll now take a look at a couple simple examples of where they can be useful.
In the previous section, we saw that ufuncs allow a NumPy user to remove the need to explicitly write slow Python loops. Broadcasting extends this ability. One commonly seen example is when centering an array of data. Imagine you have an array of 10 observations, each of which consists of 3 values. Using the standard convention (see Data Representation in Scikit-Learn), we'll store this in a $10 \times 3$ array:
X = np.random.random((10, 3))
We can compute the mean of each feature using the mean aggregate across the first dimension:
Xmean = X.mean(0)
Xmean
array([ 0.53514715, 0.66567217, 0.44385899])
And now we can center the X array by subtracting the mean (this is a broadcasting operation):
X_centered = X - Xmean
To double-check that we've done this correctly, we can check that the centered array has near zero mean:
X_centered.mean(0)
array([ 2.22044605e-17, -7.77156117e-17, -1.66533454e-17])
To within machine precision, the mean is now zero.
One place that broadcasting is very useful is in displaying images based on two-dimensional functions. If we want to define a function $z = f(x, y)$, broadcasting can be used to compute the function across the grid:
# x and y have 50 steps from 0 to 5
x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 50)[:, np.newaxis]

z = np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)
We'll use Matplotlib to plot this two-dimensional array (these tools will be discussed in full in Density and Contour Plots):
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(z, origin='lower', extent=[0, 5, 0, 5], cmap='viridis')
plt.colorbar();
The result is a compelling visualization of the two-dimensional function. | https://nbviewer.jupyter.org/github/donnemartin/data-science-ipython-notebooks/blob/master/numpy/02.05-Computation-on-arrays-broadcasting.ipynb | CC-MAIN-2019-13 | refinedweb | 1,244 | 74.69 |
Scott Guthrie posted a great example of how to create a feed reader using LINQ to XML. Today, I see Tim Heuer's post on the JavaScriptSerializer type in .NET 3.5. So, I thought I would mash them up and show how to use LINQ to implement Tim's idea of converting RSS to JSON. Unfortunately, the JavaScriptSerializer is marked as obsolete with a note to use the DataContractJsonSerializer instead. Here is what a generic HTTP Handler would look like that mashes up these techniques using .NET 3.5.
<%@ WebHandler Language="C#" Class="Handler" %>
using System;
using System.Web;
using System.Linq;
using System.Xml.Linq;
using System.Web.Script.Serialization;
using System.Text;
using System.ServiceModel;
using System.Runtime.Serialization;
using System.Collections.Generic;
public class Handler : IHttpHandler {
public void ProcessRequest (HttpContext context) {
context.Response.ContentType = "application/json";

XNamespace slash = "http://purl.org/rss/1.0/modules/slash/";
XDocument feed = XDocument.Load("http://blogs.msdn.com/kaevans/rss.xml");

// keep only the posts published within the past 7 days
var newPosts = from item in feed.Descendants("item")
               let published = DateTime.Parse(item.Element("pubDate").Value)
               where (DateTime.Now - published).Days < 7
               select new
               {
                   Title = item.Element("title").Value,
                   Published = published,
                   Url = item.Element("link").Value,
                   NumComments = (int?)item.Element(slash + "comments") ?? 0
               };
List<FeedItem> itemsList = new List<FeedItem>();
foreach (var item in newPosts)
{
itemsList.Add(new FeedItem { Title = item.Title, Published = item.Published, Url = item.Url, NumComments = item.NumComments });
}
DataContractJsonSerializer ser = new DataContractJsonSerializer(itemsList.GetType());
ser.WriteObject(context.Response.OutputStream, itemsList);
}
public bool IsReusable {
get {
return false;
}
}
}
[DataContract]
public class FeedItem
{
[DataMember] public string Title { get; set; }
[DataMember] public DateTime Published { get; set; }
[DataMember] public string Url { get; set; }
[DataMember] public int NumComments { get; set; }
}
Look at how simple that is! Using LINQ, we are able to get just the posts that were published within the past 7 days, and we serialize the result to JSON. The results are shown below.
[{"NumComments":0,"Published":"\/Date(1188917760000-0500)\/",
"Title":"Using WCF, JSON, LINQ, and AJAX: Passing Complex Types to WCF Services with JSON Encoding",
"Url":"http:\/\/blogs.msdn.com\/kaevans\/archive\/2007\/09\/04\/using-wcf-json-linq-and-ajax-passing-complex-types-to-wcf-services-with-json-encoding.aspx"},
{"NumComments":4,"Published":"\/Date(1188836640000-0500)\/",
"Title":"WCF and LINQ",
"Url":"http:\/\/blogs.msdn.com\/kaevans\/archive\/2007\/09\/03\/wcf-and-linq.aspx"},
{"NumComments":1,"Published":"\/Date(1188652680000-0500)\/",
"Title":"Are you ready for some football?!?!",
"Url":"http:\/\/blogs.msdn.com\/kaevans\/archive\/2007\/09\/01\/are-you-ready-for-some-football.aspx"}]
So very cool. Look at how terse yet readable that code is! Contrast that to something like this from only a couple of years ago... Getting XML From Somewhere Else, which doesn't even cover filtering the XML and converting to JSON... which I probably would have transformed using some bizarre XSLT geekery. This is cleaner and easier to understand.
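The same filter-and-serialize idea is just as terse outside .NET. As a rough comparison (not part of the original post, and using a tiny inline feed with ISO dates standing in for a real blog's RSS), here is a sketch using only Python's standard library:

```python
import json
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta

RSS = """<rss><channel>
  <item><title>Old post</title><link>http://example.com/old</link>
        <pubDate>2007-08-01T00:00:00</pubDate></item>
  <item><title>New post</title><link>http://example.com/new</link>
        <pubDate>2007-09-04T00:00:00</pubDate></item>
</channel></rss>"""

def rss_to_json(xml_text, now, max_age_days=7):
    # Parse the feed, keep only recent items, and serialize them to JSON.
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        published = datetime.fromisoformat(item.findtext("pubDate"))
        if now - published < timedelta(days=max_age_days):
            items.append({"Title": item.findtext("title"),
                          "Published": published.isoformat(),
                          "Url": item.findtext("link")})
    return json.dumps(items)

print(rss_to_json(RSS, now=datetime(2007, 9, 5)))
```

Real-world RSS uses RFC 822 dates, which `email.utils.parsedate_to_datetime` can handle; ISO dates keep the sketch short.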
NAME
RAND_egd, RAND_egd_bytes, RAND_query_egd_bytes - query entropy gathering daemon
SYNOPSIS
#include <openssl/rand.h>

int RAND_egd_bytes(const char *path, int num);
int RAND_egd(const char *path);
int RAND_query_egd_bytes(const char *path, unsigned char *buf, int num);
DESCRIPTION
On older platforms without a good source of randomness such as /dev/urandom, it is possible to query an Entropy Gathering Daemon (EGD) over a local socket to obtain randomness and seed the OpenSSL RNG. The protocol used is defined by the EGDs available at or.
RAND_egd_bytes() requests num bytes of randomness from an EGD at the specified socket path, and passes the data it receives into RAND_add(). RAND_egd() is equivalent to RAND_egd_bytes() with num set to 255.
RAND_query_egd_bytes() requests num bytes of randomness from an EGD at the specified socket path, where num must be less than 256. If buf is NULL, it is equivalent to RAND_egd_bytes(). If buf is not NULL, then the data is copied to the buffer and RAND_add() is not called.
OpenSSL can be configured at build time to try to use the EGD for seeding automatically.
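The wire format behind these calls is small enough to sketch. As commonly implemented (this is an illustrative sketch, not OpenSSL source), a blocking read is a two-byte request, command 0x02 followed by a count byte, answered by exactly that many bytes of entropy:

```python
import io
import struct

def egd_request_blocking(num):
    # Command 0x02 = "read entropy, block until available".
    # The count must fit in a single byte, matching the num < 256 limit above.
    if not 0 < num < 256:
        raise ValueError("num must be between 1 and 255")
    return struct.pack("BB", 0x02, num)

def egd_read_response(sock_file, num):
    # A blocking read is answered with exactly `num` raw bytes of entropy.
    data = sock_file.read(num)
    if len(data) != num:
        raise IOError("short read from EGD socket")
    return data

# Simulate the exchange against an in-memory stand-in for the Unix socket.
req = egd_request_blocking(8)
assert req == b"\x02\x08"
fake_daemon = io.BytesIO(b"\xaa" * 8)
print(egd_read_response(fake_daemon, 8))
```

The other commands (0x00 to query available entropy, 0x01 for a non-blocking read) use the same one-byte-command framing; consult your EGD implementation's documentation before relying on this.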
RETURN VALUES
RAND_egd() and RAND_egd_bytes() return the number of bytes read from the daemon on success, or -1 if the connection failed or the daemon did not return enough data to fully seed the PRNG.
RAND_query_egd_bytes() returns the number of bytes read from the daemon on success, or -1 if the connection failed.
SEE ALSO
RAND_add(3), RAND_bytes(3), RAND(7)
Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at. | https://www.openssl.org/docs/manmaster/man3/RAND_egd.html | CC-MAIN-2019-22 | refinedweb | 275 | 70.43 |
curl -sSL | bash
I am getting the error
bash: line 1: syntax error near unexpected token `newline'
bash: line 1: <HTML>.
Actually, the issue is more likely that the version of curl on your machine doesn't support redirects. Below the instructions for the curl command there is a note suggesting that if you see errors, you should update your version of curl.
Note that replacing the URL with the raw GH URL is also a correct solution, but you probably want to update curl regardless.
This is the mail archive of the newlib@sourceware.org mailing list for the newlib project.
On Thu, Jan 3, 2013 at 8:47 PM, Steve Ellcey wrote:
> Steven,
>
> Could you test out this patch and see if it works for you. I was able to
> build with no problems. I decided to try the minimal change needed and look
> at doing some of the cleanup that Richard mentioned in a later patch.
>
> Steve Ellcey
>
> diff --git a/newlib/libc/machine/mips/memcpy.S b/newlib/libc/machine/mips/memcpy.S
> index fe7cb15..8db8953 100644
> --- a/newlib/libc/machine/mips/memcpy.S
> +++ b/newlib/libc/machine/mips/memcpy.S
> @@ -56,7 +56,7 @@
> #endif
> #endif
>
> -#if (_MIPS_SIM == _ABI64) || (_MIPS_SIM == _ABIN32)
> +#if defined(_MIPS_SIM) && ((_MIPS_SIM == _ABI64) || (_MIPS_SIM == _ABIN32))
> #ifndef DISABLE_DOUBLE
> #define USE_DOUBLE
> #endif
> @@ -203,7 +203,7 @@
> #define REG1 t1
> #define REG2 t2
> #define REG3 t3
> -#if _MIPS_SIM == _ABIO32
> +#if defined(_MIPS_SIM) && (_MIPS_SIM == _ABIO32)
> # define REG4 t4
> # define REG5 t5
> # define REG6 t6

This at least builds and passes testing, see. (Not sure why the newlib test results are repeated so many times...) Just let me know if Richard's patch needs testing also.

Ciao!
Steven
Here's how it works:
Animate SVG element to follow a curve
Moving an element along a line is one of those things you think you can figure out on your own but you can't. Then you see how it's done and you think, "I'm dumb".
But you're not dumb. It's totally un-obvious.
We used Bostock's Point-Along-Line Interpolation example as our model. It creates a custom tween that we apply to each circle as an attrTween transition.
function translateAlong(path) {
    var l = path.getTotalLength();
    return function (d, i, a) {
        return function (t) {
            var p = path.getPointAtLength(t * l);
            return "translate(" + p.x + "," + p.y + ")";
        };
    };
}
That's the custom tween. I'd love to explain how it works, but… well, we have a getPointAtLength function that SVG gives us by default on every <path> element. We use it to generate translate(x, y) strings that we feed into the transform attribute of a <circle> element. That part is obvious.
The part I don't get is the 3-deep function nesting for currying. The top function returns a tween, I assume. It's a function that takes d, i, a as arguments and never uses them. Instead, it returns a time-parametrized function that returns a translate(x, y) string for each t.
We know from D3 conventions that t runs in the [0, 1] interval, so we can assume the tween gives the "point at t percent of full length of path" coordinates.
Great.
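What getPointAtLength does is ordinary arc-length math; the browser just hides it. A sketch of the same idea over a plain polyline (Python here, not from the original post) makes the tween less magic: walk the segments until you've covered t times the total length, then interpolate inside the segment you stopped in.

```python
import math

def point_at_fraction(points, t):
    # Walk the polyline until t * total_length is covered, then
    # linearly interpolate within the current segment.
    segs = list(zip(points, points[1:]))
    lengths = [math.dist(a, b) for a, b in segs]
    remaining = t * sum(lengths)
    for (x1, y1), (x2, y2) in segs:
        seg = math.dist((x1, y1), (x2, y2))
        if remaining <= seg:
            f = remaining / seg if seg else 0.0
            return (x1 + f * (x2 - x1), y1 + f * (y2 - y1))
        remaining -= seg
    return points[-1]  # t == 1.0 with floating-point round-off

path = [(0, 0), (10, 0), (10, 10)]      # an L-shaped "path", total length 20
print(point_at_fraction(path, 0.25))    # -> (5.0, 0.0)
print(point_at_fraction(path, 0.75))    # -> (10.0, 5.0)
```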
We apply it in our MigrationLine component like this:
// inside MigrationLine
_transform(circle, delay) {
    const { start } = this.props,
          [x1, y1] = start;

    d3.select(circle)
      .attr("transform", `translate(${x1}, ${y1})`)
      .transition()
      .delay(delay)
      .duration(this.duration)
      .attrTween("transform", translateAlong(this.refs.path))
      .on("end", () => {
          if (this.state.keepAnimating) {
              this._transform(circle, 0);
          }
      });
}

componentDidMount() {
    const { Ncircles } = this.props;

    this.setState({ keepAnimating: true });

    const delayDither = this.duration * Math.random(),
          spread = this.duration / Ncircles;

    d3.range(delayDither, this.duration + delayDither, spread)
      .forEach((delay, i) =>
          this._transform(this.refs[`circles-${i}`], delay)
      );
}
We call _transform on every circle in our component. Each gets a different delay so that we get a roughly uniform distribution of circles on the line. Without this, they come in bursts, and it looks weird. delayDither ensures the circles for each line start with a random offset, which also helps fight the burstiness.

Here's what the map looks like without delayDither and with a constant delay between circles regardless of how many there are.
See? No good.
You can see the full MigrationLine code on Github.
Zoom and pan a map
This part was both harder and easier than I thought.
You see, D3 comes with something called d3.zoom. It does zooming and panning.
Cool, right? Should be easy to implement. And it is… if you don't fall down a rabbit hole.
In the old days, the standard approach was to
.call() zoom on an
<svg> element, listen for
zoom events, and adjust your scales. Zoom callback would tell you how to change zoominess and where to move, and you'd adjust your scales and re-render the visualization.
We tried that approach with hilarious results:
First, it was moving in the wrong direction, then it was jumping around. Changing the geo projection to achieve zoom and pan was not the answer. Something was amiss.
Turns out in D3v4, the zoom callback gets info to build an SVG transform: a translate() followed by a scale().
Apply those on the visualization and you get working zooming and panning! Of anything :D
// in SVG rendering component
onZoom() {
    this.setState({ transform: d3.event.transform });
}

get transform() {
    if (this.state.transform) {
        const { x, y, k } = this.state.transform;
        return `translate(${x}, ${y}) scale(${k})`;
    } else {
        return null;
    }
}

// ...

render() {
    return (
        <svg width={width} height={height}>
            <g transform={this.transform}>
                {/* ... */}
            </g>
        </svg>
    );
}
Get d3.event.transform, save it in state or a data store of some sort, re-render, use it on a <g> element that wraps everything.
Voila, zooming and panning.
You can see the full World component on Github.
Happy hacking!
Still trying to find my blueprints for it (I think it's on my Ubuntu box, but my monitor is fried on it, grrr, and I'm sure it's not accessible on FTP, I'm on my Windows 10 workstation now), but here is an interesting example of MOSFET motor speed control via GPIO or PWM.
Obsolete: commercial ESCs (electronic speed controls) are so low-cost (about $5 each) and ubiquitous that building a controller from discrete MOSFETs is not practical. Use ESCs instead, like the SimonK 30A.
The top center part of it is kind of dumb in that I think their point is to use a pot read by an ADC to do the speed control. However I would do it via UART or software instead. However the rest of the circuit is a great example of real motor speed control using an inexpensive MOSFET driven by MCU or BBB PWM.
End of Obsolete
Part 1
OK, let me explain what I'm doing here. A lot of you realize that what I'm building here is more of a tiny UAV than a toy drone, but I want to have enough processing power on-board that I can do whatever I want.
How it's different is my design philosophy: what I'm trying to do here is shift the majority of my work to a software domain from a hardware/power one. I'm just going to breadboard and build the drone, but all of my flight control will be done over the TCP/IP stack, with the processor running a service daemon that provides the guidance I'm looking for.
It will be based on a force-moment-mass model built from empirical tests (and weighing the darn thing) that will formally implement drone control that is smooth, scientific and possibly autonomous, and that is 100% done in software (based on the SENSOR and ACTUATOR sections in the physical design). In other words, it will be sweet once finished, capable of long-range semi-assisted flight. I'm thinking of integrating 4/5G data (via a USB modem) but only pushing across limited numeric data (caching its video) to keep my usage within plan, but I hope to be able to send a command that will return (via FTP over 4/5G) a nice compressed JPEG still whenever I want to see a location.
I'll just pull the saved video off via manual FTP over wifi once it comes back into range, or just pop the SD card on the cam itself (which might make more sense, though less techy).
Getting close to that time I need to start ordering stuff from mouser but I need to get a complete BOM together for the physical stuff.
I'll keep you guys in the loop on this one (as a lot of the issues I'm solving are directly related to your projects like voltage conversion) but I want to warn people that the final result may not be open-source as I might choose to release it as a boilerplate commercial version. However I can still make notes here to help in your own projects, and you can shadow my development and processes to develop your own (possibly open source).
Part 2
OK, just got my BBBW wireless acquiring well over about 200 feet (indoors) and about 400 feet (outdoors) so I have enough of a wireless tether to actually fly my drone. Very impressed with this little unit (the BBBW); I was a little concerned over the little antenna but they seem to work nice.
Next step is to integrate my MPU-6050 inertial unit to it via the P8/P9 I2C pins (the only interface on it). It's a 6DOF unit that I got from Hong Kong for about $5 with free shipping; it's on a breakout board and even has an "attitude engine" on it. Why this is important is that back in the day, iMEMS parts like piezo gyros and accelerometers were only 1DOF voltage-out, but this one not only has the ADCs onboard, they are controlled by its own MCU (which calculates x, y, z, vx, vy, vz, ax, ay, az, tx, ty, tz and I think even rates) and allows the host (in this case the directly connected BBBW) to just receive the actual data we need.
If you want to experiment with the MPU-6050, it is 3.3V so it's compatible with the BBBW right out of the box, uses I2C, and is only about $5 total on eBay (just search on the part number); it comes mounted on a breakout board with a 0.1" header (like SIP) that you just solder on. I'll post up here as I have successes with interfacing it. I can also provide download locations for my tablet-based drone remote (Windows installer) as well as code examples on the NEO-6M, PIC and MPU integration.
I wanted to go nuts with external MCU's but now I'm thinking that I'll have a 3.3V domain and a 5V (TTL) domain on my power section so I'll get my PIC (5V) doing the PWM for the motor speed control. It will have 8 motors counter-rotating per pod (I think I might have a sweet way of controlling yaw through coordinated power throttling). So I'll be able to finally use my bootleg K150.
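The "coordinated power throttling" idea for yaw is the standard quadcopter mixer: spin the clockwise set of props a little faster and the counter-clockwise set a little slower, and the net reaction torque turns the craft without changing total lift. Here's a minimal sketch (one common sign convention, plain 4-motor X layout; an 8-motor coaxial build would just duplicate each output):

```python
def mix(throttle, roll, pitch, yaw):
    # X configuration: front-left, front-right, rear-right, rear-left.
    # FL/RR spin one way and FR/RL the other, so the yaw term trades speed
    # between the two counter-rotating pairs while total thrust stays put.
    m_fl = throttle + roll + pitch - yaw
    m_fr = throttle - roll + pitch + yaw
    m_rr = throttle - roll - pitch - yaw
    m_rl = throttle + roll - pitch + yaw
    # Clamp each command into the usable 0..1 throttle range.
    return [max(0.0, min(1.0, m)) for m in (m_fl, m_fr, m_rr, m_rl)]

print(mix(0.5, 0, 0, 0))      # hover: all four motors equal
print(mix(0.5, 0, 0, 0.1))    # yaw only: pairs split, total thrust unchanged
```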
I'm thinking maybe voltage divider (resistive... I'll put up some sweet discrete networks here so we have some reference designs to bridge cheaply and small-ly between the two voltage domains and others) to translate signals from TTL (5V) down to 3.3v (we did this with RS232 with the PIC for years and years now), and an led-heatshrinktube-phototransistor from 3.3v up to TTL (5V) forming a tiny single line/single direction opto-isolator.
Actually scratch that, here is a $0.18 jobbie that does exactly the same thing. Looks like optocoupler will do it if I can keep the discretes count low.
Part 3
Here is my autonomous flight planner (so its a little more than a drone).
This is my "map editor" where I read in a Google Earth map and what I do is I calibrate it for 2D cartesian in meters around an origin (in this case Central and Van Buren downtown) and then I calibrate it by clicking two points on the map I know the NMEA coordinates for (which Earth just gives you by pointing and clicking).
Then I go to my actual flight planner window and I click on the points in my flight path, and set the altitudes for each waypoint and feed it to the BBBW via FTP (its a pretty big file). I just hit the Waypoint button on my drone remote and it just runs the program, so its a good way to fly out of wifi range. I then get someone with an identical tablet to "catch" it when it comes in range, and it would be manually landed, or just the autoland feature that just nulls everything out and causes it to descend slowly enough that it doesn't hurt the drone when it touches down.
It was interesting doing this because you need to set a 3D radius from each waypoint point because it never actually hits the point (more of a sphere of a preset diameter) that once is passes through this imaginary sphere then it knows to go to the next sphere or point.
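That "sphere of a preset diameter" logic is only a few lines. A sketch of the waypoint advance (hypothetical names; positions and radius in meters):

```python
import math

def advance_waypoint(pos, waypoints, idx, capture_radius=3.0):
    # Return the index of the waypoint to fly toward; bump it once the
    # craft passes inside the capture sphere around the current one.
    if idx >= len(waypoints):
        return idx                      # mission complete
    if math.dist(pos, waypoints[idx]) <= capture_radius:
        idx += 1                        # inside the sphere: target the next one
    return idx

route = [(0, 0, 10), (50, 0, 15), (50, 50, 15)]
print(advance_waypoint((1, 1, 10), route, 0))    # -> 1 (within 3 m of wp 0)
print(advance_waypoint((20, 0, 12), route, 1))   # -> 1 (still en route)
```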
It also has my environmentally sealed Emerson camera (that I still need to get an SD card for) that is just bolted to the underside of the base board that I just manually hit the video record before taking off. When I get some video I'll post it here.
Part 4
OK (laughing my ass off). Do you guys want to try out the drone remote? Its not done yet, but I have a windows installer for it.
You can run it on Windows 8/10 but if you have a Windows tablet, install it (it just uses VB.NET's stock installer so it should come right off on an uninstall).
Right now all it does is let you push the buttons but it also reads the controls and it normalizes them (little numbers in the title bar). Kind of mess with it a bit to see if its good stuff. Also, you can't use both thumbs at the same time you have to kind of alternate on throttle and stick. Its a fun little beasty.
I've managed to successfully test my BBBW to Win10 tablet wifi connection and its pretty solid with quick acquire. I'm actually thinking this thing will work. I just use socket to shoot over the stacks. Works great, lmao. :D
OK someone download it and try installing it. Let me know if there are any problems and I'll get to correcting them. Lmfao seriously put it on your tablet. It's cool as heck.
Addendum 6/18/18
How to use 3-Phase Drone Motors in Real Life:
How to program the necessary PIC to feed control signals from the R-PI/Beagle via UART serial to the ESC's (via PWM) that control the motors.
(Check the Inet for an existing preprogrammed PIC (or other MCU) that does the UART In to 8-channel PWM out. There might be one out there.)
Thinking of sticking with just a single 5VDC domain (instead of multiple ones) and just using the onboard VR to run the SBC at 3.3V and then pushing the signals up or downhill like this:
Ok where I'm at on this project is after drowning in datasheets and noticing that my drone would look like a porcupine with all the traditional MOSFETs hanging out of it, I switched over to commercially available ESC's (which are basically MOSFET driven anyways).
Check on EBay for super-cheap jobbies out of China or Hong Kong or somewhere.
Still planning on using a PIC (switching over to a PIC16F88 because of its integrated peripherals) to which I'm going to put in a generic UART In to 8 PWM out using a software loop (not the integrated PWM on the chip) and just count the clock cycles. Have everything for this, just am dreading working with PIC assembly language again (MPLAB is still free so just get a bootleg K150 off of eBay and you are ready to go).
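The translation job for that glue PIC is simple to state: hobby ESCs expect a servo-style pulse of roughly 1000 to 2000 microseconds repeated at about 50 Hz, so each throttle byte received over the UART just maps linearly onto a pulse width. Here's the mapping sketched in Python (the real thing would be the cycle-counted PIC assembly loop; names are mine, not from any firmware):

```python
def byte_to_pulse_us(b, min_us=1000, max_us=2000):
    # Map a throttle byte 0..255 onto the standard 1-2 ms ESC pulse range.
    if not 0 <= b <= 255:
        raise ValueError("throttle byte out of range")
    return min_us + (max_us - min_us) * b // 255

def frame_to_pulses(frame):
    # One 8-byte UART frame -> eight PWM pulse widths, one per motor.
    if len(frame) != 8:
        raise ValueError("expected one byte per motor")
    return [byte_to_pulse_us(b) for b in frame]

print(frame_to_pulses(bytes([0, 128, 255, 64, 0, 0, 0, 0])))
```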
Also dreading the I2C and SPI integration on some of my sensors, so I might bit-bang with a glue PIC and then just feed it out somehow back to my R-PI or Beagle via UART. Looking forward to the coding of the actual brains of it though (UNIX C....command line...super-fast).
OK here is a link to some of the technical on this:
I have a force-moment-mass model with some really rough code on it I think in this thread. Poke around Beagleboard for it as it might make a good starting place to start coding a UAV-style drone in that it treats the drone like a computer model with integrating its pitch/yaw/roll moments of inertia (i.e. how difficult it is to spin around its C of G), its mass distribution and the forces of its propellers and gravity), so you can code in C on your controlling SBC.
Its all set up like man-lego in that I talk about the components (hardware and software) but its up to you as to how you build yours!
Addendum: here are some rough notes with some things I've found out that may or may not help.
Firstly, when using LiPo or NiMH batteries (what I'm using), it's appearing that the 1KV motors (perhaps the most ubiquitous kind) are better for RC-plane-type drones than quadcopter drones. I'm currently thinking that I need to upgrade to the 3KV versions to get my quadcopter design "off the ground". However, you can re-use the 1KV motors for a really nice thrust on plane types.
You can't run multiple 3-Phase drone motors off of one ESC (I matched the Amps on the ESC to the motors to start). While useless in flight, I was thinking of doing this for testing, and IT DOESN'T work. It just goes ting-ting-ting (no rotation) on all 4 motors. :(
The best motor mount I've found so far is to use no motor mount. Let me explain. What you do is you chisel (I'm using wooden dowel) out the ends of the motor arms and you just machine-screw the included motor mount hardware directly to the flattened surface. If you do it exactly right, its structural integrity is sufficient.
What I do for a quick test is grab two of the arms and quickly rotate it with my arms around its central axis; if the arms hold, you have a winner.
Having some success with "struts" (or in this case "cabling" or "ties"). There was a bit of play in the joints under flight stresses so I strung some picture-hanging wire in tension to the landing gear and the motor arms (geometrically) that took that play right out of it and translated it linearly across the entire drone frame.
Important Design Change: I was going to develop a 3-Phase Motor ESC from first principles with regular MOSFETS but found it a lot cheaper to use off the shelf SimonK firmware ESC's (30A version) available on Ebay for about $5 (free shipping) and still using a "glue"-type PIC16Fx MCU to translate a serial commands to the appropriate software-generated PWM that the commercial-available hobby ESC's use to control the motors. This little piece of advice might save you weeks in development.
Implementation Notes: OK, if you mount your ESC's on the drone arms, I've figured out that you should have the boards in your ESC's vertical so when the arms flex that the board's strongest plane is parallel with the plane of the arm's flexion so you don't crack it. in other words, I mount my ESC's on the sides of the arm shaft.
The worst it can do with a bad bend is crack the EVA resin that I applied originally to tack the ESC's to the arms; the ESC would be OK.
I actually afix them with electricians tape so there is a little give in there. They aren't a structural component but they are attached to a structural component so you have to think a little about the stresses and strains, just organically: am I going to snap my little board in two doing....this.
Counter-Rotation on Props
To do counter rotation (having just set up one arm with two motors, opposing each other) is if you think about it logically, both have to be spinning the same way, so when you turn the one motor upside down, its counter rotating. Counter-intuitive I know but think about it and you'll see I'm right.
Make a fist with your thumb pointing up; which way are your fingers curling? Now turn your thumb down and you'll notice that they are curling the opposite rotational way.
To do counter rotation, you need a matched pair of counter-rotating props. The motors are opposing in this design, but both prop thrusts need to go in the same direction (wind down) ergo you need a pair of counter-rotating props that are really cheap on Ebay.
A real prop you can easily figure out which way it thrusts. The higher edge of the prop should be the leading edge when rotating.
REALITY CHECKS!
OK, here are some issues that have come up, and I'll talk about them here so you don't have to make the same mistakes (mistakes take a lot of time and money to correct with this kind of stuff).
Firstly, let me reiterate what I'm trying to do here. At the end of constructing my physical drone, I want to be dropped right at shell access. What I mean by this is When I mount the sensors (the IMU, inertial measurement unit and the GPS) and connect them via UART (finally found some that communicate serial not SPI/I2C) to the host controller (a Beagleboard Black Wireless in this case), that I will shift the focus of the project to software. You can figure out what I mean by this.
I patch in via SSH over Wifi (BBBW's are set up to do this right out of the box btw) and I program in C to access the UARTs. I really wish some of these smaller operators would stop hording their information...makes a trivial task almost impossible to get done. Have a datasheet for your product...a datasheet is not an afterthought either...its a critical aspect of any serious electronic component. If you want to see what one looks like go to mouser.com and just look at a few. Basically, it has to be useful (PIC manuals are great examples, they just enumerate every function with some focus on points that might be confusing).
Why a Beagleboard?
The original reason why I selected a BBBW is because it has multiple UARTS so its basically got what is called an "octupus cable" already on it. No mutidropping and the nightmares associated with that, you basically just talk (bidirectionally) with COM1-COM7. Shave weeks/months off of building time just by selecting an SBC that has a specific feature.
Wiring motors
Ok, the problem I'm having right now is that my motor wiring is wrong. I thought it out, with the assumption that wire color is consistent so long as its the same model. I was wrong (wow huh). The fix is basic: you just have to wire up every motor to its ESC manually and individually, rotating the wires until you get a good strong rotation (there are weak rotation modes btw, so bring it up to full power!).
To wire the power trains (buses?) on it, I use harvested extension cord about the same gauge as the power lines coming off the ESC. This should insure that all of the stuff I'm adding runs as cool as a cucumber. An ounce of prevention is worth a pound of cure. I decided to use "taps" where you wrap the bus around the baseboard, wiring it like how they wire networks with "rat belts". To do a tap you carefully with an exacto blade go around the wire and then cut longitudinally and remove a short length of insulation from the wire in question.
A lot of design changes here (that I will post) a major one being (and write this down) finding IMUs and GPS modules that communicate via basic serial. Here are the part codes that after MUCH research finally found: the Razor IMU (search on "arduino bootloader ATMEGA" for the "correct" unit) and the "NEO8MN" (has....HAS...to be the N variant!) I'll try to update the project parts list with these new part numbers. The main difference is they communicate via serial NOT SPI/I2C.
A major issue: to build one of these things, you have to understand that you batteries have to conduct AMPS of electricity in order for it to fly. The question in my mind is the electrical/physics realities of doing this...those little battery casings need to be conducting entire amps of current (electrons) in order for sufficient power to be available to the motor/props to lift it skyward. We've all seen drones flying, so we know its possible, but I'm thinking it might be an intrinsically difficult task, one that can (1) either be solved by a lot of trial and error and burned-out parts or (2) careful, mathematical analysis of the volts/amps of each component in the SPECIFIC design. In other words, the answer might be you ammeter becomes you new best friend.
Infinite Range: I'm thinking with the integration of a 3/4G data modem with unlimited data service (found one local, cool huh? :D $50/month I think with Boost Mobile [let me check on that]), that I can not just socket over the required flight data (via TCP/IP) AND I can pass real-time video back (I wasn't going to do this as I could only find limited data plans). As long as it fly's in cellular service areas, this idea might be a very cool part of the project. I'm getting the parts for this soon.
Addendum: talked to the guy at Boost. They don't use SIM cards anymore they just pump in the serial number of the modem in question (should be able to get an inexpensive one off of Ebay). However even their unlimited plan is only 50Gb/month at FULL-SPEED and then throttles down (as per network load) after you pass this point). Ok, Good news: how much video is that? The numeric data you pass is diddly, its the video that can be prohibitive. Regular video is 2/Mb/minute. M-JPEG (taking into consideration its inefficency) still takes 70 hours of continous video to exhaust a 50 Gb plan.
To reduce the complexity of integration, I think what I'm going to do is use a wifi-router style modem and talk to it over wi-fi. However I might still do the USB connection eventually.
Good News: got the output stage of my PIC16F88 S3 8-channel PWM controller coded. I got a virus in my microburn software so I got a new PICKIT3 programmer coming in, so I can burn it and test it. Should have the serial input (UART) on it later on today but won't be able to test/debug until I get my new programmer.
Thoughts on Battery Recharging: lots of ideas on this but the ones that seem practical are the following (listed in order of practicality):
(1) Just put two bolts on the base board that as soon as it lands I just hook-up a automotive-style charger and just wait. I pull the jumper-cable style clips off it and its ready to fly again.
(2) Make the batteries into a pack that can just be physically changed.
(3) (kidding) wire up a bunch of microwave-oven magnetrons all facing up and line the bottom of the drone with microwave diodes and just have it hover above them at 50 m until periodic ADV voltage checks say the batteries are full again.
Right now I just cut the rat belts on the battery holders, pop them out and recharge manually in a Harbour Freight NmH charger.
----
OK, got something together for you guys and gals. I'm developing a PIC chip (called an S3,a mod'ed PIC16F88) that has 8 hobby servo/ESC PWM channels on it controllable by UART/serial. If you want to see what it does, download this:
It has the stock VB.NET installer on it so it should come off clean. It controls up to 8 servos/escs and up to 5 LEDs. I'll post the source code for it somewhere AND the firmware for the S3 itself.
The coolest part about it is you can have multiple instances of the S3 program up at the same time (as many comm ports as you have), driving multiple chips, so you can control up to dozens of servos and stuff at the same time. Just hit Hide Configuration (after setting the comm port being used and you have a ton of sliders on a basic control panel.
OK, here's the video on it (it will have full servo-driven pan/tilt, 1 Mpx, hopefully telephoto. WHy am I not using the Adafruit version? You can't get video serial on theirs (only 1 frame/sec) but if you use a uCAM III their UART goes up to 3.2 Mbps and if you read with a BBBW it also has a 3.2 Mbps maximum on its serial. There are reasons why I'm evaluating it.
Here is how it looks on my PC (going to push the MJPG over sockets over 3/4G if I can).
Fact: just finished figuring out how less efficient MJPG (basically a stream of individual JPG files) is from like MPEG/MP4. Guess what....its only 6x less efficient (and I think I can get it down to about 2x by turning down the resolution). So, managable.
OK, to control my seros and ESC's I have a software-based PWM chip (a programmed PIC16F88) that will convert serial to PWM (with 5 extra TTL out channels). Here is a little video of how I program the chip.
Example of successful GPS data stream from NEO GPS module AND decode to latitude and longitude.
About the REAL reliability of iMEMs gyros and accelerometers (scroll down to ArduIMU video): | https://www.hackster.io/woody-stanford/uav-drone-project-eb1dc9 | CC-MAIN-2018-26 | refinedweb | 4,206 | 66.47 |
howtoporttodragonfly
How to port to DragonFly
- howtoporttodragonfly
- How to port to DragonFly
- Preparing the workspace
- Working with GNU developper tools
- Working with imake and xmkmf
- Editing the code
- Patching for pkgsrc
- Submitting a package in pkgsrc
- References
- Notes
This page is intended to provide some useful information to help you port applications to your DragonFly operating system. This system uses the NetBSD Packages Collection (pkgsrc) to manage application ports. A basic knowledge of this collection, of GNU developer tools, and possibly of imake, is required to successfully complete this task. For more information, references to these tools are available in the References section below.
This paper will focus on applications made for the pkgsrc collection. But many information will also be useful to those who simply wants to port applications to DragonFly without specific context.
Preparing the workspace
To port software with pkgsrc, two approaches are possible:
- Add new packages in pkgsrc.
- Modify an existing package.
In both cases, the source code of the package must be extracted into a work directory before starting the porting work. When a source file of the original application is modified, its original will be stored with the extension “. orig”. The utility mkpatches, that we'll see later in this document, will use the original files as a basis to create patches. Please note that the version with the extension .orig must always be that of the original distribution package.
Add a new package
The page Creating a new pkgsrc package from scratch gives the main steps required to add a new application to the pkgsrc collection.This tutorial is very well done and summarizes the main steps required to create a new package.
Modify an existing package
Suppose you would want to modify the application foo/bar from the pkgsrc collection. You then start by visiting its dedicated directory in pkgsrc, and by running the following command:
cd /usr/pkgsrc/foo/bar bmake patch
This will restore the changes previously made, which will be a good starting point for the porting. The bmake command could also be used without options to attempt an immediate first compilation. But you might have to change some files first, like the GNU scripts for example.
Suppose that the bar application of the collection is version 1.0. Let's go into the newly extracted source directory located right here[1]:
cd /usr/pkgobj/bootstrap/work/pkgsrc/foo/bar/work/bar-1.0
Voilà, your ready! The porting of the source code can now begin.
Working with GNU developper tools
If the application to port uses GNU tools like autoconf, GNU scripts shall be the first thing to look at. We'll first make a brief introduction to these tools. Then we will cover some modifications required to adapt their scripts to DragonFly.
Introduction to GNU tools
GNU provides a series of very popular tools which aims at helping C and C++ cross-platform application development. These tools are Autoconf, Automake and Libtool. They can operate independently, but they can also be highly integrated.
Their use in common might not be necessarily easy to master. The GNU Autobook (see the References section), available online for free, can help you find your way with it.
Briefly, the following files are defined by the developers:
- Makefile.am and config.h (automake), to generate the Makefile.in files.
- configure.ac (autoconf), to generate the configure script.
The aclocal tool generates the file aclocal.m4, which contains dependencies for the configure script. It is usually run before other tools. The autoconf generated configure script will in turn generate the makefiles from other generated files, like Makefile.in. So, there are several levels of file generation.
The first level generation is called bootstrapping (see the Bootstrap section). This process generates a set of files and directories right in the current build directory. For example, in DragonFly 2.8, the files generated by default are:
autom4te.cache/ .deps/ configure missing Makefile.in install-sh depcomp config.sub config.guess aclocal.m4
When porting an application, it is usually not necessary to regenerate these files and directories. You simply need to adjust the Makefile and configure files. But occasionally, you'll have no other choice. The following sections attempt to give you some leads to follow.
Editing configure, config.guess and config.sub
When the script configure is run, it creates settings suitable for the host platform. In cases where DragonFly is not on the list of supported systems, we'll need to add it.
By default, the command uname -m is used to detect the host platform. In our case, it will return the DragonFly label. The host_os variable will be defined after this value and will take the form “dragonflyX” (where X is the version of the host system, ex. 2.8). The configure script then uses this information to define OSARCH. A good place to start the port would be to add the following definition:
case "${host_os}" in […] dragonfly*) OSARCH=DragonFly ;;
The host_os variable is normally generated by running config.guess. It can also be explicitly defined by using the “build” option of the configure script.
Then you can continue by searching through the script to locate:
- Other occurrences of case "${host_os}".
- Occurrences of the “BSD” string.
- Changes made to other BSD systems, like NetBSD for example. If the software has already been brought to one of these systems, and it will often be the case, this will allow you to quickly find many of the modifications required for DragonFly. Of course this is not to take for granted.
Ideally, you would have to review the entire configure script to make sure that all necessary changes were made. Some will prefer to go by trial and error, by first changing what's obvious and then actively testing the results.
The config.guess file may also need to be reviewed. But the latest versions of this script already include DragonFly in its supported systems. Running it on DragonFly 2.8, for example, returns i386-unknown-dragonfly2.8.
The config.sub file contains a set of subroutines used by the configure script. If this file is not a symlink, it may also need to be changed. But the DragonFly system is already included by default in it, but possibly lightly implemented.
One special thing to consider in the review of GNU scripts for DragonFly is that some configuration scripts tries to collect all BSD like systems like this:
case "$uname" in *BSD*) […] esac
In our case, uname will contain the string DragonFly, without the “BSD” suffix. This pattern should be searched to explicitly add DragonFly after the sequence:
case "$uname" in *BSD*|DragonFly) […] esac
You might want to solve this problem by forcing variables like host_os and OSARCH to “DragonflyBSD”. But it may be more difficult than simply searching for occurrences of *BSD and applying the necessary changes.
Regenerate GNU scripts
There may be cases where you would want to regenerate the GNU scripts rather than just port them. Some basic knowledge are required to perform this task. For those who do not have this knowledge, the GNU Autobook is an excellent reference for it. We'll briefly cover the topic here, by giving some basic information and clarifying some difficulties.
Autoconf compatibility
As mentioned before, autoconf is used to regenerate the configure script. Operating systems are not bundled with the same version of it. We must therefore see to use the correct version of GNU tools if we want to regenerate completelly an application's build scripts.
The makefile.am file
This file contains macros needed to generate the Makefile.in files, which in turn generates the makefiles. The automake tool defines a syntax of its own. It can generate complex and portable makefiles from a small makefile.am script.
The configure.ac or configure.in file
This file contains macros necessary to generate the configure script. These are m4 macros, which generates a shell script when you run autoconf. Required changes to this file are similar to those proposed for the configure script itself.
Bootstrap
The procedure aiming at regenerating all the GNU build files is called bootstrap. Some ports are bundled with a script of this name. It runs all the required tools in the correct order. The typical case will define the following sequence:
aclocal \ && automake --gnu --add-missing \ && autoconf
Compiling with FreeBSD as build target
As DragonFly is a FreeBSD fork, many are building it's ports by using the FreeBSD build target. Although it sometimes works well, like for PostgreSQL, this is risky because compile compatibility with FreeBSD is not among DragonFly's goals. You must therefore expect that this shortcut is less and less usable over time, or can possibly generate badly optimized binaries.
In cases where this method still works, they may still be some changes required in GNU scripts. As mentioned earlier, scripts sometime tries to catch all the BSD variants by using the “*BSD” string. In these cases, without minimal changes, the final results might not entirely be what was anticipated for FreeBSD.
Working with imake and xmkmf
Arguably, imake and xmkmf could be seen as the old building tools for those wishing to develop portable applications. These tools were designed for the venerable X Window System and its applications, themselves brought to many operating systems, including UNIX®, the BSD family, GNU/Linux, AIX®, Solaris® and Windows®. This section will cover briefly these build tools, because they have been gradually discarded in favor of the GNU build tools.
X Window normally comes with a lib/X11/config directory, which contains macro files dedicated to xmkmf. This directory contains files with extensions *.cf, *.tmpl and *.rules.
- *.cf files contains settings specific to it's hosts. X Window comes normally with a DragonFly.cf file.
- *. rules files defines macros that implement the rules that can be used in Imakefile files.
- *. tmpl files contains templates that defines compile options.
As for GNU automake, these macros are used to generate makefiles for the operating system that hosts the X Window System. The normal xmkmf's procedure is to define Imakefile files, then to build the makefiles by running xmkmf. This tool will in turn call imake, which is a C preprocessor interface for the make tool. The tool xmkmf will invoke imake with the appropriate arguments for DragonFly.
When xmkmf is executed with the -a option, it automatically runs make Makefiles, make includes and make depends.
Here's a simple Imakefile example:
BINDIR = /usr/local/bin .SUFFIXES: cpp CXXEXTRA_INCLUDES = -I. -I.. -I/usr/pkg/include LOCAL_LIBRARIES = -L/usr/pkg/lib EXTRA_LIBRARIES = $(XMLIB) $(XTOOLLIB) $(XLIB) $(XMULIB) $(XPMLIB) CXXDEBUGFLAGS = -O2 -g HEADERS = bar.h SRCS = bar.cpp OBJS = bar.o ComplexCplusplusProgramTarget(bar)
Note the similarity of Imakefile with makefiles. You can see the Imakefile as makefiles with macros.
In the example above, the ComplexCplusplusProgramTarget macro defines a rule to generate the C++ "bar" binary, according to the parameters that precedes it. Notably, the parameter .SUFFIXES indicates that the source files will have the extension “.cpp”.
In DragonFly, imake and xmkmf are available by installing the package devel/imake. For more information, links are given in the References section.
Editing the code
To tailor a program to DragonFly, you can either use the definition used to identify the DragonFly system or use the definitions generated by the GNU tools.
The DragonFly definition
Idealy, an application's code would be reviewed in full to ensure that it uses the right functions and algorithms for DragonFly.
In fact, the method mostly used is probably to try to compile, to correct the code when it does not, and to test the final results. When the code is adapted to DragonFly, it may look like this:
#if defined(__DragonFly__) /* code specific to DragonFly */ #elif /* code for other systems */ #endif
An initial change might be to scan the definitions for the other BSD variants, as FreeBSD or NetBSD. We often see this kind of syntax in the code tree:
#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__) || defined(__Darwin__) || defined(__DragonFly__) #include <fcntl.h> #include <net/route.h> #endif
GNU Tools definition
Porting code can focus on identifying available functions rather than on identifying host systems. A typical case would be the bcopy function, normally used instead of strcpy on BSD derivatives. Rather than referring to the host system, the code might look like this (from the GNU Autobook[2]):
#if !HAVE_STRCPY # if HAVE_BCOPY # define strcpy(dest, src) bcopy (src, dest, 1 + strlen (src)) # else /* !HAVE_BCOPY */ error no strcpy or bcopy # endif /* HAVE_BCOPY */ #endif /* HAVE_STRCPY */
This guarantees a form of compatibility between systems at the code level. Unfortunately, this approach is moderately used, which generates more work for porters.
The BSD4.4 Heritage
Many developers are unaware that a BSD definition was created for BSD4.4 and it's derivatives, as DragonFly, FreeBSD and NetBSD. This definition is used like this (from pkgsrc developer's guide[3]):
#include <sys/param.h> #if (defined(BSD) && BSD >= 199306) /* BSD-specific code goes here */ #else /* non-BSD-specific code goes here */ #endif
The NetBSD group, which maintains pkgsrc, recommends the use of this definition. In fact, it is rarely used. In the specific case of DragonFly, this definition no longer exists. Therefore it must not be used.
Patching for pkgsrc
Ideally, a port to DragonFly shall always be at the project level. But realistically, it is faster and easier to modify applications at the pkgsrc level.
pkgsrc contains the patches and some basic information needed to build a package in the package directory (by default
/usr/pkgsrc/<category>/<package>).
The
pkgtools/pkgdiff suite of tools helps with creating and updating patches to a package. Extract the source code into the work directory (by default
/usr/pkgobj/bootstrap/work/pkgsrc/<category>/<package>/work/) by invoking
bmake patch
from the package directory. This fetches the source code if necessary, extracts it and applies any existing pkgsrc patch, saving the unpatched files with a
.orig extension in the work directory.
To create a new patch, save a copy of the original file with that same
.orig extension. If it exists, just keep it – do not overwrite or change
.orig files or your patches will not apply later on! You may choose to use
pkgvi from the
pkgdiff suite to automate this.
You can preview the patches using
pkgdiff <file>. To generate all patches invoke
mkpatches
from the package directory (not the work directory!) The new patches will be saved in the
patches/ directory along with backups of the previous patchset. When you are content with the generated patches commit them and update the patch checksums:
mkpatches -c bmake makepatchsum
You may also revert to the old patches by calling
mkpatches -r.
Now clean up and try to rebuild your package:
bmake clean bmake
If you have any other changes to add, you can remove the package again and repeat these steps.
Submitting a package in pkgsrc
If you plan to port a program to DragonFly and wish to share your work, pkgsrc is ideal for you. The first thing to do is to inquire whether the application is already in the collection. If not, you're free to add your work and be registered as a port maintainer.
This section will attempt to give you some minimal guidance on submitting changes in the pkgsrc collection. This information is incomplete, please consult the page Submitting and Commiting carefully before submitting anything for real.
A source code package can be submitted with the gtk-send-pr package (pr=Problem Report), or by visiting the page NetBSD Problem Report. The indications given by the pkgsrc developer's guide in connection with this tool are summarized here:
.” ̶
It is also possible to import new packages in pkgsrc-wip. See for more information.
References
The pkgsrc developer's guide
GNU “autobook”, or “GNU AUTOCONF, AUTOMAKE, AND LIBTOOLS”, from Gary V. Vaughan | http://www.dragonflybsd.org/docs/howtos/howtoporttodragonfly/ | CC-MAIN-2016-30 | refinedweb | 2,643 | 56.55 |
Currently, I have 4 controllers/categories on my UI ready and working.
However, I have 7 controllers in my folder. The expectation is to have them show up, along with the others, in the screenshot above. I can't figure out why they aren't showing. Even the JSON view isn't picking them up. I've built and rebuilt the project but to no avail. Any ideas?
Solved!
Go to Solution.
I believe the issue was that inside the controller, the name of the class didn't in with the word "Controller". It ended with DTO. I removed that and it showed up.
I went from
public class OrderControllerDTO : ApiController
to
public class OrderController : ApiController
View solution in original post | https://community.smartbear.com/t5/Swagger-Open-Source-Tools/Adding-New-Controllers-Not-Showing-in-UI/m-p/195397/highlight/true | CC-MAIN-2020-10 | refinedweb | 121 | 77.74 |
- 05 Apr, 2017 7 commits
Fix wiki commit message Closes #20389 See merge request !10464
Do not set closed_at to nil when issue is reopened See merge request !10453
Handle SSH keys that have multiple spaces between each marker See merge request !10466
[ci skip]
-
- blackst0ne authored
- 04 Apr, 2017 33 commits
Backport changes of ee fix for transient failure in environments spec See merge request !10459
- Clement Ho authored
Use a less memory-intensive sourcemap when running in CI See merge request !10460
Fix a transient spec failure in "Admin Health Check" feature spec Closes #30461 See merge request !10454
Inlude the password_automatically_check param as permitted config in the user create_service Closes #30335 See merge request !10386
Use `sign_in` instead of `login_as` when we're not testing login flow See merge request !10296
- Alfredo Sumaran authored
Resolve "Expandable folders for environments" Closes #28732 See merge request !10290
Signed-off-by: Rémy Coutable <remy@rymai.me>
- Felipe Artur authored
- Alfredo Sumaran authored
Prevent clicking on disabled button Closes #29432 See merge request !9931
- Clement Ho authored
Refactor test_utils bundle See merge request !10437
- Achilleas Pipinellis authored
Fetch the default number of commits (20) for docs:check jobs Closes #30451 See merge request !10449
- DJ Mountney authored
This param is passed to service in two places, one is in the build_user for non ldap oauth users. And the other is in the initial production admin user seed data. Without this change, when setting up GitLab in a production environment, you were not being given the option of setting the root password on initial setup in the UI.
This is a proof of concept for gitlab-org/gitlab-ce#30196. The actual login procedure is well-tested by `spec/features/login_spec.rb`, and we don't gain anything by also thoroughly testing it here, in our second-slowest feature spec. In fact, it only slows us down! So instead we use `sign_in` from the `Devise::Test::IntegrationHelpers` module, which just sets the current user at the Warden level. This drastically reduces the "setup" phase of every test in this file. A non-scientific test run saw this drop from 633 to 231 seconds.
Fixes milestone/merge_request API endpoint to really scope the results See merge request !10369
Fix issues importing forked projects Closes #26184 and #29380 See merge request !9102
Fixed issue boards having a vertical scrollbar Closes #30209 See merge request !10312
Fix blob highlighting in search Closes #30400 See merge request !10420
Pass GitalyAddress to workhorse See merge request !10447
Signed-off-by: Rémy Coutable <remy@rymai.me>
Improved Environments Metrics UX Closes #29227 See merge request !9946
Delete unused icons; add empty lines to svg files See merge request !10415
Disable support for Gitaly PostReceivePack See merge request !10444
Bump Gitaly server version to 0.5.0 See merge request !10446
list recommended version of PostgreSQL See merge request !10190
- Ben Bodenmiller authored
- Douwe Maan authored
Adds git terminal prompt env var to application rb Closes #24187 See merge request !10372
- Jacob Vosmaer authored
remove useless queries with false conditions (e.g 1=0) Closes #29492 See merge request !10141
- Kamil Trzciński authored
Don't autofill kubernetes namespace See merge request !10438 | https://foss.heptapod.net/heptapod/heptapod/-/commits/bdcd23b297a0234afcb8aae32bc215e827da09fc | CC-MAIN-2022-21 | refinedweb | 533 | 66.84 |
Themes are a sure way to add a vibrant touch to your apps. With Flutter's theming support, you can customize literally everything with only a few lines of code. What if you want to let your users change the themes on the fly though? At the very least, every app should support light and dark mode.
One option is to hack something together using StatefulWidgets. Another, and better option, is to use the flutter_bloc library to create an extensible and manageable theme switching framework.
Project Setup
Before we get started building the UI, let's add a few dependencies. We obviously need flutter_bloc and we're also going to add equatable - this is optional, but should we need value equality, this package will do the repetitive work for us.
pubspec.yaml
...
dependencies:
  flutter:
    sdk: flutter
  flutter_bloc: ^0.20.0
  equatable: ^0.4.0
...
Let's also create the basic scaffolding of this project. The most important part is the global/theme folder - that's where the ThemeBloc and other related code will be located. After all, themes should be applied globally.
The project structure
Making Custom Themes
Before doing anything else, we first have to decide on the themes our app will use. Let's keep it simple with only 4 themes - green and blue, both with light and dark variants. Create a new file app_themes.dart inside the theme folder.
Flutter uses a class ThemeData to, well, store theme data. Since we want to configure 4 distinct instances of ThemeData, we will need a simple way to access them. The best option is to use a map. We could use strings for the keys...
app_themes.dart
final appThemeData = {
  "Green Light": ThemeData(
    brightness: Brightness.light,
    primaryColor: Colors.green,
  ),
  ...
};
As you can imagine, as soon as we add multiple themes, strings will become cumbersome to use. Instead, let's create an enum AppTheme. The final code will look like this:
app_themes.dart
import 'package:flutter/material.dart';

enum AppTheme {
  GreenLight,
  GreenDark,
  BlueLight,
  BlueDark,
}

final appThemeData = {
  AppTheme.GreenLight: ThemeData(
    brightness: Brightness.light,
    primaryColor: Colors.green,
  ),
  AppTheme.GreenDark: ThemeData(
    brightness: Brightness.dark,
    primaryColor: Colors.green[700],
  ),
  AppTheme.BlueLight: ThemeData(
    brightness: Brightness.light,
    primaryColor: Colors.blue,
  ),
  AppTheme.BlueDark: ThemeData(
    brightness: Brightness.dark,
    primaryColor: Colors.blue[700],
  ),
};
Adding the Bloc
With our custom themes created, we can move on to creating a way for the UI to initiate and also listen to theme changes. This will all happen inside a ThemeBloc, which will receive ThemeChanged events, figure out which theme should be displayed, and then output a ThemeState containing the proper ThemeData pulled from the Map created above.
If you're new to the Bloc library or the reactive Bloc pattern in general, check out the following tutorial.
While we could create all the files and classes manually, there's a handy extension for both VS Code and IntelliJ which will spare us some time. At least in VS Code, right click on the theme folder and select Bloc: New Bloc from the menu. The name should be "theme" and select "yes" to use the equatable package. This will generate 4 files. One of them, called "bloc", is a barrel file which just exports all the other files.
ThemeEvent
There will be only one event, ThemeChanged, which passes the selected AppTheme enum value to the Bloc.
theme_event.dart
import 'package:equatable/equatable.dart';
import 'package:meta/meta.dart';

import '../app_themes.dart';

abstract class ThemeEvent extends Equatable {
  // Passing class fields in a list to the Equatable super class
  ThemeEvent([List props = const []]) : super(props);
}

class ThemeChanged extends ThemeEvent {
  final AppTheme theme;

  ThemeChanged({
    this.theme,
  }) : super([theme]);
}
ThemeState
Similar to the event, there will be only a single ThemeState. We will actually make the generated class concrete (not abstract), instead of creating a new subclass. There can logically be only one theme in the app. On the other hand, there can be multiple events which cause the theme to change.
This state will hold a ThemeData object which can be used by the MaterialApp.
theme_state.dart
import 'package:equatable/equatable.dart';
import 'package:flutter/material.dart';
import 'package:meta/meta.dart';

class ThemeState extends Equatable {
  final ThemeData themeData;

  ThemeState({
    this.themeData,
  }) : super([themeData]);
}
ThemeBloc
Bloc is the glue which connects events and states together with logic. Because of the way we've set up the app theme data, ThemeBloc will take up only a few simple lines of code.
theme_bloc.dart
import 'dart:async';
import 'package:bloc/bloc.dart';

import '../app_themes.dart';
import './bloc.dart';

class ThemeBloc extends Bloc<ThemeEvent, ThemeState> {
  ThemeState get initialState =>
      // Everything is accessible from the appThemeData Map.
      ThemeState(themeData: appThemeData[AppTheme.GreenLight]);

  Stream<ThemeState> mapEventToState(
    ThemeEvent event,
  ) async* {
    if (event is ThemeChanged) {
      yield ThemeState(themeData: appThemeData[event.theme]);
    }
  }
}
Changing Themes
To apply a theme to the whole app, we have to change the theme property on the root MaterialApp. The ThemeBloc will also have to be available throughout the whole app. After all, we need to use its state in the aforementioned MaterialApp and also dispatch events from the PreferencePage, which we are yet to create.
Let's wrap the root widget of our app in a BlocProvider and, while we're at it, also add a BlocBuilder which will rebuild the UI on every state change. Since we're operating with the ThemeBloc, the whole UI will be rebuilt whenever a new ThemeState is outputted.
main.dart
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

import 'ui/global/theme/bloc/bloc.dart';
import 'ui/home/home_page.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  Widget build(BuildContext context) {
    return BlocProvider(
      builder: (context) => ThemeBloc(),
      child: BlocBuilder<ThemeBloc, ThemeState>(
        builder: _buildWithTheme,
      ),
    );
  }

  Widget _buildWithTheme(BuildContext context, ThemeState state) {
    return MaterialApp(
      title: 'Material App',
      home: HomePage(),
      theme: state.themeData,
    );
  }
}
Of course, the UI won't ever be rebuilt just yet. For that we need to dispatch the ThemeChanged event to the ThemeBloc. Users will select their preferred theme in the PreferencePage, but first, to make the UI a bit more realistic, let's add a dummy HomePage.
home_page.dart
import 'package:flutter/material.dart';

import '../preference/preference_page.dart';

class HomePage extends StatelessWidget {
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Home'),
        actions: <Widget>[
          IconButton(
            icon: Icon(Icons.settings),
            onPressed: () {
              // Navigate to the PreferencePage
              Navigator.of(context).push(MaterialPageRoute(
                builder: (context) => PreferencePage(),
              ));
            },
          )
        ],
      ),
      body: Center(
        child: Container(
          child: Text(
            'Home',
            style: Theme.of(context).textTheme.display1,
          ),
        ),
      ),
    );
  }
}
PreferencePage is where the ThemeChanged events will be dispatched. Again, because of the way we can access all of the app themes, implementing the UI will be as simple as accessing the appThemeData map in a ListView.
preference_page.dart
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:theme_switching_prep/ui/global/theme/app_themes.dart';
import 'package:theme_switching_prep/ui/global/theme/bloc/bloc.dart';

class PreferencePage extends StatelessWidget {
  const PreferencePage({Key key}) : super(key: key);

  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Preferences'),
      ),
      body: ListView.builder(
        padding: EdgeInsets.all(8),
        // Enums expose their values as a list - perfect for ListView
        itemCount: AppTheme.values.length,
        itemBuilder: (context, index) {
          // Store the theme for the current ListView item
          final itemAppTheme = AppTheme.values[index];
          return Card(
            // Style the cards with the to-be-selected theme colors
            color: appThemeData[itemAppTheme].primaryColor,
            child: ListTile(
              title: Text(
                itemAppTheme.toString(),
                // To show light text with the dark variants...
                style: appThemeData[itemAppTheme].textTheme.body1,
              ),
              onTap: () {
                // This will make the Bloc output a new ThemeState,
                // which will rebuild the UI because of the BlocBuilder in main.dart
                BlocProvider.of<ThemeBloc>(context)
                    .dispatch(ThemeChanged(theme: itemAppTheme));
              },
            ),
          );
        },
      ),
    );
  }
}
Now launch the app and test it out! Of course, once the app is closed, the selected theme won't be remembered on the next launch. You can use a persistence library of any sort, such as simple preferences, SEMBAST or even Moor. Just persist the selected theme inside the ThemeBloc whenever the ThemeChanged event is dispatched.
Conclusion
Changing themes is a feature which every production app should have. With flutter_bloc and some clever design decisions along the way, we managed to painlessly change the themes while keeping the code maintainable, organized and clean.
pls upload the code for simple preferences for saving the state . so that next time it loads on selected theme.
can’t get it done
I’d recommend you to check out an extension to the flutter_bloc library (from the same author) called hydrated_bloc. It can automatically persist the last state of a Bloc.
Very good as usual.
Please remake your “firebase-firestore-chat-app” in flutter, implementing flutter_bloc & Equatable as well. Thank you in advance.
Thank you! I’m certainly going to make a Firebase Flutter app course.
Hello Matej, cool tutorial! I followed with success but i now want to persist the selected theme for the user. I tried using SharedPreferences to store the selected theme as a string, but in the initialState getter of the ThemeBloc i can’t use async function to wait for SharedPreferences to return the stored value. Any pointers on how to go about this? Also I use enum_to_string package to convert the stored string theme value back to enum.
Thanks!
Hey Antonis! To be honest, I didn’t try the proposed solutions at the end of this tutorial myself 😅. You should take a look at the hydrated_bloc package – it persists the last bloc state automatically.
Hello all,
I have completed the theme persistence task. Check the code here.
Thanks.
Great use of HydratedBloc! 👌
Hi, thank you for this great tut!
I believe there’s an error in the URI in preference_page.dart
import ‘package:theme_switching_prep…
Should be =>
import ‘package:theme_switching_bloc…
(according to your YouTube video tutorial)
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Hi all,

Looks like gcc 4.3 has some rather inconvenient changes in the C++ FE, with the latest trunk. Let's see with an example:

[~]> cat test.cpp
#define foo bar
#define foo baz
[~]> g++ -c test.cpp
test.cpp:2:1: error: "foo" redefined
test.cpp:1:1: error: this is the location of the previous definition

I don't know the reasoning behind this change, but it breaks many C++ programs unless -fpermissive is used. Why? Because everybody loves to install their own config.h (Python, libmp4v2 being nice examples) which just carelessly #defines anything it's asked for with ifndef ... endif. Now flash back to the real world: this breaks any C++ application that uses Python, libmp4v2, libjpeg, and possibly many others. I think this is a really bad behaviour change, and I am not sure it's worth all the trouble. So I am asking: would the C++ FE maintainers agree to revert this into a warning, as in the C FE now? I welcome and value your comments, so please speak up.

Regards,
ismail

--
Never learn by your mistakes, if you do you may never dare to try again.
>>>> On 05.18 Bill Pringlemeir wrote:
>> Why don't the build scripts run a dummy file to determine where
>> the floating point registers should be placed?
>>
>> ... const int value = offsetof(struct task_struct,
>> thread.i387.fxsave) & 15; ...

>>>>> "JAM" == J A Magallon <jamagallon@able.es> writes:

JAM> That is not the problem. The problem is that the registers have
JAM> to lay in a defined way, transcribed to a C struct, and that
JAM> pgcc lays badly that struct.

Yes, I understand that. I was showing a way to find the value of padding needed to align the register store in the structure. Perhaps I should have shown a mod to asm/processor.h:

... /* floating point info */
#if PAD_SIZE /* not needed if gcc accepts zero size arrays? */
	unsigned char fpAlign[PAD_SIZE];
#endif
	union i387_union i387;
...

Before compiling the 'real source', the dummy file would be compiled with PAD_SIZE set to zero. Then objdump (or some other tool) can find out what the value is. Then when the task_struct is compiled in the kernel, PAD_SIZE is set to the appropriate value to align the structure.

I was describing a way to make things independent of the compiler layout of the structs. However, this complicates the build process, and people might not like the padding due to cache alignment details.

I am pretty sure what I am saying works... It might not be right though.

regards,
Bill Pringlemeir.
Computer Science Archive: Questions from March 14, 2009
- Anonymous asked:

// set price per item
public void setPricePerItem( double price )
{
pricePerItem = ( price < 0.0 ) ? 0.0 : price; // validate price
} // end method setPricePerItem
Based off this post: computer-science-topic-5-504359-cpi0.aspx
The code above does not make sense to me. It seems like backwards thinking, because to me it says: if the price is less than zero, set pricePerItem equal to price; else if price is greater than zero, price is set to zero and pricePerItem is equal to price. Is my thinking wrong?
1 answer
- Anonymous asked:

Hi, I need help in the following question:
Implement a simulation (using C or C++) of the fully-associative cache. Assume an index table with a single entry per line, with a 16 byte line and 1024 lines. Use a 16k entry collision table (notice you don't need a data array, nor pointers to it). Make it a write-back cache. Use true LRU replacement in the collision table. Also use collision chain reordering: whenever a memory location is accessed, and the access proves to be in the collision chain table, move the entry to the primary table and make the old primary table entry the first entry in the collision chain.
the output should be:
1. a count of all memory access, by type.
2. cache hit rate, per memory access type.
3. mean number of collision chain probes, per memory access type(an access that is successful in the primary table counts 0 )
4. number of cache line writebacks required.
Thank you so much

0 answers
- Anonymous asked:

for ( Payable currentPayable : payableObjects )
How does this code work? I know it has something to do with arrays.
1 answer
- invincible47 asked:

I have ASCII values. How can I convert them into their corresponding values? Example: 55 is an ASCII value and the corresponding value is 7.

4 answers
- Anonymous asked:

Hi, can anyone help me continue this code?

JOptionPane.showConfirmDialog(null, "Do you want to play a game?");

If the player said No or Cancel, then System.exit(0). Otherwise, the player can continue the game. I just need help with the cases where the player presses No and Cancel. Thanks!

1 answer
- Kyorochan asked:
Hello.
Please help me to solve this question.
1) What is the largest job that could be entered withoutcompacting the memory?
1 answer
- Kyorochan asked:
Please help me to solve this problem.
Why doesn't the operating system always use compact memory when anew job enters?
Thank you.
I'll give you lifesaver :)
1 answer
- invincible47 asked:

#include <iostream>
using namespace std;
int main()
{
    char ch[50];
cout<<"enter expression let(46+56)";
cin>>ch;
return 0;
}

1 answer
- Anonymous asked:

1 answer
- Anonymous asked:

Can you write this in BASIC language??? Will rate lifesaver.

// program to display the unique numbers entered
#include <iostream>
using std::cout;
using std::cin;
using std::endl;

int main()
{
    int i;
    int index = 0;
    int myNumber;
    int myArray[20];

    // Prompt user for input
    cout << "\n\nA program to display the unique numbers entered.\n";
    for (int count = 1; count <= 20; count++)
    {
        cout << "\nEnter number " << count << " : ";
        cin >> myNumber;
        if (myNumber < 10 || myNumber > 100)
        {
            cout << "\tInvalid range.";
            continue;
        }
        // Search in the array for the existence of the number
        for (i = 0; i < index; i++)
        {
            if (myArray[i] == myNumber)
                break;
        }
        if (i == index)
        {
            myArray[index] = myNumber;
            index++;
        }
        else
            cout << "\tDuplicate.";
    }
    // Output the unique numbers the user entered
    cout << "\n\nUnique numbers in the array are as follows.";
    for (i = 0; i < index; i++)
        cout << "\t" << myArray[i];
    cout << "\n\n\t";
    system("pause");
    return 0;
}

2 answers
- Praggy asked:

Hey guys, I know this is a computer science forum, and my assignment does not necessarily have anything to do with it. But I thought this was the best way to ask since there is no other forum regarding my assignment. OK, my assignment is: make an HTML form that submits a username and password to a PHP program that

1.) Validates: Screens/authorizes the form request by verifying whether the username and password are valid by checking the Information table for the password and username (in a MySQL program; I will take care of that).

2.) Retrieves: Uses the password to retrieve matching diary entries from the Diary table, as well as the associated email address from the Information table.

3.) Publishes: Sends the diary results to the browser, wrapped in an HTML table, and emails an orderly text version of the results to the user at the email address retrieved from the Information table.

Lifesaver to anyone who answers that.

0 answers
- Anonymous asked:

In this exercise, you code an application that displays the telephone extension corresponding to the name selected in a list box. The names and extensions are shown here:
Smith, Joe 3388
Adkari, Joel 3356
Lin, Sue 1111
Li, Vicky 2222
a. The items in the list should be sorted. Set the appropriateproperty.
b. Code the form's Load event procedure so that it adds the five names shown above to the xNamesListBox. Select the first name in the list.
c. Code the list box's SelectedValueChanged event procedure so thatit assigns the item selected in the xNamesListBox to a variable.The procedure then should use the Select Case statement to displaythe telephone extension corresponding to the name stored in thevariable.
0 answers
- Kpowerz asked:

Hi, can you please help me understand classes and objects in Java? I don't get anything about classes. I would really appreciate the help. Lifesaver will be given in minutes.

1 answer
- Anonymous asked:

Write a complete program that reads in 100 double values from standard input (keyboard) into a vector. Then output all 100 values to standard output (terminal) in the same order they were read in, with one space between each value.
Do not output anything other than the 100 values. That means do not prompt the user for input. Just assume the user knows to input 100 doubles.
In C++.
1 answer
- Anonymous asked:

Read the following description and implement the following friend or member functions for a Matrix class:

1) Make a member function with operator+. The operator+ member function should return a new matrix that is formed by adding the integer value X to each element.

2) Add a new operator+. The new member function should return a new matrix that is formed by adding the current matrix and X.

3) A new operator*. The friend function should return a new matrix, A, formed from multiplying matrix B and matrix C. If the matrices can be multiplied, return true along with the computed A. Otherwise return false and return Z as a matrix that contains all zeros in its cells.

0 answers
- Anonymous asked:

I'm using C. I want my user to be able to input real numbers like 2.5 for the wave height, so I set my variable as a double. The program compiles. I don't know why, but I don't enter my while loop when the user enters 0.0 or negative numbers. Here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double heightfwave = 0;
printf("enter a real number for the first wave height: ");
scanf(" %f", &heightfwave);
while (heightfwave <= 0)
{
printf("That is invalid. Please enter a real number greater than zero for the first wave height: ");
scanf("%f",&heightfwave);
}

Any help would be greatly appreciated. Please don't laugh, this is my first programming class.
Namespaces disambiguate elements with the same name from each other by assigning elements and attributes to URIs. Generally, all the elements from one XML application are assigned to one URI, and all the elements from a different XML application are assigned to a different URI. These URIs are called namespace names. The URIs partition the elements and attributes into disjoint sets. Elements with the same name but different URIs are different types. Elements with the same name and the same URIs are the same. Most of the time there's a one-to-one mapping between namespaces and XML applications, although a few applications use multiple namespaces to subdivide different parts of the application. For instance, XSL uses different namespaces for XSL Transformations (XSLT) and XSL Formatting Objects (XSL-FO).
Since URIs frequently contain characters such as /, %, and ~ that are not legal in XML names, short prefixes such as rdf and xsl stand in for them in element and attribute names. Each prefix is associated with a URI. Names whose prefixes are associated with the same URI are in the same namespace. Names whose prefixes are associated with different URIs are in different namespaces. Prefixed elements and attributes in namespaces have names that contain exactly one colon. They look like this:
rdf:description
xlink:type
xsl:template
Everything before the colon is called the prefix. Everything after the colon is called the local part. The complete name, including the colon, is called the qualified name, QName, or raw name. The prefix identifies the namespace to which the element or attribute belongs. The local part identifies the particular element or attribute within the namespace.
In a document that contains both SVG and MathML set elements, one could be an svg:set element, and the other could be a mathml:set element. Then there'd be no confusion between them. In an XSLT stylesheet that transforms documents into XSL formatting objects, the XSLT processor would recognize elements with the prefix xsl as XSLT instructions and elements with the prefix fo as literal result elements.
Prefixes may be composed from any legal XML name character except the colon. The three-letter prefix xml used for standard XML attributes such as xml:space, xml:lang, and xml:base is always bound to the URI http://www.w3.org/XML/1998/namespace and need not be explicitly declared. Other prefixes beginning with the three letters xml (in any combination of case) are reserved for use by XML and its related specifications. Otherwise, you're free to name your prefixes in any way that's convenient. One further restriction namespaces add to XML is that the local part may not contain any colons. In short, the only legal use of a colon in XML is to separate a namespace prefix from the local part in a qualified name.
Each prefix in a qualified name must be associated with a URI. For example, all XSLT elements are associated with the URI http://www.w3.org/1999/XSL/Transform. The customary prefix xsl is used in place of the longer URI.
You can't use the URI in the name directly. For one thing, the slashes in most URIs aren't legal characters in XML names. However, it's occasionally useful to refer to the full name without assuming a particular prefix. One convention used on many XML mailing lists and in XML documentation is to enclose the URI in curly braces and prefix it to the name. For example, the qualified name xsl:template might be written as the full name {http://www.w3.org/1999/XSL/Transform}template. Another convention is to append the local name to the namespace name after a sharp sign so that it becomes a URI fragment identifier. For example, http://www.w3.org/1999/XSL/Transform#template. However, both forms are only conveniences for communication among human beings when the URI is important but the prefix isn't. Neither an XML parser nor an XSLT processor will accept or understand the long forms.
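Although parsers won't accept the braced form inside documents, some programming APIs do use it when reporting names. As a quick illustration (using Python's standard-library ElementTree, which is an assumption of this sketch rather than part of this chapter's toolchain), parsed element names come back in exactly the {URI}local form described above:

```python
# Python's ElementTree reports qualified names in the braced
# {namespace-URI}local form described above.
import xml.etree.ElementTree as ET

doc = """<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/"/>
</xsl:stylesheet>"""

root = ET.fromstring(doc)
print(root.tag)     # {http://www.w3.org/1999/XSL/Transform}stylesheet
print(root[0].tag)  # {http://www.w3.org/1999/XSL/Transform}template
```

Notice that the xsl prefix itself is gone after parsing: the prefix was only a stand-in, and what the application actually sees is the namespace name plus the local part.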
Prefixes are bound to namespace URIs by attaching an xmlns:prefix attribute to the prefixed element or one of its ancestors. (The prefix should be replaced by the actual prefix used.) For example, the xmlns:rdf attribute of this rdf:RDF element binds the prefix rdf to the namespace URI http://www.w3.org/1999/02/22-rdf-syntax-ns#:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description about="...">
    <title> Impressionist Paintings </title>
    <creator> Elliotte Rusty Harold </creator>
    <description>
      A list of famous impressionist paintings organized
      by painter and date
    </description>
    <date>2000-08-22</date>
  </rdf:Description>
</rdf:RDF>
Bindings have scope within the element where they're declared and within its contents. The xmlns:rdf attribute declares the rdf prefix for the rdf:RDF element, as well as its descendant elements. An RDF processor will recognize rdf:RDF and rdf:Description as RDF elements because both have prefixes bound to the particular URI specified by the RDF specification. It will not consider the title , creator , description , and date elements to be RDF elements because they do not have prefixes bound to the URI.
The prefix can be declared in the topmost element that uses the prefix or in any ancestor thereof. This may be the root element of the document, or it may be an element at a lower level. For instance, the Dublin Core elements could be attached to the http://purl.org/dc/elements/1.1/ namespace by adding an xmlns:dc attribute to the rdf:Description element, as shown in Example 4-3, since all Dublin Core elements in this document appear inside a single rdf:Description element. In other documents that spread the elements out more, it might be more convenient to put the namespace declaration on the root element. If necessary, a single element can include multiple namespace declarations for different prefixes.
<?xml version="1.0" encoding="ISO-8859-1" standalone="yes"?>
<catalog>
  <painting>
    <title>Memory of the Garden at Etten</title>
    <artist>Vincent Van Gogh</artist>
    <date>November, 1888</date>
    <description>
      Two women look to the left. A third works in her garden.
    </description>
  </painting>
  <painting>
    <title>The Swing</title>
    <artist>Pierre-Auguste Renoir</artist>
    <date>1876</date>
    <description>
      A young girl on a swing. Two men and a toddler watch.
    </description>
  </painting>
  <!-- Many more paintings... -->
</catalog>
A DTD for this document can include different content specifications for the dc:description and description elements. A stylesheet can attach different styles to dc:title and title . Software that sorts the catalog by date can pay attention to the date elements and ignore the dc:date elements.
In this example, the elements without prefixes, such as catalog , painting , description , artist , and title , are not in any namespace. Furthermore, unprefixed attributes (such as the about attribute of rdf:Description in the previous example) are never in any namespace. Being an attribute of an element in the namespace is not sufficient to put the attribute in the namespace. The only way an attribute belongs to a namespace is if it has a declared prefix, like rdf:about .
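A namespace-aware parser makes this distinction visible. Here is a small sketch (again using Python's ElementTree purely as a demonstration aid, with a made-up status attribute placed alongside a prefixed rdf:about): the prefixed attribute is reported under its namespace URI, while the unprefixed attribute has a plain, namespace-less name:

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
doc = ('<rdf:RDF xmlns:rdf="%s">'
       '<rdf:Description rdf:about="urn:example" status="draft"/>'
       '</rdf:RDF>' % RDF)

desc = ET.fromstring(doc)[0]
# The prefixed attribute lands in the RDF namespace; the unprefixed one does not.
print(sorted(desc.attrib))
# ['status', '{http://www.w3.org/1999/02/22-rdf-syntax-ns#}about']
```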
In XML 1.1 there's one exception to the rule that unprefixed attributes are never in a namespace. In XML 1.1/Namespaces 1.1, the xmlns attribute is defined to be in the namespace . In XML 1.0/Namespaces 1.0, the xmlns attribute is not in any namespace.
It is possible to redefine a prefix within a document so that in one element the prefix refers to one namespace URI, while in another element it refers to a different namespace URI. In this case, the closest ancestor element that declares the prefix takes precedence. However, in most cases, redefining prefixes is a very bad idea that only leads to confusion and is not something you should actually do.
In XML 1.1, you can also "undeclare" a namespace by defining it as having an empty ("") value.
Many XML applications have customary prefixes. For example, SVG elements often use the prefix svg, and RDF elements often have the prefix rdf. However, these prefixes are simply conventions and can be changed based on necessity, convenience, or whim. Before a prefix can be used, it must be bound to a URI like http://www.w3.org/2000/svg or http://www.w3.org/1999/02/22-rdf-syntax-ns#. It is these URIs that are standardized, not the prefixes. The prefix can change as long as the URI stays the same. An RDF processor looks for the RDF URI, not any particular prefix. As long as nobody outside the w3.org domain uses namespace URIs in the w3.org domain, and as long as the W3C keeps a careful eye on what its people are using for namespaces, all conflicts can be avoided. There is no requirement that any document actually exist at the namespace URI. The W3C got tired of receiving broken-link reports for the namespace URIs in their specifications, so they added some simple pages at their namespace URIs. For more formal purposes that offer some hope of automated resolution and other features, you can place a Resource Directory Description Language (RDDL) document at the namespace URI. This possibility will be discussed further in Chapter 15.
Parsers compare namespace URIs on a character-by-character basis. If the URIs differ in even a single normally insignificant place, then they define separate namespaces. For instance, the following URLs all point to the same page:

http://www.w3.org/1999/02/22-rdf-syntax-ns#
http://www.W3.org/1999/02/22-rdf-syntax-ns#
http://www.w3.org:80/1999/02/22-rdf-syntax-ns#
http://www.w3.org/1999/02/22-rdf-syntax-ns

However, only the first is the correct namespace name for RDF. These four URLs identify four separate namespaces.
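The character-by-character rule is easy to check programmatically. In this sketch (Python's ElementTree, used here only for illustration), two documents whose namespace names differ by a single trailing character produce two different universal names:

```python
import xml.etree.ElementTree as ET

# Namespace names that differ by one character are different namespaces,
# even though a web browser would fetch the same page for both.
a = ET.fromstring('<d xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"/>')
b = ET.fromstring('<d xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns"/>')

print(a.tag)           # {http://www.w3.org/1999/02/22-rdf-syntax-ns#}d
print(a.tag == b.tag)  # False
```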
You often know that all the content of a particular element will come from a particular XML application. For instance, inside an SVG svg element, you're only likely to find other SVG elements. You can indicate that an unprefixed element and all its unprefixed descendant elements belong to a particular namespace by attaching an xmlns attribute with no prefix to the top element. For example:
<svg xmlns="http://www.w3.org/2000/svg" width="12cm" height="10cm">
  <ellipse rx="110" ry="130" />
  <rect x="4cm" y="1cm" width="3cm" height="6cm" />
</svg>
Here, although no elements have any prefixes, the svg, ellipse, and rect elements are in the http://www.w3.org/2000/svg namespace.
The attributes are a different story. Default namespaces only apply to elements, not to attributes. Thus, in the previous example, the width, height, rx, ry, x, and y attributes are not in any namespace.
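Both rules (a default namespace reaches unprefixed descendant elements but never attributes) can be seen in one parse. A minimal sketch, once more using Python's ElementTree as a demonstration aid:

```python
import xml.etree.ElementTree as ET

doc = """<svg xmlns="http://www.w3.org/2000/svg" width="12cm" height="10cm">
  <rect x="4cm" y="1cm" width="3cm" height="6cm"/>
</svg>"""

rect = ET.fromstring(doc)[0]
print(rect.tag)             # {http://www.w3.org/2000/svg}rect
print(sorted(rect.attrib))  # ['height', 'width', 'x', 'y'], no namespace URIs
```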
You can change the default namespace within a particular element by adding an xmlns attribute to the element. Example 4-4 is an XML document that initially sets the default namespace to http://www.w3.org/1999/xhtml for all the XHTML elements. This namespace declaration applies within most of the document. However, the svg element has an xmlns attribute that resets the default namespace to http://www.w3.org/2000/svg for itself and its content. The XLink information is included in attributes, however, so these must be placed in the XLink namespace using explicit prefixes.
<?xml version="1.0"?> <html xmlns="" xmlns: <head><title>Three Namespaces</title></head> <body> <h1 align="center">An Ellipse and a Rectangle</h1> <svg xmlns="" width="12cm" height="10cm"> <ellipse rx="110" ry="130" /> <rect x="4cm" y="1cm" width="3cm" height="6cm" /> </svg> <p xlink: More about ellipses </p> <p xlink: More about rectangles </p> <hr/> <p>Last Modified May 13, 2000</p> </body> </html>
The default namespace does not apply to any elements or attributes with prefixes. These still belong to whatever namespace their prefix is bound to. However, an unprefixed child element of a prefixed element still belongs to the default namespace. | https://flylib.com/books/en/1.132.1.43/1/ | CC-MAIN-2020-34 | refinedweb | 1,853 | 54.73 |
Rendering 3D scenes.
Using three.js with Replit requires a little extra setup, but your site will be online immediately, making it easy to share with your friends.
Creating a new project in Replit
Head over to Replit and create a new repl. Choose HTML, CSS, JS as your project type. Give this repl a name, like "3D rendering".
Importing three.js to the project
Open the
script.js file in your repl. We'll import three.js by referencing it from a content distribution network (CDN). There are other ways of using three.js in a project, but this one will get us up and running the quickest.
Add the following line to the script file to import three.js from the Skypack CDN:
import * as THREE from '[email protected]';
You'll notice that we're using the
import keyword. This is a way of importing a new JavaScript
module package. To make this work, we need to change the default
script tag in the
index.html file to the following:
<script type="module" src="script.js"></script>
Notice we added the
type=module attribute to the script tag, which allows us to use module features in our script.
Now we are ready to use three.js in our project.
Creating a basic scene
To start, we'll add some basic built-in 3D shapes to a scene. The main steps are:
- Create a renderer, and attach it to an element on the web page.
- Create a new
Scenecontainer to hold all our 3D objects. We'll pass this scene to the
rendererwhenever we want to draw it.
- Create the geometry, or points that make up the "frame" of the object we want to render.
- Create a material, which is color and texture, to cover the frame of the object.
- Add the geometry and material to a "mesh" object, which is a 3D object that can be rendered.
- Add the mesh to the scene.
- Add a camera to the scene, which determines what we see rendered.
That's quite a few steps, so let's start by creating a renderer. Add the following lines to the
script.js file:
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
This sets up a new
WebGL renderer. WebGL is a browser technology that gives web developers access to the graphics cards in computers. The
setSize method sets the size of the renderer output to the size of the browser window by using the width and height values from the
window object. This way our scene will take up the entire browser window.
Next we'll create a new
Scene container. Add the following line to the
script.js file:
const scene = new THREE.Scene();
Its' time to create some 3D objects. We'll start with a cube. To create a cube, we'll need to create a
Geometry object. Add the following line to the
script.js file:
const boxGeometry = new THREE.BoxGeometry(3,3,3);
This gives us the geometry of a cube. The
BoxGeometry constructor takes three arguments: the width, height, and depth of the cube. Three.js has more built-in geometries, so let's add another shape to the scene. This time we'll add a torus, or donut shape. They always look cool in 3D:
const torusGeometry = new THREE.TorusGeometry(10, 3, 16, 100);
We've got the geometry, or points, of the 3D objects. Now we need to create a material to cover them with. You can think of the material as the skin of the object. Add the following line to the
script.js file:
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
The MeshBasicMaterial is a simple material that covers the geometry with a solid color, in this case using the hexadecimal RGB code for pure green. You can also use a
Texture to cover the geometry with a texture.
The next step is combining the geometries and the material to make a mesh. Add the following lines to the
script.js file:
const cube = new THREE.Mesh(boxGeometry, material);
const torus = new THREE.Mesh(torusGeometry, material);
These meshes are what we'll add to the scene. We'll add the cube first, then the torus.
scene.add(cube);
scene.add(torus);
A camera determines what we see rendered, depending on where it is placed and where it is aimed. Add the following line to the
script.js file:
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 25;
We've got all the pieces we need to start rendering the scene. Now we just need to tell the renderer to draw the scene. Add the following line to the
script.js file:
renderer.render(scene, camera);
Now try running the code, by pushing the
Run button at the top of the Replit window. You should see your first scene, a green cube and torus:
Our scene doesn't look very "3D" yet, but we'll get there soon.
Animating a scene
Animating a scene or moving the camera can create more of a 3D effect. Let's add a little animation to our scene by rotating the torus and cube. In the
script.js file, replace
renderer.render(scene, camera); with the following lines:
function animate() {
torus.rotation.x += 0.01;
torus.rotation.y += 0.01;
cube.rotation.x += 0.01;
cube.rotation.y += 0.01;
renderer.render(scene, camera);
requestAnimationFrame(animate);
}
animate();
This creates a new function,
animate(), that will be called on every frame. We rotate the torus and cube by 0.01 radians around the objects' x and y axes using the
rotation property of each mesh. This is a handy method that saves us from calculating the rotation ourselves.
After we rotate the objects, we call the
renderer.render(scene, camera); method to draw the scene. This will cause the scene to be redrawn every frame, with the updated rotations.
The
requestAnimationFrame function is a built-in browser API call that will fire the
animate() function on the next frame. Each time
animate() is called,
requestAnimationFrame will call it again for the next frame. We call this function so that we can keep the animation running.
To kick off the animation for the first time, we call the
animate() function ourselves. Thereafter, it will keep itself running.
Press the "Run" button again and you should see the torus and cube rotating in the Replit window:
That looks a lot more 3D now!
Try changing up the material color and see what happens. You can also define different materials for the torus and cube, to make them look different.
Adding a model to the scene
We've created some basic 3D shapes programmatically. As you can imagine, building up a complex 3D world or character using this method would be very tedious. Fortunately, there are many 3D models available online, or perhaps you or a friend have played with making models in 3D animation applications like Blender. Three.js has a built-in loader to load these models into the scene.
To add the model loading functionality, we need to import it into our script. At the top of the
script.js file, just below the existing
import line, add the following:
import { GLTFLoader } from '[email protected]/examples/jsm/loaders/GLTFLoader.js';
This gives us the
GLTFLoader class, which we'll use to load the model. "glTF" stands for Graphics Language Transmission Format, and is widely used as a way to import and export 3D models from various 3D applications. All we need to know is that we can import any model that is saved in this format into our three.js applications. If you search for "free GLTF 3D models" on the web, you'll find a lot of sites where creators upload their models. Many are free to use in your projects, and some you need to pay for. We'll look for some free ones to experiment with.
Let's use this model of soda cans to start. Download the model, choosing the
glTF format. We've also included the model here, so you can download it easily.
Add the model to your repl by dragging the folder into the "Files" panel on the left.
We'll need to remove or comment out the previous code that drew the cube and torus. Remove the lines that create the cube and torus geometries, materials, and meshes, as well as the animation code. You should have only the following lines remaining:
import * as THREE from '[email protected]';
import { GLTFLoader } from '[email protected]/examples/jsm/loaders/GLTFLoader.js';
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
We need to add a few extra steps when loading a model. First, we need to create a new
GLTFLoader object. Add the following line to the
script.js file, just below the
scene variable line:
const loader = new GLTFLoader();
const fileName = './soda_cans/scene.gltf';
let model;
Here we've created a new loader object, and we've created a variable
fileName with the path to the soda can model we want to load. We also have a variable
model that will hold the loaded model, which we can manipulate later.
Now for the actual loading code. We'll use the
load method of the loader. Add the following lines to the
script.js file, below the code we've just added:
loader.load(fileName, function (gltf) {
model = gltf.scene;
scene.add(model);
}, undefined, function (e) {
console.error(e);
});
The
load method takes a few parameters:
- the path to the model,
- a callback function that will be called when the model is loaded,
- a loading progress callback function, and
- an error callback function that will be called if there is an error loading the model.
We supply the
undefined value for the progress callback, as we don't need it for this example, although it is a nice touch in a production application to give feedback to the user.
This alone won't always make a model visible on the screen. This is because a model may have no lighting, or the material may not be self-illuminating, or the model may be too large or too small to be visible from our default camera angle. To account for these possibilities, we'll include some helper functions to add lighting, adjust the model's position, and set the camera's position and angle.
Let's start with adding some lighting. Add the following function to the
script.js file:
function addLight() {
const light = new THREE.DirectionalLight(0xffffff, 4);
light.position.set(0.5, 0, 0.866);
camera.add(light);
}
This function will add a directional light with a white color to the scene, at a position slightly offset from the camera. We attach the light to the camera so that it is always shining at whatever the camera is looking at.
The second helper function adjusts the positions of the model and the camera. Add the following function to the
script.js file:
function adjustModelAndCamera() {
const box = new THREE.Box3().setFromObject(model);
const size = box.getSize(new THREE.Vector3()).length();
const center = box.getCenter(new THREE.Vector3());
model.position.x += (model.position.x - center.x);
model.position.y += (model.position.y - center.y);
model.position.z += (model.position.z - center.z);
camera.near = size / 100;
camera.far = size * 100;
camera.updateProjectionMatrix();
camera.position.copy(center);
camera.position.x += size / 0.2;
camera.position.y += size / 2;
camera.position.z += size / 100;
camera.lookAt(center);
}
This function works by finding the bounding box of the model. The bounding box is the smallest box that can contain all the vertices of the model. We can then use this box to set the camera's near and far clipping planes, and also to adjust the position of the model and the camera. Clipping planes are used to determine what is visible in the camera's view. The near plane is the closest distance from the model that the camera can "see". The far plane is the furthest distance the camera can "see". This is used to determine what is visible in the camera's view. We use
camera.updateProjectionMatrix to recalculate the camera's internal parameters.
We center the camera on the model, and then adjust the camera's position and angle to make sure the model is visible. We also point the camera to the center of the model using the
lookAt method.
Now let's call these new functions from the loader's callback function. We'll also render the scene after this setup. Update the
loader.load callback function as follows:
loader.load(fileName, function (gltf) {
model = gltf.scene;
scene.add(model);
addLight();
adjustModelAndCamera();
scene.add(camera);
renderer.render(scene, camera);
}, undefined, function (e) {
console.error(e);
});
You'll notice that, along with calls to the new function, we added in an extra line
scene.add(camera). This is because we added the light to the camera to follow it around. A light is part of the scene, so we add the camera with the light attached to our scene.
If you run the code, you'll see that the model is now visible in the scene. However, it's a side-on view and a bit far away.
Adding controls to the scene
To be able to see and inspect the model better, we can add some mouse controls to the scene so that we can zoom in or rotate the model. Three.js has a built-in
OrbitControls class that we can use.
First, add the following import code to the top of the
script.js file, along with the other import statements:
import { OrbitControls } from '[email protected]/examples/jsm/controls/OrbitControls.js';
To initiate the orbit controls, we'll need to add the following code to the
script.js file, after the renderer and camera have been created:
const controls = new OrbitControls(camera, renderer.domElement);
controls.screenSpacePanning = true;
This creates a new controls object, and specifies what object it controls, the
camera, and the DOM element the controls should listen to mouse inputs from. We also set the
screenSpacePanning property to
true, which allows us to pan the camera around the model.
The controls change the view of the model as we move around it, so we need to add a modified
animate function to redraw the scene each frame. Add the following code to the
script.js file:
function animate() {
requestAnimationFrame(animate);
controls.update();
renderer.render(scene, camera);
}
Now replace the
renderer.render(scene, camera); line in the
loader.load callback function with the following call to the
animate function to start it off;
animate();
Save and run the project. Now you can try using the mouse to rotate the model and zoom in and out.
Next Steps
Now that you know how to build a simple 3D scene using three.js, you might like to explore three.js and 3D rendering a little more. Head over to the three.js documentation to learn more about the tool and see other examples of what you can do with it. You can also download and try Blender to create your own 3D models. | https://docs.replit.com/tutorials/3D-rendering-with-threejs | CC-MAIN-2022-27 | refinedweb | 2,562 | 67.76 |
[ ]
Thorsten Scherler commented on FOR-768:
---------------------------------------
We dropped the <forrest:css/> element because is was to much html
specific. We decided to use a contract for the same functionality:
<forrest:contract
<forrest:property
<css url="common.css"/>
</forrest:property>
</forrest:contract>
It is working like described from Cyriaque because I used the same code
on which he is basing his description. The only difference is like you
see from the sample that I had to drop the forrest: namespace.
So I reckon you can go into any detail. I will add this documentation as well to the contract.
One reason why we are using contracts is that you can have the documentation about the contract
within the contract. ;-)
BTW can it be that you forgot to attach the patch, or did I missunderstood you.
> Added : | http://mail-archives.apache.org/mod_mbox/forrest-dev/200512.mbox/%3C1788210321.1134842496089.JavaMail.jira@ajax.apache.org%3E | CC-MAIN-2015-48 | refinedweb | 135 | 55.34 |
DIY Boat Monitoring Software
I have posted the setup function that controls the entire monitor below. You can see from the image above that there are many modules called by this main thread. The code executes once and then sleeps. Each module has a function, for example reading the voltage. I believe I have given enough information in the course of this article for you to write the modules that will do the things you want to do for your boat. The important thing is to have a structure to put the modules in and I am posting that, without any warranty etc etc. This project is your responsibility and you will have to make it work. I am happy to answer questions. Just use the CONTACT form.
The two significant modules are the WiFi send module which you can basically get from the link I gave in the WiFi section and the Voltage Measurement HERE. If you want to send email with a G2 netwrok please read the article on Sending email with ESP32-SIM800 HERE.
#include "ESP32_MailClient.h" #include
#include #include #include "definitions.h" //#define DEBUG #define MOREEMAIL #define VERSION "11" #define UNIT 2 // note, time goal was changed. 2 changes /********************************************************* * HERE is the program *********************************************************/ void setup(){ setupModes(); delay(100); // wait for serial to process digitalWrite(LED_BUILTIN, ON); // measure time and check if just powered up time(&now);// the timer count for now in seconds if( now < 100) delay(2000); //let things settle when power first applied //Print the wakeup reason for ESP32 wakeup_reason = esp_sleep_get_wakeup_cause(); print_wakeup_reason(wakeup_reason); readInputs(); if( checkMainPower() == ON){// basically turn unit off if going sailing sleepTime(); } checkFan(); //Test for problems. A problem will send an email on the next half hour or sooner if it was from an interrupt checkShorePower(); checkBattery(); if( checkPumps()== ON){ Serial.println("Large Pump went on"); } // test if time of day to send status Serial.println("time is " + String(now)); Serial.println("target time is " + String(targetTime)); if (now >= targetTime){ //this is the once a day message, clear problem sent flats clearFlags = TRUE; } if(now >= targetTime || problem){ targetTime = sendEmail(voltageBat1,voltageBat2,pumpRunTime,message); Serial.print("Send this many hours from now: "); Serial.println( (double)(targetTime - now) / (60*60)); } else Serial.println("not time to send email yet"); /* The unit will sleep for half an hour (typically) if the email was sent. If the email was not sent, it will wait 5 minutes. It will retry MAX_RETRY times and then give up. When the unit wakes up past the target email time, and email will be sent. If the email was not sent, the target time will cause an emial to be sent. */ sleepTime(); } void loop() { //nothing in here as sketch never gets here. It goes to sleep. }
Beginning⇦
Please read website Cookie, Privacy, and Disclamers by clicking HERE. | https://l-36.com/DIY-software | CC-MAIN-2021-43 | refinedweb | 467 | 64 |
Managing the Compute Grid¶
Overview¶
Users in Domino assign their Runs to Domino Hardware Tiers. A hardware tier defines the type of machine a job will run on, and the resource requests and limits for the pod that the Run will execute in. When configuring a hardware tier, you will specify the machine type by providing a Kubernetes node label.
You should create a Kubernetes node label for each type of node you want available for compute workloads in Domino, and apply it consistently to compute nodes that meet that specification. Nodes with the same label become a node pool, and they will be used as available for Runs assigned to a Hardware Tier that points to their label.
Which pool a Hardware Tier is configured to use is determined by the value in the Node Pool field of the Hardware
Tier editor. In the screenshot below, the
large-k8s Hardware Tier is configured to use the
default node pool.
The diagram below shows a cluster configured with two node pools for Domino, one named
default and one named
default-gpu. You can make additional node pools available to Domino by labeling them with the same scheme:
dominodatalab.com/node-pool=<node-pool-name>. The arrows in this diagram represent Domino requesting that a node
with a given label be assigned to a Run. Kubernetes will then assign the Run to a node in the specified pool that has
sufficient resources.
By default, Domino creates a node pool with the label
dominodatalab.com/node-pool=default and all compute nodes
Domino creates in cloud environments are assumed to be in this pool. Note that in cloud environments with automatic node
scaling, you will configure scaling components like AWS Auto Scaling Groups or Azure Scale Sets with these labels to
create elastic node pools.
Kubernetes pods¶
Every Run in Domino is hosted in a Kubernetes pod on a type of node specified by the selected Hardware Tier.
The pod hosting a Domino Run contains three containers:
- The main Run container where user code is executed
- An NGINX container for handling web UI requests
- An executor support container which manages various aspects of the lifecycle of a Domino execution, like transferring files or syncing changes back to the Domino file system
Resourcing requests¶
The amount of compute power required for your Domino cluster will fluctuate over time as users start and stop Runs. Domino relies on Kubernetes to find space for each execution on existing compute resources. In cloud autoscaling environments, if there’s not enough CPU or memory to satisfy a given execution request, the Kubernetes cluster autoscaler will start new compute nodes to fulfill that increased demand. In environments with static nodes, or in cloud environments where you have reached the autoscaling limit, the execution request will be queued until resources are available.
Autoscaling Kubernetes clusters will shut nodes down when they are idle for more than a configurable duration. This reduces your costs by ensuring that nodes are used efficiently, and terminated when not needed.
Cloud autoscaling resources have properties like the minimum and maximum number of nodes they can create. You should set the node maximum to whatever you are comfortable with given the size of your team and expected volume of workloads. All else equal, it is better to have a higher limit than a lower one, as nodes are cheap to start up and shut down, while your data scientists’ time is very valuable. If the cluster cannot scale up any further, your users’ executions will wait in a queue until the cluster can service their request.
The amount of resources Domino will request for a Run is determined by the selected Hardware Tier for the Run. Each Hardware Tier has five configurable properties that configure the resource requests and limits for Run pods.
Cores
The number of requested CPUs.
Cores limit
The maximum number of CPUs. Recommended to be the same as the request.
Memory
The amount of requested memory.
Memory limit
The maximum amount of memory. Recommended to be the same as the request.
Number of GPUs
The number of GPU cards available.
The request values, Cores and Memory, as well as Number of GPUs, are thresholds used to determine whether a node has capacity to host the pod. These requested resources are effectively reserved for the pod. The limit values control the amount of resources a pod can use above and beyond the amount requested. If there’s additional headroom on the node, the pod can use resources up to this limit.
However, if resources are in contention, and a pod is using resources beyond those it requested, and thereby causing excess demand on a node, the offending pod may be evicted from the node by Kubernetes and the associated Domino Run is terminated. For this reason, Domino strongly recommends setting the requests and limits to the same values.
User Executions Quota¶
To prevent a single user from monopolizing a Domino deployment, an administrator can set a limit on the number of simultaneous executions that a user can have running concurrently. Once the number of simultaneously running executions is reached for a given user, any additional executions will be queued. This includes executions for Domino workspaces, jobs, web applications, as well as any executions that make up an on-demand distributed compute cluster. For example, in the case of an on-demand Spark cluster an execution slot will be consumed for each Spark executor and for the master.
See Important settings for details.
Common questions¶
How do I view the current nodes in my compute grid?¶
From the top menu bar in the admin UI, click Infrastructure. You will see both Platform and Compute nodes in this
interface. Click the name of a node to get a complete description, including all applied labels, available resources,
and currently hosted pods. This is the full
kubectl describe for the node. Non-Platform nodes in this interface with a value in the Node Pool column are compute nodes that can
be used for Domino Runs by configuring a Hardware Tier to use the pool.
How do I view details on currently active executions?¶
From the top menu of the admin UI, click Executions. This interface lists active Domino execution pods and shows the
type of workload, the Hardware Tier used, the originating user and project, and the status for each pod. There are also
links to view a full
kubectl describe output for the pod and the node, and an option to download
the deployment lifecycle log for the pod generated by Kubernetes and the Domino application.
How do I create or edit a Hardware Tier?¶
From the top menu of the admin UI, click Advanced > Hardware Tiers, then on the Hardware Tiers page click New to create a new Hardware Tier or Edit to modify an existing Hardware Tier.
Keep in mind that your a Hardware Tier’s CPU, memory, and GPU requests should not exceed the available resources of the machines in the target node pool after accounting for overhead. If you need more resources than are available on existing nodes, you may need to add a new node pool with different specifications. This may mean adding individual nodes to a static cluster, or configuring new auto-scaling components that provision new nodes with the required specifications and labels.
Important settings¶
The following settings in the
common namespace of the Domino central configuration affect compute grid behavior.
Deploying state timeout¶
- Key:
com.cerebro.computegrid.timeouts.sagaStateTimeouts.deployingStateTimeoutSeconds
- Value: Number of seconds an execution pod in a deploying state will wait before timing out. Default is 60 * 60 (1 hour).
Preparing state timeout¶
- Key:
com.cerebro.computegrid.timeouts.sagaStateTimeouts.preparingStateTimeoutSeconds
- Value: Number of seconds an execution pod in a preparing state will wait before timing out. Default is 60 * 60 (1 hour).
Maximum executions per user¶
- Key:
com.cerebro.domino.computegrid.userExecutionsQuota.maximumExecutionsPerUser
- Value: Maximum number of executions each user may have running concurrently. If a user tries to run more than this, the excess executions will queue until existing executions finish. Default is 25.
Quota state timeout¶
- Key:
com.cerebro.computegrid.timeouts.sagaStateTimeouts.userExecutionsOverQuotaStateTimeoutSeconds
- Value: Number of seconds an execution pod that cannot be assigned due to user quota limitations will wait for resources to become available before timing out. Default is 24 * 60 * 60 (24 hours). | https://admin.dominodatalab.com/en/4.1/compute/compute-grid.html | CC-MAIN-2020-34 | refinedweb | 1,393 | 52.9 |
This gewts lots of Google hits, that in itself may be sufficient reason to think up a different one. Two software"..'re writing C/C++ using GCC, test-compile with -Wall and clean up all warning messages before each release. Compile your code with every compiler you can find — different compilers often find different problems. Specifically, compile your software on true 64-bit machine. Underlying data types can change on 64-bit machines, and you will often find new problems there. Find a UNIX vendor's system and run the lint utility over your software.
Run tools that for memory leaks and other run-time errors; Electric Fence and Valgrind are two good ones available in open source.-day).
Avoid using complex types such as "off_t" and "size_t". They vary in size from system to system, especially on 64-bit systems. Limit your usage of "off_t" to the portability layer, and your usage of "size_t" to mean only the length of a string in memory, and nothing else.
Never step on the namespace of any other part of the system, (including file names, error return values and function names). Where the namespace is shared, document the portion of the namespace that you use.. SRC may also contain names of subdirectories to be included whole..
An older, still-acceptable convention for this is to name it CREDITS.
Provide checksums with your binaries (tarballs, RPMs, etc.). This will allow people to verify that they haven't been corrupted or had Trojan-horse code inserted in them.
While there are several commands you can use for this purpose (such as sum and cksum) it is best to use a cryptographically-secure hash function. The GPG package provides this capability via the —detach-sign option; so does the GNU command md5sum.
For each binary you ship, your project web page should list the checksum and the command you used to generate.). | http://ldp.indosite.co.id/HOWTO/html_single/Software-Release-Practice-HOWTO/index.html | crawl-003 | refinedweb | 317 | 63.8 |
Please generalize math.hypot. While I don't have a survey of python
codes, it seems to me unlikely for this change to break existing
programs.
import math
def hypot(*args):
'''
Return the Euclidean vector length.
>>> from math import hypot, sqrt
>>> hypot(5,12) # traditional definition
13.0
>>> hypot()
0.0
>>> hypot(-6.25)
6.25
>>> hypot(1,1,1) == sqrt(3) # diagonal of unit box
True
'''
return math.sqrt(sum(arg*arg for arg in args))
I propose this version as closest to:
>>> print sys.version
2.5.1 (r251:54863, Jan 4 2008, 17:15:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)]
>>> print math.hypot.__doc__
hypot(x,y)
Return the Euclidean distance, sqrt(x*x + y*y).
Thanks,
Dave.
PS. I don't understand why python is so restrictive. Although hypot
is in the math library, it could be written in EAFP style as
def hypot(*args):
return sum(arg*arg for arg in args)**0.5
Rather than review the entire python library for items to generalize,
I'll accept that the resulting errors would confuse "the penguin on my
tele". "hypot" crosses me most often. I've not yet needed a version
in the complex domain, such as my second version.
I typically fill my need for length with scipy.sqrt(scipy.dot(v,v)),
only to realize that for the short vectors I use, standard python
constructs always perform faster than scipy | https://bugs.python.org/msg61374 | CC-MAIN-2021-04 | refinedweb | 245 | 69.38 |
Background:
I've been interested in pacman delta support for some time now, I made a post a few years back about it here, but never really followed up with it.
Anyways.. the arch developers have added support for delta packages, the only thing that is missing is a delta repository. So I've decided to make an arch linux delta mirror.
Usage:
1) edit /etc/pacman.d/mirrorlist adding the following as the top repository
#Only the i686 versions of core, extra, community repos are supported Server =
2) edit /etc/pacman.conf adding "UseDelta" to the [options] section
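For reference, the relevant fragment of /etc/pacman.conf would look something like this (only the UseDelta line is being added; the rest of your [options] section stays as it is):

```ini
[options]
UseDelta
```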
3) reply to this thread posting your `pacman -Syuw` "Total Download Size" with and without "UseDelta"
My results:
home: 635.20MB => 227.25MB (36% of reg)
work: 739.77MB => 301.14MB (40% of reg)
...
That's all there is to it, really
NOTES:
* This repository is a delta only repository.. which means if you want a package that has no delta files you will get an error message about the pkg not being found. However, pacman will automatically fall back to the next mirror and grab it from there. So please make sure that archdelta.net isn't the only mirror configured in pacman.
* I am only keeping up to 3 versions back for each package. Also, the combined delta size must be less than 70% of the pkg size, which is the pacman cutoff.
* The repository starts syncing at ~3:30am EST with the most recently synced mirrors. (if there's an official master repository I should be syncing to, please let me know)
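To make the 70% cutoff from the notes concrete, here is a minimal Python sketch; the function name and structure are illustrative, not pacman's actual implementation:

```python
def worth_using_deltas(delta_sizes_mb, pkg_size_mb, cutoff=0.70):
    # pacman only bothers with deltas when their combined size is
    # below the cutoff (70%) of the full package download.
    return sum(delta_sizes_mb) < cutoff * pkg_size_mb

# The "home" numbers from earlier in the post: 227.25MB of deltas
# versus 635.20MB of full packages, well under the cutoff.
print(worth_using_deltas([227.25], 635.20))  # True (about 36% of the full size)
print(worth_using_deltas([80.0], 100.0))     # False (80% > 70%)
```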
Numbers and technical details...:
The compression used to generate the delta files is xdelta -9 + gzip 9 compression (using the python gzip library).
xdelta -9 + gzip seems to provide excellent compression and performance. The reason I went with gzip 9 is because it was the default in python.
I'm using xdelta3 (3.0v-2) because the 3.0w version has a bug where it doesn't decompress input files correctly (netting a delta that's almost the same size as the package).
Here's some compression results using different options (syntax name.<xdelta_opts>.delta.<compression_opts>.{gz,xz,bzip2}):
kernel:
2164 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.bsdiff <= bsdiff wins.. no surprise there
2908 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9.delta.9e.xz
2912 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9.delta.9.xz
2912 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9.delta.xz
2928 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.delta.9e.xz
2928 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.delta.9.xz
2928 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.delta.xz
2996 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9.delta.9py.gz <= this is what the repository uses
3008 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9.delta.9.gz
3008 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9.delta.gz
3024 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.delta.gz
3084 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9djw.delta.9e.xz
3084 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9djw.delta.9.xz
3084 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9djw.delta.xz
3092 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.djw.delta.9.xz
3092 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.djw.delta.xz
3096 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9djw.delta.gz
3096 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.djw.delta.9e.xz
3108 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.djw.delta.gz
3116 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9.delta.bz2
3124 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9djw.delta.bz2
3132 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.djw.delta.bz2
3136 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9djw.delta
3136 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.delta.bz2
3148 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.djw.delta
3216 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.0.delta.9e.xz
3216 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.0.delta.9.xz
3216 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.0.delta.xz
3356 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.9.delta
3372 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.delta
3404 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.0.delta.gz
3588 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.0.delta.bz2
4304 kernel26-2.6.32.7-1_to_-2.6.32.8-1-i686.0.delta
29284 kernel26-2.6.32.7-1-i686.pkg.tar.gz
29288 kernel26-2.6.32.8-1-i686.pkg.tar.gz
evolution:
400 evolution-2.28.1-1_to_2.28.2-1-i686.bsdiff <= o.O more bsdiff goodness
1120 evolution-2.28.1-1_to_2.28.2-1-i686.9.delta.9e.xz
1120 evolution-2.28.1-1_to_2.28.2-1-i686.9.delta.9.xz
1120 evolution-2.28.1-1_to_2.28.2-1-i686.9.delta.xz
1196 evolution-2.28.1-1_to_2.28.2-1-i686.9.delta.9py.gz <= this is what the repository uses
1204 evolution-2.28.1-1_to_2.28.2-1-i686.9.delta.9.gz
1204 evolution-2.28.1-1_to_2.28.2-1-i686.9.delta.gz
1236 evolution-2.28.1-1_to_2.28.2-1-i686.9djw.delta.9.xz
1236 evolution-2.28.1-1_to_2.28.2-1-i686.9djw.delta.xz
1240 evolution-2.28.1-1_to_2.28.2-1-i686.9djw.delta.9e.xz
1256 evolution-2.28.1-1_to_2.28.2-1-i686.9.delta.bz2
1256 evolution-2.28.1-1_to_2.28.2-1-i686.9djw.delta.gz
1264 evolution-2.28.1-1_to_2.28.2-1-i686.9djw.delta.bz2
1272 evolution-2.28.1-1_to_2.28.2-1-i686.9djw.delta
1380 evolution-2.28.1-1_to_2.28.2-1-i686.delta.9e.xz
1384 evolution-2.28.1-1_to_2.28.2-1-i686.delta.9.xz
1384 evolution-2.28.1-1_to_2.28.2-1-i686.delta.xz
1440 evolution-2.28.1-1_to_2.28.2-1-i686.0.delta.9e.xz
1440 evolution-2.28.1-1_to_2.28.2-1-i686.0.delta.9.xz
1440 evolution-2.28.1-1_to_2.28.2-1-i686.0.delta.xz
1460 evolution-2.28.1-1_to_2.28.2-1-i686.9.delta
1484 evolution-2.28.1-1_to_2.28.2-1-i686.delta.gz
1520 evolution-2.28.1-1_to_2.28.2-1-i686.djw.delta.9e.xz
1520 evolution-2.28.1-1_to_2.28.2-1-i686.djw.delta.9.xz
1520 evolution-2.28.1-1_to_2.28.2-1-i686.djw.delta.xz
1540 evolution-2.28.1-1_to_2.28.2-1-i686.djw.delta.gz
1548 evolution-2.28.1-1_to_2.28.2-1-i686.delta.bz2
1552 evolution-2.28.1-1_to_2.28.2-1-i686.djw.delta.bz2
1564 evolution-2.28.1-1_to_2.28.2-1-i686.djw.delta
1684 evolution-2.28.1-1_to_2.28.2-1-i686.0.delta.gz
1712 evolution-2.28.1-1_to_2.28.2-1-i686.0.delta.bz2
1772 evolution-2.28.1-1_to_2.28.2-1-i686.delta
2832 evolution-2.28.1-1_to_2.28.2-1-i686.0.delta
28944 evolution-2.28.1-1-i686.pkg.tar.gz
28952 evolution-2.28.2-1-i686.pkg.tar.gz
NOTE: If you're wondering why I'm not using bsdiff even though it has the best compression, there's a few reasons:
1) pacman would need to be patched to support it (not difficult...)
2) bsdiff files don't contain metadata about the source/target files, so they would need to be packaged with some sort of metadata file. (again not difficult but requires repo-add to be modified)
3) bsdiff is much slower than xdelta to generate diffs, and requires INSANE amounts of ram (up to 17x the size of destination file)
That said.. if bsdiff ever gets supported by pacman.. Then it might not be a bad idea to use bsdiff for packages under a specific size.
Questions, todo, feedback, etc..:
I'll add notes to this section as they come..
I'll start with my question(s):
Is there an official archlinux repository that I should be syncing to? If so.. who do I need to talk to about that?
edit: If you are having xdelta3 errors, try the 3.0v-2 version of xdelta, I've posted a copy of the package at archdelta.net
Last edited by sabooky (2010-04-02 00:38:10)
with delta:
::: 13.00 MB
Total Installed Size: 102.17 MB
and without:
::: 29.78 MB
Total Installed Size: 102.17 MB
Is there an official archlinux repository that I should be syncing to? If so.. who do I need to talk to about that?
There is... but any good mirror should be fine. We encourage new mirrors to sync of another local mirror these days anyway.
Something that might also interest you: … d=e46bb09f
Edit: good work BTW! This is one thing I want to get officially implemented but there are a couple of repo management things that need improved with a higher priority.
Great work!
My link is 32kb/s at most so this is exactly what I was hoping for.... Thank you.
with pacman -Syu I get this:
Proceed with installation? [Y/n]
:: Retrieving packages from extra...
cdrdao-1.2.3-2_to_1... 0.6K 433.9K/s 00:00:00 [################################################################] 100%
unzip-6.0-4_to_6.0-... 41.1K 49.6K/s 00:00:01 [################################################################] 100%
checking delta integrity...
applying deltas...
generating bind-9.6.1.P3-2-i686.pkg.tar.xz with bind-9.6.1.P3-1_to_9.6.1.P3-2-i686.delta...
xdelta3: seek to 0 failed: /var/cache/pacman/pkg/bind-9.6.1.P3-1-i686.pkg.tar.gz: Illegal seek
xdelta3: non-seekable source: copy is too far back (try raising -B): XD3_INTERNAL failed.
Can I set this -B option somewhere?
James
Update:
I Changed to xdelta3.0v2.
I get 'success!' generating the packages (no 'success!' with zlib delta).
The packages seem to be corrupted:
Proceed with download? [Y/n]
:: Retrieving packages from core...
error: failed retrieving file 'zlib-1.2.3.9-1-i686.pkg.tar.xz' from archdelta.net : Not Found
zlib-1.2.3.9-1-i686... 90.9K 37.2K/s 00:00:02 [################################################################] 100%
checking package integrity...
:: File boost-1.42.0-1-i686.pkg.tar.gz is corrupted. Do you want to delete it? [Y/n] n
:: File openssl-0.9.8m-1-i686.pkg.tar.xz is corrupted. Do you want to delete it? [Y/n] n
:: File libmysqlclient-5.1.44-1-i686.pkg.tar.xz is corrupted. Do you want to delete it? [Y/n] n
:: File mysql-clients-5.1.44-1-i686.pkg.tar.xz is corrupted. Do you want to delete it? [Y/n] n
etc.
I looked at both 'filesystem-2010.02-4-any.pkg.tar.xz' on a server and in the local cache.
My local copy is not compressed. ('cat [file from server] | xz -d > [another file] ' creates the same as the file generated by pacman.)
anybody with same problem?
Monologue part 3:
I was able to use all the xz files by renaming them to *.tar and running xz -z on them.
gz files still corrupted with both gzip -9 and python gzip
I used this for the gz files:
import gzip
import sys

f_in = open(sys.argv[1], 'rb')
f_out = gzip.open(sys.argv[2], 'wb')
f_out.writelines(f_in)
f_out.close()
f_in.close()
I hope there will soon be official deltas in the repositories.
For now I'll do
#UseDelta
IgnorePkg = go-openoffice
good night
Last edited by james (2010-03-05 23:18:41)
xdelta is definitely something we want to add to our repos in future. But as Allan stated we need to add other improvements ot our repository/mirror structure first. (xdelta will increase load/disk space and traffic on our main server and between mirrors).
Any input and experience on this are welcome. Even more if you are trying to use "our" tools like repo-add (part of pacman) or devtools/dbscripts:
This is cool, great work. Can we get one for x86_64 now?
(…) bsdiff (…)
While looking up version control stuff and binary diffing for my thesis, I found out about bdiff, which is supposedly faster at creating diffs and easier on memory than bsdiff. That being said, I haven't tried it yet, and I didn't even know about xdelta at the time…
Last edited by Runiq (2010-03-03 20:55:12)
zsync is nice too. not as efficient but very nice to server space. very easy to handle. no deltas at all.
with some effort it might even get as nice as courgette or even nicer … -courgette
james
This is cool, great work. Can we get one for x86_64 now?
I'd like it too...
Can I lend a hand?
@Allan
The unused deltas feature looks nice, once its in the stable release I'll take a look at it. I'm not sure the issue will come up in my case though, since the script that generates deltas also cleans up old ones.
@james
For the corrupt gzip packages, try using the '-n' flag when compressing. gzip stores timestamps by default, so if they change, the md5 hash of the package changes too. Weird that your cache files are uncompressed.. no idea why that's happening (maybe a pacman option?)
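The timestamp problem is easy to reproduce with Python's own gzip module, which exposes the header mtime directly (this standalone snippet is an illustration, not part of the repository scripts):

```python
import gzip
import hashlib
import io

def gz_bytes(data, mtime):
    # gzip stores mtime in the archive header, so identical data
    # compressed at different times produces different bytes.
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
        f.write(data)
    return buf.getvalue()

a = gz_bytes(b"same payload", 0)
b = gz_bytes(b"same payload", 1)
print(hashlib.md5(a).hexdigest() == hashlib.md5(b).hexdigest())  # False
print(gzip.decompress(a) == gzip.decompress(b))                  # True
```

This is exactly what gzip's -n flag avoids: it omits the timestamp, so repeated compressions of the same input are byte-identical.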
google-Courgette looks very interesting.. Hope they release it soon.
@Pierre
How are patches sent in? just use git format-patch and email it?
@cookiecaper and pie86
The only reason there isn't an x86_64 repo is because I don't have the hd space (locally) to do it.
As for helping out.. hmm.. let me explain how the scripts work on my end.
This way, anyone that wants to help out can have an idea of what it takes to host this.
Warning: this is messy and all over the place.. but it works, and that's all that matters :P
I have a script that drives all of this sync.sh, when its run it does the following:
1) uses reflector to get the top 10 fastest mirrors that synced within 2 hours.
2) wget the repo.db.tar.gz file
3) generates a metalinker file out of repo.db.tar.gz that uses aria2c to download
* this speeds up downloads by using 10 mirrors and takes care of md5sum validation on the downloaded packages.
* that reminds me.. vimpager package is broken, repo.db.tar.gz contains a different md5 and package size.
If there are changes, then the following steps are also done:
4) deltify.py is run, which creates deltas, removes old/oversized deltas, removes old packages.
5) cp repo.db.tar.gz from repo dir to deltas dir
6) repo-add $repo.db.tar.gz *.delta
* I've modified my local copy of repo-add to improve performance.
* it generates a cache file, so it doesn't have to get the md5, old_ver, new_ver for previously processed deltas.
7) lftp pushes changes to archdelta.net
Here's HD usage locally on my box:
# du -hcs repos/*/os/i686
7.3G repos/community/os/i686
270M repos/core/os/i686
11G repos/extra/os/i686
18G total
# du -hcs repos/*/os/i686/deltas
217M repos/community/os/i686/deltas
49M repos/core/os/i686/deltas
2.2G repos/extra/os/i686/deltas
2.5G total
Notes:
* feedback on whether there's a cleaner way to do this would be nice.
* I wrote a repo2ml.py which generates metalink file out of repository, but am not using it currently.
Need to test and use this rather than using bash to generate the metalink.
Need to post this, since this could be a pretty useful standalone tool for some.
* Where do I file a bug on vimpager? The community.db.tar.gz md5sum and size do not match what's in the repository.
@sabooky
I don't know if you received my email; anyway, I think I could have a machine that does the syncing job for 64-bit packages.
If you can host also 64 bit pkgs on archdelta.net I would be glad to help "completing" your repository
If you wouldn't mind some time, could you send your "thoughts and experiences" type things to the pacman-dev mailing list? It would be great to hear how this stuff is working out in the wild and what we need to do to get it more widely used and live, especially before we commit to design decisions that you may be finding no-good as you roll this out.
sabooky, thanks for pushing this new feature
Hi sabooky,
Thanks for this. Deltas would save a lot of time and bandwidth for me. My internet connection is just 256 kbps, so saving every MB of download translates into huge savings.
Thanks a Ton! you deserve a beer, I would buy you one :-)
How often is your mirror being synced? I tried it but I got some 50 warnings about packages on my system being newer than core.. My usual mirror is mir.archlinux.fr
Thanks for putting effort into this anyhow, this might get something bigger started
@ramses
oops.. I made a change to my dreamhost.net account, and didn't realize I managed to accidentally change my account login credentials.
The repository hasn't been synced since 17-Mar-2010 due to script failing to push updates because of a Login error.
I fixed the login information and forced a sync just now, the repository should be up to date.
The repository will continue to sync on the regularly scheduled time (3:30am est).
Sorry for the inconvenience.
Thanks for bringing this up to my attention.
Note: Lesson of the day... don't let scripts silently die, have them send emails... I'll make this change sometime this weekend.
Mmm, I like the idea of delta files very much. Thanks!
Would it be possible for you to support the testing and community-testing repos? Deltas seem to fit those especially nicely, as they're often just rebuilds or minor PKGBUILD changes.
I added the testing and community-testing repos. However, I didn't use the ARM to get previous versions, so the repo will start out with 0 deltas and will grow as time goes on.
There's a chance I might add x86_64 support soon, I just need to move the storage off to my freebsd server machine (it has more storage space). If i do a x86_64 repo, I'll probably do the same thing that I did today, basically start with a repo with 0 deltas and let it grow naturally.
I'll post my "thoughts and experiences" to the pacman-dev. Though I'm not sure how much of what I've done here applies to the arch developers workflow.
There are two things I'd like to see implemented:
1. have delta files wrapped in a tar.gz file containing a metadata file, something like .PKGINFO (.DELTAINFO?). This makes it so the delta system isn't dependent on the xdelta3 format.
2. support multiple delta formats, even if this isn't done initially having #1 done will allow for this to be added later.
That said, I would be interested in contributing to the development of the delta support, I'm guessing db-update is where this would need to be added.
Is there a design doc already written for the planned implementation of this? What's the process for submitting patches for the arch linux projects (pacman, dbscripts, etc..)?
Thanks
Ok I pulled the first deltas in today
However, I get this:
:: Starting full system upgrade...
warning: gsfonts: local (8.11-5) is newer than extra (1.0.7pre44-1)
resolving dependencies...
looking for inter-conflicts...
Targets (4): libcups-1.4.2-5 [0.25 MB] cups-1.4.2-5 [1.75 MB] openssl-0.9.8n-1 [1.96 MB] php-suhosin-0.9.30-1 [0.06 MB]
Total Download Size: 0.00 MB
Total Installed Size: 21.78 MB
Proceed with installation? [Y/n] y
checking delta integrity...
applying deltas...
generating libcups-1.4.2-5-i686.pkg.tar.xz with libcups-1.4.2-4_to_1.4.2-5-i686.delta... success!
generating cups-1.4.2-5-i686.pkg.tar.xz with cups-1.4.2-4_to_1.4.2-5-i686.delta...
xdelta3: seek to 6291456 failed: /var/cache/pacman/pkg/cups-1.4.2-4-i686.pkg.tar.xz: Illegal seek
success!
generating openssl-0.9.8n-1-i686.pkg.tar.xz with openssl-0.9.8m-2_to_0.9.8n-1-i686.delta...
xdelta3: seek to 6291456 failed: /var/cache/pacman/pkg/openssl-0.9.8m-2-i686.pkg.tar.xz: Illegal seek
success!
generating php-suhosin-0.9.30-1-i686.pkg.tar.xz with php-suhosin-0.9.29-1_to_0.9.30-1-i686.delta... success!
checking package integrity...
(4/4) checking for file conflicts [---------------------------------------------------] 100%
(1/4) upgrading libcups [---------------------------------------------------] 100%
(2/4) upgrading cups [---------------------------------------------------] 100%
(3/4) upgrading openssl [---------------------------------------------------] 100%
(4/4) upgrading php-suhosin [---------------------------------------------------] 100%
Are the xdelta errors harmful?
Thanks for those repos, sabooky.
If we're talking about errors, I get (using xdelta3 3.0y)
generating wine-1.1.41-1-i686.pkg.tar.xz with wine-1.1.40-1_to_1.1.41-1-i686.delta...
xdelta3: non-seekable source in decode: XD3_INTERNAL failed.
Last edited by lucke (2010-03-26 12:52:01)
To sabooky:
I have sent you two emails asking if you would be ok with releasing some of the code you did for the repo. I didn't have any answer.
If you're not willing to disclose your work, that's ok but could you tell me so? I don't know if you ever received the mails I sent.
Thank you and sorry for disrupting the thread.
@gohu
Sorry, the email account associated with my login here is my spam email account. It didn't used to be, but unfortunately someone decided to cite my email address in their script and since then it basically had to be retired to spam. So the easiest way to get in touch with me is through this thread.
I checked my email and saw your messages, I'll make an effort to post the scripts either on here, or maybe through github this weekend. (no promises, other than I will try)
I'm all for disclosing the work, hell.. it'd be great if someone can improve on it.
@lucke and @ramses: the October 25th entry mentions some information on non-seekable sources, though I'm not sure why it's happening.
ramses, yours seems to be just a warning.
lucke, could you please post the md5sum on both wine-1.1.40-1_to_1.1.41-1-i686.delta and the wine-1.1.40*.pkg.tar.[xg]z file.
I would like to test generating the file using the delta manually.
You could try it yourself; the command is in the code snippet below. The -B flag might be helpful; if we can get xdelta to behave properly, then it's just a matter of patching the pacman code.
Also.. could you give it a shot with xdelta3 3.0v? (if you need the tarball let me know and I'll post it for you).
3.0v seems to work.
That code seems to work - the only "problem" is that
xdelta3 -d -vv -R -c -s wine-1.1.40-1-i686.pkg.tar.gz wine-1.1.40-1_to_1.1.41-1-i686.delta | gzip -n > wine-1.1.41-1-i686.pkg.tar.gz
takes 10 seconds for me and
xdelta3 -d -vv -s wine-1.1.40-1-i686.pkg.tar.gz wine-1.1.40-1_to_1.1.41-1-i686.delta wine-1.1.41-1-i686.pkg.tar.xz
takes 91 seconds. Creating xz-compressed files using deltas (i.e. locally) doesn't seem to be a very good idea - unless someone really values their disk space (7 MB of difference between xz and gz for wine). I'm not sure if it's currently possible to control the resultant compression type of packages after applying xdelta3 - I don't quite grok that code - but such a choice should surely be presented to the user.
3.0v from upstream required patching to deal with xz faster. Is the time difference as great with 3.0y?
@sabooky: any chance you can send the query to the pacman-dev mailing list? If you do not want to subscribe, I can send it for you.
This time is generally the time it takes to compress a tar file with gz and xz, it seems. xdelta3 itself adds but a few seconds.
I see no performance-related patch in the svn - just a patch adding xz support to w; and I've indeed built v by just changing the version in w's PKGBUILD.
I'm writing a zombie survival app, and I'm trying to select all my users marked "alive" where :alive is a boolean.
I was writing a private method in my users controller but can't get the ruby right, does anyone have a pointer?
def get_alive
  @holder = (User.map {|user| user})
  @user = @holder.each {|i| if i.alive @user << i}
end
thanks
Use a scope to find all alive users.
class User < ActiveRecord::Base
  scope :alive, where(:alive => true)
  # ... the rest of your model ...
end
Then you can do this:
@alive_users = User.alive
You could just select those users directly if User is an ActiveRecord model:
User.where(:alive => true)
Or filter for just those users:
User.all.filter(&:alive)
You need to give a bit more detail about what "holder" is supposed to be... and why you are comparing against 'i'.
otherwise:
User.where(:alive => true)
it's a good idea to wrap this in a scope as in Sean Hill's answer
You can even use this syntax in where query
User.where(alive: true)
Or use select over array of object. But select is slow
User.all.select{ |user| user.alive == true}
1.1 Glossary
This document uses the following terms:
binary large object (BLOB): A discrete packet of data that is stored in a database and is treated as a sequence of uninterpreted bytes.
codec: An algorithm that is used to convert media between digital formats, especially between raw media data and a format that is more suitable for a specific purpose. Encoding converts the raw data to a digital format. Decoding reverses the process.
Contact object: A Message object that contains properties pertaining to a contact.
dictionary: A collection of key/value pairs. Each pair consists of a unique key and an associated value. Values in the dictionary are retrieved by providing a key for which the dictionary returns the associated value.
fax message: A fax that a fax server has completely received or transmitted, and archived to the Fax Archive Folder described in [MS-FAX] section 3.1.1.
header: A name-value pair that supplies structured data in an Internet email message or MIME entity.
mailbox: A message store that contains email, calendar items, and other Message objects for a single recipient.
message class: A property that loosely defines the type of a message, contact, or other Personal Information Manager (PIM) object in a mailbox.
Message object: A set of properties that represents an email message, appointment, contact, or other type of personal-information-management object. In addition to its own properties, a Message object contains recipient properties that represent the addressees to which it is addressed, and an attachments table that represents any files and other Message objects that are attached to it.
missed call notification: A Message object that is intended to convey information about a call that was missed. The Message object contains information about the calling party and the time of the call, but does not contain audio content.
Multipurpose Internet Mail Extensions (MIME): A set of extensions that redefines and expands support for various types of content in email messages, as described in [RFC2045], [RFC2046], and [RFC2047].
recipient: An entity that can receive email messages.
rights-managed email message: An email message that specifies permissions that are designed to protect its content from inappropriate access, use, and distribution.
Simple Mail Transfer Protocol (SMTP): A member of the TCP/IP suite of protocols that is used to transport Internet messages, as described in [RFC5321].
special folder: One of a default set of Folder objects that can be used by an implementation to store and retrieve user data objects.
stream: An element of a compound file, as described in [MS-CFB]. A stream contains a sequence of bytes that can be read from or written to by an application, and they can exist only in storages.
Unified Messaging: A set of components and services that enable voice, fax, and email messages to be stored in a user's mailbox and accessed from a variety of devices.
Uniform Resource Locator (URL): A string of characters in a standardized format that identifies a document or resource on the World Wide Web. The format is as specified in [RFC1738].
voice message: A Message object that contains audio content recorded by a calling party.
XML namespace: A collection of names that is used to identify elements, types, and attributes in XML documents identified in a URI reference [RFC3986]. A combination of XML namespace and local name allows XML documents to use elements, types, and attributes that have the same names but come from different sources. For more information, see [XMLNS-2ED].
Algorithm To Find the Middle Element of Linked List
1. Take two pointers. Move first pointer by one and second pointer by two.
2. When second pointer reaches at end. First pointer reach at middle.
3. Print the value of first pointer which is the middle element.
Related Linked List Questions
Program to Reverse a Linked List
Program to Insert Node at the end of Linked List
Program to Find the Middle Element of Linked List
#include <stdio.h>
#include <stdlib.h> /* for malloc */

struct node{
    int data;
    struct node* next;
};

struct node* head;

void insert(int data){
    struct node* temp = (struct node*)malloc(sizeof(struct node));
    temp->data = data;
    temp->next = head;
    head = temp;
}

void middle(){
    // Two pointers
    struct node* slow = head;
    struct node* fast = head;
    while(fast!=NULL && fast->next!=NULL){
        slow = slow->next;        // Move slow pointer by one
        fast = fast->next->next;  // Move fast pointer by two
    }
    printf("\nmiddle element is %d",slow->data);
}

void print(){
    struct node* temp = head;
    while(temp!=NULL){
        printf("%d\n",temp->data);
        temp = temp->next;
    }
}

int main(){
    head = NULL;
    insert(2);
    insert(4);
    insert(6);
    insert(7);
    insert(8);
    print();
    middle();
    return 0;
}
Extension Overview
Bazel extensions are files ending in .bzl. Use the load statement to import a symbol from an extension.
load("//build_tools/rules:maprule.bzl", "maprule")
This code will load the file build_tools/rules/maprule.bzl and add the maprule symbol to the environment. This can be used to load new rules, functions or constants (e.g. a string, a list, etc.). Multiple symbols can be imported by using additional arguments to the call to load. Arguments must be string literals (no variable) and load statements must appear at top-level, i.e. they cannot be in a function body.
The first argument of load is a label identifying a .bzl file. If it is a relative label, it is resolved with respect to the package (not directory) containing the current .bzl file. Relative labels in load statements should use a leading ":".
load also supports aliases, i.e. you can assign different names to the imported symbols.
load("//build_tools/rules:maprule.bzl", maprule_alias = "maprule")
You can define multiple aliases within one load statement. Moreover, the argument list can contain both aliases and regular symbol names. The following example is perfectly legal (please note when to use quotation marks).
load(":my_rules.bzl", "some_rule", nice_alias = "some_other_rule")
In a .bzl file, symbols starting with _ are not exported and cannot be loaded from another file. Visibility doesn't affect loading (yet): you don't need to use exports_files to make a .bzl file visible.
How to calculate all the prime factors of the LCM of n integers?
I have n integers stored in an array a, for example a[0], a[1], ....., a[n-1], where each a[i] <= 10^12 and n < 100. Now I need to find all the prime factors of the LCM of these n integers, that is, of LCM of {a[0], a[1], ....., a[n-1]}.
I have a method, but I need a more efficient one.
My method:
First calculate all the prime numbers up to 10^6 using the sieve of Eratosthenes.
For each a[i]:
    bool check_if_prime = 1
    For all prime[j] <= sqrt(a[i]):
        if a[i] % prime[j] == 0 {
            store prime[j]
            check_if_prime = 0
        }
    if check_if_prime:
        store a[i]   // a[i] is prime since it has no prime factor <= sqrt(a[i])
Print all the stored primes
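For concreteness, the same approach can be sketched in Python; this version skips the precomputed sieve and trial-divides by every integer up to sqrt(a[i]), which is enough to show the structure (the distinct prime factors of the LCM are just the union of the prime factors of each a[i]):

```python
def lcm_prime_factors(nums):
    factors = set()
    for n in nums:
        d = 2
        while d * d <= n:
            if n % d == 0:
                factors.add(d)
                while n % d == 0:
                    n //= d
            d += 1
        if n > 1:          # whatever is left is itself prime
            factors.add(n)
    return sorted(factors)

# 12 = 2^2*3, 18 = 2*3^2, 35 = 5*7, so lcm = 2^2 * 3^2 * 5 * 7
print(lcm_prime_factors([12, 18, 35]))  # [2, 3, 5, 7]
```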
Is there a better approach to this problem?
I am posting a link to the problem:
Link to my code:
Solution:
As Daniel Fischer suggested, my code needed some optimizations, like a faster sieve and some minor modifications. After making all these changes, I was able to solve the problem. This is my accepted code on SPOJ, which took 1.05 seconds:
// Note: the original post is missing the includes and the definition of
// `max`; they are restored here (max = 10^6, since a[i] <= 10^12).
#include <bits/stdc++.h>
#define max 1000000
using namespace std;

bitset<max+1> p;
int size;
int prime[79000];

void sieve(){
    size=0;
    long long i,j;
    p.set(0,1);
    p.set(1,1);
    prime[size++]=2;
    for(i=3;i<max+1;i=i+2){
        if(!p.test(i)){
            prime[size++]=i;
            for(j=i;j*i<max+1;j++){
                p.set(j*i,1);
            }
        }
    }
}

int main()
{
    sieve();
    int t;
    scanf("%d", &t);
    for (int w = 0; w < t; w++){
        int n;
        scanf("%d", &n);
        long long a[n];
        for (int i = 0; i < n; i++)
            scanf("%lld", &a[i]);
        map<long long, int> m;
        map<long long, int>::iterator it;
        for (int i = 0; i < n; i++){
            long long num = a[i];
            long long pp;
            for (int j = 0; (j < size) && ((pp = prime[j]) * pp <= num); j++){
                int c = 0;
                for ( ; !(num % pp); num /= pp)
                    c = 1;
                if (c)
                    m[pp] = 1;
            }
            if ((num > 0) && (num != 1)){
                m[num] = 1;
            }
        }
        printf("Case #%d: %d\n", w + 1, m.size());
        for (it = m.begin(); it != m.end(); it++){
            printf("%lld\n", (*it).first);
        }
    }
    return 0;
}
In case anyone can do it better or with a faster method, please let me know.
With these constraints (a few not-too-large numbers), the best way to find the prime factorisation of their least common multiple is, in fact, to factorise each number. Since there are only 78498 primes below 10^6, trial division will be fast enough (unless you are really desperate for the last bit of performance), and sieving the primes up to 10^6 is also a matter of a few milliseconds.
If speed is of the utmost importance, then a combined approach of trial division, a deterministic Miller-Rabin-type primality test, and a factorisation method like Pollard's rho algorithm or the elliptic curve factorisation method is probably slightly faster (but the numbers are so small that the difference won't be big, and you would need a number type with more than 64 bits to make the primality test and factorisation fast).
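For illustration, the deterministic primality test mentioned here can be sketched as follows (Python for brevity; using the first twelve primes as witnesses is known to be deterministic for all n below roughly 3.3*10^24, far beyond the 10^12 bound of the question):

```python
def is_prime(n):
    # Deterministic Miller-Rabin with a fixed witness set.
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

print(is_prime(2147483647))  # True (2^31 - 1, a Mersenne prime)
print(is_prime(561))         # False (a Carmichael number)
```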
In factorization, you must, of course, remove the main factors as they are found
if (a[i] % prime[k] == 0) { int exponent = 0; do { a[i] /= prime[k]; ++exponent; }while(a[i] % prime[k] == 0); // store prime[k] and exponent // recalculate bound for factorisation }
to reduce the limit on which prime numbers need to be checked.
Your main problem, as far as I can see, is that your sieve is too slow and uses up too much space (which is partly due to its slowness). Use the sieve bit to improve cache locality, remove even numbers from the sieve, and stop checking to see if multiples of the square root should be flushed from
max
. And you are allocating too much space for a simple array.
for(int j=0;(prime[j]*prime[j] <= num) && (j<size);j++){
You must check
j < size
before accessing
prime[j]
.
while(num%prime[j]==0){ c=1; num /= prime[j]; m[prime[j]]=1; }
Don't install
m[prime[j]]
multiple times. Even if it is
std::map
pretty fast, it is slower than installing it just once.
source to share
FWIW, in response to an initial request for a faster way to get all prime numbers up to a million, that's a quick way to much .
For this, a window sieve from Eratosthenes on wheels is used with a wheel size of 30 and a window size set to the square root of the upper search limit (1000 for searches up to 1,000,000).
Since I am not fluent in C ++, I have coded it in C # assuming it should be easily convertible to C ++. However, even in C #, it can enumerate all primes up to 1,000,000 in 10 milliseconds. Even generating all primes up to a billion takes only 5.5 seconds, and I would imagine it would be even faster in C ++.
public class EnumeratePrimes { /// <summary> /// Determines all of the Primes less than or equal to MaxPrime and /// returns then, in order, in a 32bit integer array. /// </summary> /// <param name="MaxPrime">The hishest prime value allowed in the list</param> /// <returns>An array of 32bit integer primes, from least(2) to greatest.</returns> public static int[] Array32(int MaxPrime) { /* First, check for minimal/degenerate cases */ if (MaxPrime <= 30) return LT30_32_(MaxPrime); //Make a copy of MaxPrime as a double, for convenience double dMax = (double)MaxPrime; /* Get the first number not less than SQRT(MaxPrime) */ int root = (int)Math.Sqrt(dMax); //Make sure that its really not less than the Square Root if ((root * root) < MaxPrime) root++; /* Get all of the primes <= SQRT(MaxPrime) */ int[] rootPrimes = Array32(root); int rootPrimeCount = rootPrimes.Length; int[] primesNext = new int[rootPrimeCount]; /* Make our working list of primes, pre-allocated with more than enough space */ List<int> primes = new List<int>((int)Primes.MaxCount(MaxPrime)); //use our root primes as our starting list primes.AddRange(rootPrimes); /* Get the wheel */ int[] wheel = Wheel30_Spokes32(); /* Setup our Window frames, starting at root+1 */ bool[] IsComposite; // = new bool[root]; int frameBase = root + 1; int frameMax = frameBase + root; //Pre-set the next value for all root primes for (int i = WheelPrimesCount; i < rootPrimeCount; i++) { int p = rootPrimes[i]; int q = frameBase / p; if ((p * q) == frameBase) { primesNext[i] = frameBase; } else { primesNext[i] = (p * (q + 1)); } } /* sieve each window-frame up to MaxPrime */ while (frameBase < MaxPrime) { //Reset the Composite marks for this frame IsComposite = new bool[root]; /* Sieve each non-wheel prime against it */ for (int i = WheelPrimesCount; i < rootPrimeCount; i++) { // get the next root-prime int p = rootPrimes[i]; int k = primesNext[i] - frameBase; // step through all of its multiples in the current window while (k < 
root) // was (k < frameBase) ?? // { IsComposite[k] = true; // mark its multiple as composite k += p; // step to the next multiple } // save its next multiple for the next window primesNext[i] = k + frameBase; } /* Now roll the wheel across this window checking the spokes for primality */ int wheelBase = (int)(frameBase / 30) * 30; while (wheelBase < frameMax) { // for each spoke in the wheel for (int i = 0; i < wheel.Length; i++) { if (((wheelBase + wheel[i] - frameBase) >= 0) && (wheelBase + wheel[i] < frameMax)) { // if its not composite if (!IsComposite[wheelBase + wheel[i] - frameBase]) { // then its a prime, so add it to the list primes.Add(wheelBase + wheel[i]); } // // either way, clear the flag // IsComposite[wheelBase + wheel[i] - frameBase] = false; } } // roll the wheel forward wheelBase += 30; } // set the next frame frameBase = frameMax; frameMax += root; } /* truncate and return the primes list as an array */ primes.TrimExcess(); return primes.ToArray(); } // return list of primes <= 30 internal static int[] LT30_32_(int MaxPrime) { // As it happens, for Wheel-30, the non-Wheel primes are also //the spoke indexes, except for "1": const int maxCount = 10; int[] primes = new int[maxCount] {2, 3, 5, 7, 11, 13, 17, 19, 23, 29 }; // figure out how long the actual array must be int count = 0; while ((count <= maxCount) && (primes[count] < MaxPrime)) { count++; } // truncte the array down to that size primes = (new List<int>(primes)).GetRange(0, count).ToArray(); return primes; } //(IE: primes < 30, excluding {2,3,5}.) /// <summary> /// Builds and returns an array of the spokes(indexes) of our "Wheel". /// </summary> /// <remarks> /// A Wheel is a concept/structure to make prime sieving faster. A Wheel /// is sized as some multiple of the first three primes (2*3*5=30), and /// then exploits the fact that any subsequent primes MOD the wheel size /// must always fall on the "Spokes", the modulo remainders that are not /// divisible by 2, 3 or 5. 
As there are only 8 spokes in a Wheel-30, this /// reduces the candidate numbers to check to 8/30 (4/15) or ~27%. /// </remarks> internal static int[] Wheel30_Spokes32() {return new int[8] {1,7,11,13,17,19,23,29}; } // Return the primes used to build a Wheel-30 internal static int[] Wheel30_Primes32() { return new int[3] { 2, 3, 5 }; } // the number of primes already incorporated into the wheel internal const int WheelPrimesCount = 3; } /// <summary> /// provides useful methods and values for working with primes and factoring /// </summary> public class Primes { /// <summary> /// Estimates PI(X), the number of primes less than or equal to X, /// in a way that is never less than the actual number (P. Dusart, 1999) /// </summary> /// <param name="X">the upper limit of primes to count in the estimate</param> /// <returns>an estimate of the number of primes between 1 and X.</returns> public static long MaxCount(long X) { double xd = (double)X; return (long)((xd / Math.Log(xd)) * (1.0 + (1.2762 / Math.Log(xd)))); } }
source to share
There seems to be some useful algorithms at
In particular
seems to be appropriate.
It turns nested loops "from the inside" and works on all numbers at the same time, one at a time.
Since it uses one stroke at a time, you can find the next prime as needed, avoiding generating 10 ^ 6 primes before starting. Since each number is reduced by its simple coefficients, the maximum number needed to test the numbers can be reduced, so it requires even less work.
Edit: It also makes the ending unambiguous and easy to test because the number converges to one when all of its factors have been found. In fact, when all numbers come down to one, it might terminate, although I haven't used this property in my code.
Edit: I read the problem, the algorithm at resolves it directly.
SWAG: There are 78,498 primes below 10 ^ 6 () In the worst case, there are 100 numbers to be checked for 78,498 primes, = 7,849,800 ' mod 'operations
No number can be successfully handled by a simple (one mod and one division) more than log2 (10 ^ 12) = 43 mods and divides, so 4300 divisions and 4300 mods, so simple tests prevail. To keep it simple, call it 8,000,000 whole divisions and mods. It should generate prime numbers, but as already stated by Daniel Fischer, it's fast. The rest is accounting.
So, on a modern processor, I would WAG around 1,000,000,000 divisions or mods per second, so the runtime is around 10ms x 2?
Edit: I used the algorithm
No cleverness, exactly as explained there.
I poorly rated my estimate at about 10x, but still only 20% of the maximum allowed execution time.
Performance (with some stamp to confirm results)
real 0m0.074s user 0m0.062s sys 0m0.004s
for 100 numbers:
999979, 999983, 999979, 999983, 999979, 999983, 999979, 999983, 999979, 999983,
10 times to ensure that almost all primes should be tested as this appears to be the main computation.
and also with the same print volume, but the value is almost 10 ^ 12
real 0m0.207s user 0m0.196s sys 0m0.005s
for 100 of
999962000357L, // ((long)999979L)*((long)999983L)
gcc - version i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (point 3) Copyright (C) 2007 Free Software Foundation, Inc. This is free software; see source for copy conditions. There is no guarantee here; even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Model Name: MacBook Pro Processor Name: Intel Core 2 Duo Processor Speed: 2.16GHz
Summary. It works well, and the startup time is about 20% of the allowed maximum on a relatively old processor, which is comparable to Daniel Fischer's implementation.
Q: I am a new member here, so it seems a little harsh on my answer when:
a. it appears to be accurate, complete, and satisfies all criteria, and
b. I wrote the code, tested it and presented the results.
What did I do wrong? How can I get feedback to improve the situation?
source to share
result := [] for f in <primes >= 2>: if (any a[i] % f == 0): result = f:result for i in [0..n-1]: while (a[i] % f == 0): a[i] /= f if (all a[i] == 1): break
Note: This only gives a list of simple LCM ratios, not the actual LCM value (i.e. it does not calculate the scores) and I believe all of this is necessary.
source to share | https://daily-blog.netlify.app/questions/1893136/index.html | CC-MAIN-2021-43 | refinedweb | 2,209 | 67.18 |
Let's begin with a simple C++ program that displays a message.
The following code uses the C++ cout (pronounced "see-out") to produce character output.
The source code comments lines begin with
//, and the compiler
ignores them.
C++ is case sensitive. It discriminates between uppercase characters and lowercase characters.
The cpp filename extension is a common way to indicate a C++ program.
#include <iostream> // a PREPROCESSOR directive int main() // function header { // start of function body using namespace std; // make definitions visible cout << "this is a test."; // message cout << endl; // start a new line cout << "hi!" << endl; // more output return 0; // terminate main() }
The code above generates the following result.
To make the window stay open until you strike a key by adding the following line of code before the return statement:
cin.get();
If you're used to programming in C, you would not know cout but you do know the printf() function.
C++ can use printf(), scanf(), and all the other standard C input and output functions, if that you include the usual C stdio.h file.
You construct C++ programs from building blocks called functions.
Typically, you organize a program into major tasks and then design separate functions to handle those tasks.
The example shown above is simple enough to consist of a single function named main().
The main() function is a good place to start because some of the features that precede main(), such as the preprocessor directive.
The sample program has the following fundamental structure:
int main() { statements return 0; }
The final statement in main(), called a return statement, terminates the function.
The code above has the following elements:++, leaving the parentheses empty is the same as using void in the parentheses.
Some programmers use this header and omit the return statement:
void main() | http://www.java2s.com/Tutorials/C/Cpp_Tutorial/index.htm | CC-MAIN-2021-17 | refinedweb | 299 | 64.2 |
Hi all,
I am new to pytorch and also new to autograd. My question is : Do I need to compute the partial derivatives for my functions parameters?For example: My new layer want to compute a 1-d gaussian probability density value, the function is,f(x)=a * exp^((x-b)^2 / c)) ,where a,b,c are the parameters need to be updated. I think all of these operations are basic operation and the output is a scalar. Do I still need to write backward code like we do in Caffe? Or May I just define a new module only with forward function then pytorch will compute the parameters' derivatives automatically for me?
f(x)=a * exp^((x-b)^2 / c))
a,b,c
Sure, this will be handled for you. For example:
import torch.nn as nn
from torch.autograd import Variable
class Gaussian(nn.Module):
def __init__(self):
self.a = nn.Parameter(torch.zeros(1))
self.b = nn.Parameter(torch.zeros(1))
self.c = nn.Parameter(torch.zeros(1))
def forward(self, x):
# unfortunately we don't have automatic broadcasting yet
a = self.a.expand_as(x)
b = self.b.expand_as(x)
c = self.c.expand_as(x)
return a * torch.exp((x - b)^2 / c)
module = Gaussian()
x = Variable(torch.randn(20))
out = module(x)
loss = loss_fn(out)
loss.backward()
# Now module.a.grad should be non-zero.
Thank you so much for your kind help! I still would like to ask few questions: (1) should we add super(Gaussian, self).__init__() after def __init__(self):? (2) After we add this new module to our network and start to train it, Do I only need to use
super(Gaussian, self).__init__()
def __init__(self):
optimizer = optim.SGD(net.parameters(), lr = 0.01)optimizer.zero_grad() output = net(input)loss = criterion(output, target)loss.backward()optimizer.step()
to update all the parameters?
a
b
c
nn.Parameter
.parameters()
Thank you so much for your help!
Now we have defined the Gaussian layer, I want to know how to use it in a network of several layers. For example,
class MyNet(nn.Module):
def __init__(self):
super(MyNet, self).__init__()
self.fc1 = nn.Linear(100, 20)
self.gauss = Gaussian()
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.gauss(x)
net = MyNet()
Is the above code correct? Or do we have to set up other things in order to use the newly-defined layer in other network?
the above is correct.
Can I define my own layer with a forward/backward function or I will have to define the layer as a Function so that I can define the backward function too?
You have no need to define the backward function yourself as long as the functions you are using to calculate the loss function is in PyTorch scope. You should definitely define a forward function just as shown by @jdhao.
By the way if we implement a layer and we have scipy or numpy operation inside of it Like in here, can it be run and accelerate on GPU?, or the layer just run on CPU because our numpy and scipy can not run on GPU?
you can add .cuda() to both the module and the input to see if the given example can run without any error.
.cuda()
input
@jdhao Yes it works I have tried it before. However I just want all operation works on GPU. In that example I think (please correct me if I am wrong) it still can run but the operation which run on GPU is just tensor operation like in here or some basic pytorch function. So when we use .cuda() it will have big communication cost between GPU and CPU. For example when I am trying manual convolution using python loop operation to pytorch tensor it needs so much time, compare with ready made pytorch convolution. I just want to know what I miss in extending pytorch, whether all operation even when using scipy declared in pytorch can really work on GPU or scipy process still works on CPU but the tensor on GPU which give big communication cost between CPU and GPU.
scipy
Most of the time we do not need to extend PyTorch using numpy. As long as you use the builtin method of Variable, you can only write forward method and backward gradient computation is handled by autograd. So using a composition of builtin Variable method to achieve what you want is more time-saving.
@herleeyandi, the portions of your code which use scipy will not be GPU-accelerated. Only operations on CUDA Torch tensors and Variables will be GPU-accelerated.
@colesbury I see, so how about if I want to create some functions which is GPU accelerated?, should I use CFFI which coded using CUDA C++ ?, Can you give me more hint
Yes, you can do that. Writing new CUDA kernels usually requires a lot of effort. If you can express your layer in terms of existing Tensor operations, then that’s usually a better way to get started. If you can’t do that, then you might have to write new kernels. | https://discuss.pytorch.org/t/how-to-define-a-new-layer-with-autograd/351 | CC-MAIN-2017-47 | refinedweb | 858 | 67.76 |
Hey guys I've just been playing around with allegro 5, and playing with examples from codingmadeeasy, this is an example from him, and It works fine when he compiles it. He is however using vs and i'm using codeblocks. The code is
[code]// C++ ALLEGRO 5 MADE EASY TUTORIAL 5 - FONT & TEXT// CODINGMADEEASY
#include <allegro5/allegro.h>#include <allegro5/allegro_native_dialog.h>#include <allegro5/allegro_ttf.h>#include <allegro5/allegro_font.h>
#define ScreenWidth 800#define ScreenHeight 600
int main(){ ALLEGRO_DISPLAY *display;
if(!al_init()) { al_show_native_message_box(NULL, NULL, "Error", "Could not initialize Allegro 5", NULL, ALLEGRO_MESSAGEBOX_ERROR); return -1; } display = al_create_display(ScreenWidth, ScreenHeight);
if(!display) { al_show_native_message_box(NULL, NULL, "Error", "Could not create Allegro 5 display", NULL, ALLEGRO_MESSAGEBOX_ERROR); return -1; }
// You generally want to do this after you check to see if the display was created. If the display wasn't created then there's // no point in calling this function al_set_new_display_flags(ALLEGRO_NOFRAME); al_set_window_position(display, 200, 100); al_set_window_title(display, "CodingMadeEasy");
al_init_font_addon(); al_init_ttf_addon();
ALLEGRO_FONT *font = al_load_font("font.ttf", 36, NULL); al_draw_text(font, al_map_rgb(44, 117, 255), ScreenWidth / 2, ScreenHeight / 2, ALLEGRO_ALIGN_CENTRE, "");
al_flip_display(); al_rest(10.0);
al_destroy_font(font); al_destroy_display(display);
return 0;}[/code]
The error given is ..
Assertion failed C:\ ... File:allegro-5.0.x/addons/font/text.cline 73
expression : Font
So my original though was maybe I needed to use my own font, So I downloaded it from urban fonts, placed it in project folder and set it up to use it, the font name is molten and was set accordingly and doesn't give the error when it pops up...but after a few seconds it crashes..
and advice/help appreciated.thanks for reading.
That program will shut down after it has been open for 10 seconds.
Also this site uses SGML/HTML/XML style tags. Like: <code>
Use <code>, not [code] to show code in your post.
If I'm not wrong, al_load_font() (Actually, al_load_ttf_font(), since you're using a truetype font) returns NULL on error.Check if variable font isn't null after loading. If it is, a problem happened while loading it. You can try changing the folder it is placed. Also, try checking if the font's name(and extension) is exactly the same of what you're trying to load.
Thanks for the quick response, I double checked to make sure I had the extension and name correct, both are
As far as checking the value of font, I tried using cout to print its value to the console, however nothing comes out
for that you can just test it against NULL.
ALLEGRO_FONT *font = al_load_font("font.ttf", 36, NULL);
if(font == NULL) return ERR_CANT_LOAD_FONT // fictitious error constant. You might want to return 1 or any other non-zero value
alternatively,
ALLEGRO_FONT *font = al_load_font("font.ttf", 36, NULL);
if(!font) return ERR_CANT_LOAD_FONT // fictitious error constant. You might want to return 1 or any other non-zero value
doesn't give the error when it pops up
You mean the display or the text? I mean, is the text showing, or there's only a black window?
Sorry, to clarify what I meant by nothing pops up, is that there is no text.
The window is white though, and I have it set so a console will appear for purposes of debugging, but more specifically I meant that it won't print anything to console, its entirely blank when the program crashes.
Thanks!
If you have a console, you should be able to see
#include <stdio.h> //for fprintf()
#include <stdlib.h> //for exit()
if(font == NULL)
{
fprintf(stderr,"Can't load font\n");
exit(1);
}
That said, if it can't load the font it might be the compiler changing the directory, you can get around this with al_get_standard_path(ALLEGRO_EXENAME for the reply.
I added in this.
But no change is seen. It still doesn't print to console and crashes
Thanks again for reply
Two things:
1. You need to check the return result of al_load_font() IMMEDIATELY after calling it. Currently, you don't check the return until after your program has run its course.
2. al_set_new_display_flags() has to be called before calling al_create_display(). This is why the command is called "new display flags", since it only affects new displays created afterwards.
--- Kris Asick (Gemini)---
I can't believe I didn't catch that I even put it after it was destroyed, that was stupid of me. I have rectified this so my code is now
And what happened?
You won't see fprintf on Windows unless you build a special way. Try using al_show_native_message_box instead.
I used a messagebox instead of printf or cout, however it doesn't load. It comes right after font is declared and set.
You could try passing an absolute path to your font. If that fails, you probably have a faulty installation of Alleg
Something else I noticed: al_get_standard_path() doesn't actually change or set path information. It creates and returns a pointer to an ALLEGRO_PATH object. This means you will have a memory leak if you don't put the results of the call into an ALLEGRO_PATH pointer, plus if you actually want to use an ALLEGRO_PATH object to properly obtain a file location, you need to use combinations of al_set_path_filename() and al_path_cstr(). Here's the function I wrote to simplify loading textures from a textures folder off of my main game folder:
ALLEGRO_BITMAP *VZ_LoadBitmap (const char *filepath, const char *filename, int bitmap_flags)
{
// This is merely a function to simplify bitmap loading. Return values are identical to al_load_bitmap()
ALLEGRO_BITMAP *tempbmp;
al_destroy_path(temppath); temppath = al_clone_path(gamepath); al_append_path_component(temppath,filepath); al_set_path_filename(temppath,filename);
al_set_new_bitmap_flags(bitmap_flags);
tempbmp = al_load_bitmap(al_path_cstr(temppath,ALLEGRO_NATIVE_PATH_SEP));
return tempbmp;
}
Note that "temppath" and "gamepath" are defined as global pointers to ALLEGRO_PATH objects and that gamepath has been set at the start of the program using al_get_standard_path(). Also, the first al_destroy_path(temppath) call only works because I specifically set all pointers I use to NULL at the start of my programs, otherwise this would crash everything.
It also may look like my call to al_path_cstr() is a memory leak, but the pointer is actually tracked by the ALLEGRO_PATH object so I don't have to worry about it.
Thank you for the detailed post! Could I ask for a little more detail on how to use it though? I'm in a hurry so forgive my quick post, but you said linking to an allegro program file, which I don't understand, I installed it by drag and drop the bin include and lib folders.
I'm sorry for the hasty reply but College + work = pain in the ass >_<
Thanks again
Uhh... slow down and actually have time to read stuff before you respond again. I never said anything like that... *reads up* ...doesn't look like anyone else did either.
Sorry about the hasty reply. But thanks a lot for your excerpt.
I'm pretty new to allegro and I won't lie that looks pretty scary..But id be a horrible programmer if I let everything scare me away
So from what youve told me I'm wanting to do something like this..
string *path = al_path_cstr("c:\...";
//then use
al_get_standard_path(*path);
Thanks again for reading
Uh... No.
The following is essentially what you need to do to work with filenames and paths with Allegro's built-in functions. Use the Allegro Manual to get information on the usage of these functions:
1. Create a pointer to an ALLEGRO_PATH object.2. Use al_get_standard_path() to assign a path to your ALLEGRO_PATH object.3. Use al_set_path_filename() to set the name of the file you want to load in your ALLEGRO_PATH object.4. When you're ready to load your file, use al_path_cstr() to acquire a string pointer for use directly in a loading command, like al_load_font(). The string returned is stored and tracked by the ALLEGRO_PATH object that generated it, so you can treat the result like a constant instead of as an object that needs to be destroyed.5. When you're done with your ALLEGRO_PATH object, call al_destroy_path() on it.
I'm having trouble with step 3 of that. so far I have this<code> ALLEGRO_PATH *myPath = al_get_standard_path(ALLEGRO_EXENAME_PATH); al_set_path_filename(*myPath, const char *filename = "drakon.ttf");</code.>
so in the manual it gives this
void al_set_path_filename(ALLEGRO_PATH *path, const char *filename)
I'm at a loss here, I've already created my allegro path object, and declared my al_set_path_filename, so why created a function to set them again?
Thanks
I think you might need to study up on C/C++ a bit more. Your code there shows a lack of basic C/C++" --
More-so a lack of pointer knowledge. *'s indicate that you either want to create a pointer to a variable, that you want to grab the information from a pointed variable, or that a function wants a pointer as one of its arguements.
Like:
int *PointerToInt;
ThisFunctionWantsAPointer(PointerToInt);
ThisFunctionWantsAnInt(*PointerToInt);
Or, you can grab the pointer of a non-pointer variable using an ampersand, like so:
int MyInt;
ThisFunctionWantsAnInt(MyInt);
ThisFunctionWantsAPointer(&MyInt);
I'm fine with pointers, but this explanation really confused me.
To make it simple, I'll just state it as is:Pointer is an integer that contains an absolute address to some place in a memory, no more than that. Type of a pointer only helps to know how much data is supposed to be there where pointer points to.
Being serious is stupid, I'm done with it.
More-so a lack of pointer knowledge.
You might want to check his code again:
ALLEGRO_PATH *myPath = al_get_standard_path(ALLEGRO_EXENAME_PATH);
al_set_path_filename(*myPath, const char *filename = "drakon.ttf");
Particularly the second argument to al_set_path_filename. | https://www.allegro.cc/forums/thread/611310/968741 | CC-MAIN-2018-39 | refinedweb | 1,608 | 54.93 |
On Sun, 06 Jan 2019 20:19:59 +0000, Rubn wrote: > You can declare functions inside of functions in D. You weren't forward > declare grow() in the module namespace, so much as you were forward > declaring a new function grow.
Unfortunately, you can't do forward declarations for nested functions. If you could, that would be handy for mutual recursion: void main() { int a(int i); int b(int i) { if (i > 0) return a(i - 1); return abs(i); } int a(int i) { if (i % 2 == 0) return b(i - 2); return b(i - 1); } writeln(a(12)); } Unfortunately, Error: declaration a is already defined And omitting the forward declaration gets you: Error: undefined identifier a | https://www.mail-archive.com/digitalmars-d-learn@puremagic.com/msg92695.html | CC-MAIN-2021-21 | refinedweb | 118 | 53.24 |
This language has been replaced by version 0.95 - see the EARL Overview for more details.
EARL 0.9 was the first stable version of the language, a kind of beta-version for simple hacking. It was first suggested as a stable version by Daniel Dardailler in an email to w3c-wai-er-ig.
An RDF schema for EARL 0.9 is available (also in Notation3). The persistent namespace for this version of the language is:-
Please note the hash (#) on the end (see also: Cool URIs Don't Change).
We have produced three examples of EARL 0.9 (in Notation3). Also available are some images of the data model.$Id: v0.9.html,v 1.2 2001/05/27 00:52:27 spalmer Exp $ | http://www.w3.org/2001/03/earl/v0.9.html | CC-MAIN-2015-35 | refinedweb | 125 | 78.55 |
"try" Code and "catch" Exceptions4:47 with Jeremy McLain
We'll learn how to handle exceptions using the *try/catch* construct.
Clear, compile, and run:
clear && mcs Program.cs && mono Program.exe
Start REPL:
csharp
Quit REPL:
quit
Code:
using System;

namespace Treehouse.FitnessFrog
{
    class Program
    {
        static void Main()
        {
            int runningTotal = 0;
            bool keepGoing = true;

            while (keepGoing)
            {
                // Prompt user for minutes exercised
                Console.Write("Enter how many minutes you exercised or type \"quit\" to exit: ");
                string entry = Console.ReadLine();

                if (entry == "quit")
                {
                    keepGoing = false;
                }
                else
                {
                    try
                    {
                        // Add minutes exercised to total
                        int minutes = int.Parse(entry);
                        runningTotal = runningTotal + minutes;
                    }
                    catch (FormatException)
                    {
                        Console.WriteLine("That is not valid input.");
                        continue;
                    }

                    // Display total minutes exercised to the screen
                    Console.WriteLine("You've entered " + runningTotal + " minutes.");
                }
                // Repeat until user quits
            }

            Console.WriteLine("Goodbye");
        }
    }
}
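Given the behavior described in the video, a run of the finished program might look like this (the session below is illustrative, not captured from the video):

```text
Enter how many minutes you exercised or type "quit" to exit: 30
You've entered 30 minutes.
Enter how many minutes you exercised or type "quit" to exit: cheese
That is not valid input.
Enter how many minutes you exercised or type "quit" to exit: quit
Goodbye
```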
- 0:00
So in the last video, we discovered that int.Parse throws an exception
- 0:04
if we give it a string that doesn't contain a number.
- 0:07
Consequently, our entire program crashes.
- 0:09
Not good.
- 0:11
One way we can resolve this is,
- 0:12
we could check to see if the value of the entry variable is a number or not.
- 0:17
If it is, we go ahead and call Parse.
- 0:19
If it isn't, we tell the user to try again, just like we did up here.
- 0:24
But how do we determine if entry contains a number or not?
- 0:28
One way to determine that is to just call Parse.
- 0:31
If it throws an exception, then we know the entry doesn't contain a number.
- 0:35
Instead of checking the entry variable before we call Parse,
- 0:38
we can just call Parse right away and then deal with the consequences.
- 0:42
Ever heard the expression, it's easier to ask for forgiveness than for permission?
- 0:47
Exception handling follows that principle.
- 0:49
You see, getting an exception is not necessarily a bad thing.
- 0:54
Exceptions are thrown all the time in programs.
- 0:56
In fact, it's an excellent way to do input validation in C#.
- 1:01
Not handling the exception, and allowing your program to crash, is a bad thing.
- 1:06
It's okay to call a method knowing that there is a chance it will
- 1:09
throw an exception, so long as you're prepared to deal with it.
- 1:12
So how do we handle an exception that's thrown by a method?
- 1:16
For that, we need a new language construct called the try/catch.
- 1:21
I'll write out the basic structure here.
- 1:25
We place the code that might throw the exception here,
- 1:28
between these two curly braces.
- 1:30
In these parentheses, we put the name of the exception that we're expecting.
- 1:34
We got that from the error message.
- 1:37
We don't need to type System because we have a using directive for it.
- 1:41
We write the code that should be run when this exception happens,
- 1:44
down here between these two curly braces.
- 1:46
In our case,
- 1:47
we want to inform the user that what they just entered didn't make any sense.
- 1:52
When doing input validation, you should try your best to tell the user why
- 1:56
the input was incorrect and how to fix it.
- 1:58
Let's just say, that is not valid input.
- 2:02
Then we'll put another continue statement here, so
- 2:04
it'll go back to the beginning of the loop and
- 2:06
print this line up here, which reminds the user about their options.
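Pulled out of the loop, the try/catch shape just described can be sketched on its own. This is a minimal illustration, not the course code: the method name `ReadMinutes` and the `-1` "not a number" signal are inventions for this sketch; only the `int.Parse` / `FormatException` pairing comes from the lesson.

```csharp
using System;

class ParseDemo
{
    // Try to parse the entry the same way the lesson does:
    // call int.Parse, and catch the FormatException that
    // non-numeric input throws.
    public static int ReadMinutes(string entry)
    {
        try
        {
            return int.Parse(entry);
        }
        catch (FormatException)
        {
            Console.WriteLine("That is not valid input.");
            return -1; // invented signal value meaning "not a number"
        }
    }

    static void Main()
    {
        Console.WriteLine(ReadMinutes("30"));     // parses fine
        Console.WriteLine(ReadMinutes("cheese")); // exception is caught
    }
}
```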
- 2:11
Now, we've introduced a compiler error.
- 2:13
You see, variables that are declared within curly braces can
- 2:17
only live inside those curly braces.
- 2:20
So we can only use a variable inside the curly braces in which it's declared.
- 2:24
This is called the variable's scope.
- 2:26
For example, we can use runningTotal anywhere in the Main method,
- 2:30
below line nine, because it's declared in line nine inside the curly braces that
- 2:34
form the body of Main.
- 2:35
Entry is declared inside the while loop, so
- 2:38
it can only be accessed inside the while loop.
- 2:42
When the program gets to the bottom of the while loop, the entry variable is
- 2:45
destroyed and a new one is declared back up here once the program gets there again.
- 2:50
Minutes is declared inside this tri block, so
- 2:53
it can only be accessed between these two curly braces.
- 2:56
As soon as we get down here to this curly brace,
- 2:59
then poof, the minutes variable no longer exists.
- 3:03
If we tried to compile this code right now, we'd get a compiler error.
- 3:06
Because we're using minutes outside the curly braces down here on line 34.
- 3:11
Let's see what this error looks like.
- 3:18
Oh, forgot to save.
- 3:28
See? It says on lines 34, 36, 39, 43, 47 and
- 3:33
57, the name minutes does not exist in the current context.
- 3:39
So what we need to do is move this code
- 3:41
into the tri block where minutes does exist.
- 3:46
This actually makes a lot of sense, because we have no use for
- 3:49
all this code unless minutes has a number in it.
- 3:53
And the only way that minutes has a number in it is if the Parse method
- 3:56
completes successfully.
- 3:58
Now, this will compile.
- 4:02
See, no compiler errors.
- 4:05
Variable scope can definitely be tricky to understand at first.
- 4:09
But, we'll get lots of practice working with variables.
- 4:12
Always check both where your variables are declared and used in your code.
- 4:17
Now let's run the program again and see what we get.
- 4:22
Our program still accepts numbers, but if we type in cheese here,
- 4:25
we see the message that we created and are then prompted again.
- 4:30
Being able to gracefully handle any type of input from the user
- 4:33
is what makes a program stable and robust against crashes.
- 4:37
Often times, this means dealing with exceptions in a well thought out way.
- 4:42
I'll go a lot more into creating, throwing and handling exceptions in later courses. | https://teamtreehouse.com/library/try-code-and-catch-exceptions | CC-MAIN-2018-22 | refinedweb | 1,139 | 82.34 |
KDEUI
#include <kdatetimewidget.h>
Detailed Description
A combination of a date and a time selection widget.
This widget can be used to display or allow user selection of date and time.
- See Also
- KDateWidget
Definition at line 39 of file kdatetimewidget.h.
Constructor & Destructor Documentation
Constructs a date and time selection widget.
Definition at line 37 of file kdatetimewidget.cpp.
Constructs a date and time selection widget with the initial date and time set to
datetime.
Definition at line 44 of file kdatetimewidget.cpp.
Destructs the date and time selection widget.
Definition at line 55 of file kdatetimewidget.cpp.
Member Function Documentation
Returns the currently selected date and time.
Changes the selected date and time to
datetime.
Definition at line 76 of file kdatetimewidget.cpp.
Emitted whenever the date or time of the widget is changed, either with setDateTime() or via user selection.
Property Documentation
Definition at line 42 of file kdatetimewidget.h.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2013 The KDE developers.
Generated on Thu Aug 22 2013 22:54:48 by doxygen 1.8.2 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | http://api.kde.org/4.10-api/kdelibs-apidocs/kdeui/html/classKDateTimeWidget.html | CC-MAIN-2015-32 | refinedweb | 199 | 52.56 |
Week 10 -
Machine design Output Devices
Assignment
This week I am planning to drive a small servo with a fabkit-board
Micro Servo Motor 9G SG90 from eBay for 1 $.
Datasheet for micro servo motor 9G SG90: Click here
Make a fabkit board
Think about a Output Device for final project
After some time I came across the Neopixel LEDs from Adafruit and found this system very interesting.
Link to Output Devices
Link to Adafruit Neopixels
Link to Adafruit-NeoPixel-Ring
First I am milling a fabkit board (based on satshakit) and will then decide what would be best for me.
I am very interested in building the Neopixels. Other possibilities would be, as already mentioned, a pump or possibly a fan in order to keep the plants in motion and to ensure a fresh air supply. I've always wanted to have my own development board like a Raspberry Pi or Arduino. Of course, a self-built one like the fabkit board or a satshakit is even cooler.
Neopixelboardmodul first trys:
Unfortunately, I found that I don't particularly like this vertical arrangement.
I also had problems when designing the PCB. If I use LEDs as output, I will create smaller modular elements. But as practice with Eagle it was not a waste of time.
Fabkit-board development:
As mentioned above in this documentation, we decided to use an ATmega-based system to experiment more easily with outputs and inputs. With such a board, we have many more possibilities to experiment:
On the following photo is the eagle exported partlist for the board:
I have already connected the board to 5 V, and I have also measured it with the multimeter and could not find any errors.
Flash the ATmega with a example programm
To check the function of the board, I first changed the makefile and the main.c file from the embedded programming week (week 8) to make the LED blink.
In the following picture you can see how I connected my FabISP to the fabkit board to program it:
OK, here starts the actual output-device documentation. After thinking about the Neopixels, I rejected this idea, since I would like to use a different lighting for the final project.
Add a Micro Servo Motor 9G SG90 to the Fabkit
First, I had to think about how to get a suitable library for the Arduino software, since I noticed that my board (the ATmega328) did not show up under Tools -> Board after installing the software.
1. Open the Arduino preferences window and add the board-package URL to the "Additional Boards Manager URLs" list:
2. Now go to "Tools" -> "Board" -> "Boards Manager", look for "Barebones ATmega Chips" and install the latest version. After you have done this, a new section called "ATmega Microcontrollers" appears. There I choose the ATmega328/328P. As processor I choose "ATmega328" as well and set the clock to 16 MHz (external).
3. Lastly, the bootloader needs to be burned. The bootloader is the start program for the chip. Go to "Tools" -> "Burn Bootloader".
The Arduino bootloader is something like the operating system on the chip. It is required for e.g. an ATmega328 IC (as on the Arduino UNO) or the older ATmega168 IC on an Arduino board.
The Library and instructions I found on
github.com/carlosefr/atmega
Because I am a newbie in electronics and programming, I decided to drive a small servo motor with my self-made board:
Micro Servo Motor 9G SG90 from eBay for 1 $.
The following picture I found in the Arduino forum at
forum.arduino.cc
It was very useful for me to understand the ATmega328 pin-out.
On the following photo you can see the Fabkit, my FabISP, a breadboard and a small servo
In the following video you can see the servo motor in action. For the first test I took an example sketch from the Arduino software. For servo motors there are the examples "Knob" and "Sweep"; in my example it is the sketch "Sweep".
Fabkit V 1 Servo test:
Making another Fabkit (my Fabkit V.2)
Unfortunately, the first version of my fabkit broke while I was experimenting with it; I may have used the wrong pins and destroyed the chip. After every further programming attempt I got the following message in the terminal, and also in the Arduino software: initialization failed, rc = -1:
I designed another board, this time with an ISP header in the board design:
This time I use only 1 Capacitor (100 N) and 2 Resistors (10KΩ and 470Ω)
Partlist exported from eagle:
Schematic in eagle:
Layouting in eagle:
Board Layout from eagle:
Milled and polished pcb:
Lacquered pcb with solder-lacquer:
Place the components with the pick and place machine from LPKF:
Ready equiped pcb-board (without pin-header sv1 and sv2):
Programming the board:
Making a second board:
To check these steps again in more detail check out my Week 4 - electronics production assignment.
Fabkit V 2 Servo test:
Sketch for the Servo-test:
#include <Servo.h>

Servo myservo;  // create servo object to control a servo
int pos = 0;    // variable for the servo position

void setup() {
  myservo.attach(9);  // attaches the servo on pin 9 (PWM) to the servo object
}

void loop() {
  for (pos = 0; pos <= 180; pos += 1) {  // goes from 0 degrees to 180 degrees
    myservo.write(pos);                  // tell servo to go to position 'pos'
    delay(15);                           // waits 15 ms for the servo to reach the position
  }
  for (pos = 180; pos >= 0; pos -= 1) {  // goes back from 180 degrees to 0 degrees
    myservo.write(pos);
    delay(15);
  }
}
Connect an LCD display
At the end I connected an LCD display and output the text "Fabacademy 2017 M.Kellner". For this I used the LiquidCrystal library. The datasheet of the LC display of type 1602A can be found here
Sketch for the LC-Display:
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);             // 16 characters, 2 lines
  lcd.print("Fabacademy 2017");
  lcd.setCursor(0, 1);          // first character of the second line
  lcd.print("M.Kellner");
}

void loop() {
}
Postscript/Additions:
For the servo to work, you need to connect it to a pin which has PWM enabled, or write code that can switch a pin at the frequency required by the servo. That is why pin 9 is used: the servo library can work on these pins. Below I explain the pin connections for the servo and the LCD, and what the code does.
A servo was controlled with a self-made microcontroller. Now I need to explain the pin connections for the servo.
Servo: Construction and connection A servo consists of a motor control unit (1), an electric motor (2), a gearbox (3) and a potentiometer for position determination (4). All components are housed in a robust housing.
On the following sketch you can see the wiring with an Arduino. Here I transform the pins again to my ATmega328 P-AU with help of my transformation-sheet
What is PWM?
Pulse width modulation (PWM) is a modulation type in which a technical quantity (e.g. electrical voltage) switches between two values. At a constant frequency, the duty cycle of a square-wave signal is modulated, that is, the width of the pulses forming it.
In pulse width modulation, a square-wave signal with a fixed frequency is taken and the width of the respective pulses is varied. It is best explained by a graphic:
The distance between two successive pulses is called the period duration, the width of the active pulse (a) is the pulse width. The average voltage applied to this pin is given by the ratio of a to (a + b). If a is only half as wide as the period duration, then I have only 50% of the signal voltage available on average. This ratio of pulse width to period duration is also referred to as the "duty cycle" (the time in which the pin is "in service"). As a graphic, this looks like this:
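The relation described above, average voltage equals the signal voltage times a/(a+b), can be computed directly. A small Python sketch with hypothetical values (the function name and numbers are mine, for illustration only):

```python
def pwm_average_voltage(v_high, pulse_width, period):
    """Average voltage of a PWM signal: V_high * duty cycle, duty = a / (a + b)."""
    duty_cycle = pulse_width / period
    return v_high * duty_cycle

# 5 V signal, pulse width a = 10 ms, period a + b = 20 ms -> 50 % duty cycle
print(pwm_average_voltage(5.0, 10.0, 20.0))  # -> 2.5
```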
This pulse width modulation can be used to control servos. Here, the small microcontroller of the servo interprets the pulse width as an angle for the servo arm. So you can simply connect two of the three cables from a servo to plus and ground, and the signal cable to a PWM pin, and the servo rotates between 0 and 180 degrees depending on the value we write with myservo.write().
To control the servo with my fabkit, it is best to use the servo library Servo.h. It is already included in the Arduino software. The library is integrated at the start of the program:
#include < Servo.h >
A servo object must then be created:
Servo myservo;
In the setup method, the servo is initialized:
myservo.attach(9);
The number in the brackets indicates the digital pin of the Arduino to which the servo is connected. The servo can now very easily be moved to a certain angle from the loop method with:
myservo.write(position);
The angle (0 - 180°) is indicated in the brackets.
The setup() function is called when a sketch starts. Use it to initialize variables, pin modes, start using libraries, etc. The setup function will only run once, after each powerup or reset of the board.
void setup()
After creating a setup() function, which initializes and sets the initial values, the loop() function loops consecutively, allowing your program to change and respond. Use it to actively control the board.
void loop()
Here is the sketch I was using for my fabkit (it's the example sketch from the Arduino IDE called "Sweep"):

#include <Servo.h>  // load the servo library of the Arduino IDE

Servo myservo;  // create servo object to control a servo
int pos = 0;    // variable for the servo position

void setup() {
  myservo.attach(9);  // attaches the servo on pin 13/PWM B1/9 to the servo object. That is pin 5 on SV2 on my Fabkit.
}

void loop() {
  for (pos = 0; pos <= 180; pos += 1) {  // the servo goes from 0 to 180 degrees in steps of 1 degree
    myservo.write(pos);                  // tell servo to go to position in variable 'pos'
    delay(15);                           // waits 15 ms for the servo to reach the position
  }
  for (pos = 180; pos >= 0; pos -= 1) {  // and back from 180 to 0 degrees
    myservo.write(pos);
    delay(15);
  }
}
Explain the lcd wiring and code:
An LCD was controlled with a self-made microcontroller. Now I need to explain the pin connections for the LCD:
Again I translate all the Arduino pins to my self-made fabkit with the illustration above:
The rotary control is used to adjust the contrast of the LCD. The LED backlight is supplied with 5V as shown in the sketch. Since the label on the LCD would not be recognized in the sketch, it is not shown.
In the program, the library (which, by the way, ships with the Arduino software) is included:
#include <LiquidCrystal.h>
Now the LiquidCrystal object is created with the name lcd. The digital output pins that have been used are given as parameters:
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
The display configuration is transferred in the setup. The two parameters represent the character number of a line and the number of lines. In this example, 16 characters and 2 lines:
lcd.begin(16, 2);
With print, messages can be written to the display:
lcd.print("Fabacademy 2017");
The setCursor command specifies the character to be displayed in the line and the line in which the text is to be output:
lcd.setCursor(0, 1);
With print, messages can be written to the display again:
lcd.print("M.Kellner");
Here is the sketch I was using for my fabkit (based on the example sketch from the Arduino IDE's LiquidCrystal library):

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);             // 16 characters, 2 lines
  lcd.print("Fabacademy 2017");
  lcd.setCursor(0, 1);          // first character of the second line
  lcd.print("M.Kellner");
}

void loop() {
}
Software I used:
| http://archive.fabacademy.org/fabacademy2017/fablabbottrophrw/students/64/week10.html | CC-MAIN-2021-49 | refinedweb | 1,803 | 68.91 |
Hello Community,
How can I do this with ScriptRunner, or in any other way?
Scenario:
I have a text custom field "project" with a limit of 16 characters or less (done with ScriptRunner Behaviours). When the user enters input containing any spaces, e.g. "project name", it should throw an error for the space in the text field.
The final input should be only one word, not two, e.g. "projectname".
Hi @Antoine Berry, thank you for the script. Should I use Behaviours to execute this script, or another option?
You can use the script as is; since the getValue() method returns a string, you can use contains().
Yes, Behaviours is the right option. Make sure you have updated the field id, or use
getFieldById(getFieldChanged())
instead.
Antoine
Just adding to this, I used the script below to check whether the Summary field has spaces (Summary is not a custom field, and the answer above was not working for me). It returns true/false for a Simple Scripted Validator. Note the exclamation point in front.
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.IssueTypeManager
!issue.summary.contains(" ")
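Setting Jira aside, the rule being enforced here, at most 16 characters and no spaces, is a simple predicate. A Python illustration of the same check (this is not ScriptRunner/Groovy code; the function name is mine):

```python
def valid_project_name(value):
    """True when the value is at most 16 characters and contains no spaces."""
    return len(value) <= 16 and " " not in value

print(valid_project_name("projectname"))   # -> True
print(valid_project_name("project name"))  # -> False (contains a space)
```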
NLP From Scratch: Classifying Names with a Character-Level RNN
Author: Sean Robertson
We will be building and training a basic character-level RNN to classify words. This tutorial, along with the following two, shows how to preprocess data for NLP modeling "from scratch", in particular not using many of the convenience functions of torchtext, so you can see how preprocessing for NLP modeling works at a low level.
A character-level RNN reads words as a series of characters - outputting a prediction and “hidden state” at each step, feeding its previous hidden state into each next step. We take the final prediction to be the output, i.e. which class the word belongs to.
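Stripped of the network itself, this per-character loop is just a fold over the string: feed each character and the previous hidden state into a step function, keep the new hidden state, and take the last output as the prediction. A toy Python sketch of only this control flow (the names and the counting "step" are invented for illustration):

```python
def run_rnn(step, chars, h0):
    """Feed each character in turn, threading the hidden state through."""
    h = h0
    out = None
    for ch in chars:
        out, h = step(ch, h)
    return out  # the final prediction

# Toy stand-in for the network: just counts the characters it has seen.
step = lambda ch, h: (h + 1, h + 1)
print(run_rnn(step, "Hinton", 0))  # -> 6
```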
Specifically, we’ll train on a few thousand surnames from 18 languages of origin, and predict which language a name is from based on the spelling:
$ python predict.py Hinton
(-0.47) Scottish
(-1.52) English
(-3.57) Irish

$ python predict.py Schmidhuber
(-0.19) German
(-2.48) Czech
(-2.68) Dutch

It would also be useful to know about RNNs and how they work:
- The Unreasonable Effectiveness of Recurrent Neural Networks shows a bunch of real life examples
- Understanding LSTM Networks is about LSTMs specifically but also informative about RNNs in general
Preparing the Data
Included in the
data/names directory are 18 text files named as
“[Language].txt”. Each file contains a bunch of names, one name per
line, mostly romanized (but we still need to convert from Unicode to
ASCII).
We’ll end up with a dictionary of lists of names per language,
{language: [names ...]}. The generic variables “category” and “line”
(for language and name in our case) are used for later extensibility.
from __future__ import unicode_literals, print_function, division
from io import open
import glob
import os

def findFiles(path): return glob.glob(path)

print(findFiles('data/names/*.txt'))

import unicodedata
import string

all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)

# Turn a Unicode string to plain ASCII
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )

print(unicodeToAscii('Ślusàrski'))

# Build the category_lines dictionary, a list of names per language
category_lines = {}
all_categories = []

# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
    return [unicodeToAscii(line) for line in lines]

for filename in findFiles('data/names/*.txt'):
    category = os.path.splitext(os.path.basename(filename))[0]
    all_categories.append(category)
    lines = readLines(filename)
    category_lines[category] = lines

n_categories = len(all_categories)
Out:
['data/names/French.txt', 'data/names/Czech.txt', 'data/names/Dutch.txt', 'data/names/Polish.txt', 'data/names/Scottish.txt', 'data/names/Chinese.txt', 'data/names/English.txt', 'data/names/Italian.txt', 'data/names/Portuguese.txt', 'data/names/Japanese.txt', 'data/names/German.txt', 'data/names/Russian.txt', 'data/names/Korean.txt', 'data/names/Arabic.txt', 'data/names/Greek.txt', 'data/names/Vietnamese.txt', 'data/names/Spanish.txt', 'data/names/Irish.txt'] Slusarski
Now we have
category_lines, a dictionary mapping each category
(language) to a list of lines (names). We also kept track of
all_categories (just a list of languages) and
n_categories for
later reference.
print(category_lines['Italian'][:5])
Out:
['Abandonato', 'Abatangelo', 'Abatantuono', 'Abate', 'Abategiovanni']
Turning Names into Tensors
Now that we have all the names organized, we need to turn them into Tensors to make any use of them.
To represent a single letter, we use a “one-hot vector” of size
<1 x n_letters>. A one-hot vector is filled with 0s except for a 1
at index of the current letter, e.g.
"b" = <0 1 0 0 0 ...>.
To make a word we join a bunch of those into a 2D matrix
<line_length x 1 x n_letters>.
That extra 1 dimension is because PyTorch assumes everything is in batches - we’re just using a batch size of 1 here.
import torch

# Find letter index from all_letters, e.g. "a" = 0
def letterToIndex(letter):
    return all_letters.find(letter)

# Just for demonstration, turn a letter into a <1 x n_letters> Tensor
def letterToTensor(letter):
    tensor = torch.zeros(1, n_letters)
    tensor[0][letterToIndex(letter)] = 1
    return tensor

# Turn a line into a <line_length x 1 x n_letters>,
# or an array of one-hot letter vectors
def lineToTensor(line):
    tensor = torch.zeros(len(line), 1, n_letters)
    for li, letter in enumerate(line):
        tensor[li][0][letterToIndex(letter)] = 1
    return tensor

print(letterToTensor('J'))

print(lineToTensor('Jones').size())
Out:
tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
         0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
         0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
         0., 0., 0.]])
torch.Size([5, 1, 57])
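The one-hot encoding itself needs nothing from torch; here is the same idea as a standalone plain-Python sketch (it redefines all_letters so it runs on its own):

```python
import string

all_letters = string.ascii_letters + " .,;'"

def one_hot(letter):
    vec = [0.0] * len(all_letters)       # all zeros...
    vec[all_letters.find(letter)] = 1.0  # ...except a 1 at the letter's index
    return vec

v = one_hot('J')
print(len(v), v.index(1.0))  # -> 57 35
```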
Creating the Network
Before autograd, creating a recurrent neural network in Torch involved cloning the parameters of a layer over several timesteps. The layers held hidden state and gradients which are now entirely handled by the graph itself. This means you can implement a RNN in a very “pure” way, as regular feed-forward layers.
This RNN module (mostly copied from the PyTorch for Torch users tutorial) is just 2 linear layers which operate on an input and hidden state, with a LogSoftmax layer after the output.
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()

        self.hidden_size = hidden_size

        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(input_size + hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(combined)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)

n_hidden = 128
rnn = RNN(n_letters, n_hidden, n_categories)
To run a step of this network we need to pass an input (in our case, the Tensor for the current letter) and a previous hidden state (which we initialize as zeros at first). We’ll get back the output (probability of each language) and a next hidden state (which we keep for the next step).
input = letterToTensor('A')
hidden = torch.zeros(1, n_hidden)

output, next_hidden = rnn(input, hidden)
For the sake of efficiency we don’t want to be creating a new Tensor for
every step, so we will use
lineToTensor instead of
letterToTensor and use slices. This could be further optimized by
pre-computing batches of Tensors.
input = lineToTensor('Albert')
hidden = torch.zeros(1, n_hidden)

output, next_hidden = rnn(input[0], hidden)
print(output)
Out:
tensor([[-2.8767, -2.8540, -3.0107, -2.8790, -2.8120, -2.8649, -2.8286, -2.9248, -2.9889, -2.9850, -2.9222, -2.8192, -2.7689, -2.9252, -2.8669, -2.8312, -2.9903, -2.9198]], grad_fn=<LogSoftmaxBackward>)
As you can see the output is a
<1 x n_categories> Tensor, where
every item is the likelihood of that category (higher is more likely).
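Because the last layer is a LogSoftmax, those "likelihoods" are log-probabilities; exponentiating them recovers probabilities that sum to 1. A small stdlib illustration with a made-up 3-category distribution (not an actual model output):

```python
import math

probs_true = [0.7, 0.2, 0.1]                    # a made-up distribution
log_probs = [math.log(p) for p in probs_true]   # what a LogSoftmax would emit
recovered = [math.exp(lp) for lp in log_probs]  # back to probabilities

print([round(p, 3) for p in recovered])  # -> [0.7, 0.2, 0.1]
print(round(sum(recovered), 6))          # -> 1.0
```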
Training
Preparing for Training
Before going into training we should make a few helper functions. The
first is to interpret the output of the network, which we know to be a
likelihood of each category. We can use
Tensor.topk to get the index
of the greatest value:
def categoryFromOutput(output):
    top_n, top_i = output.topk(1)
    category_i = top_i[0].item()
    return all_categories[category_i], category_i

print(categoryFromOutput(output))
Out:
('Korean', 12)
We will also want a quick way to get a training example (a name and its language):
import random

def randomChoice(l):
    return l[random.randint(0, len(l) - 1)]

def randomTrainingExample():
    category = randomChoice(all_categories)
    line = randomChoice(category_lines[category])
    category_tensor = torch.tensor([all_categories.index(category)], dtype=torch.long)
    line_tensor = lineToTensor(line)
    return category, line, category_tensor, line_tensor

for i in range(10):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    print('category =', category, '/ line =', line)
Out:
category = Dutch / line = Koning category = Scottish / line = King category = Arabic / line = Dagher category = Korean / line = Choi category = Vietnamese / line = Dinh category = Italian / line = Leoni category = Dutch / line = Vennen category = Italian / line = Parrino category = Spanish / line = Mas category = Portuguese / line = Garcia
Training the Network
Now all it takes to train this network is show it a bunch of examples, have it make guesses, and tell it if it’s wrong.
For the loss function
nn.NLLLoss is appropriate, since the last
layer of the RNN is
nn.LogSoftmax.
criterion = nn.NLLLoss()
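What NLLLoss computes, given log-softmax input, is just the negative log-probability that the network assigned to the true class. By hand, with illustrative numbers (not from the actual run):

```python
import math

log_probs = [math.log(0.7), math.log(0.2), math.log(0.1)]  # fake log-softmax output
target = 0                                                  # index of the true class
nll = -log_probs[target]
print(round(nll, 4))  # -> 0.3567
```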
Each loop of training will:
- Create input and target tensors
- Create a zeroed initial hidden state
- Read each letter in and
- Keep hidden state for next letter
- Compare final output to target
- Back-propagate
- Return the output and loss
learning_rate = 0.005 # If you set this too high, it might explode. If too low, it might not learn

def train(category_tensor, line_tensor):
    hidden = rnn.initHidden()

    rnn.zero_grad()

    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)

    loss = criterion(output, category_tensor)
    loss.backward()

    # Add parameters' gradients to their values, multiplied by learning rate
    for p in rnn.parameters():
        p.data.add_(p.grad.data, alpha=-learning_rate)

    return output, loss.item()
Now we just have to run that with a bunch of examples. Since the
train function returns both the output and loss we can print its
guesses and also keep track of loss for plotting. Since there are 1000s
of examples we print only every
print_every examples, and take an
average of the loss.
import time
import math

n_iters = 100000
print_every = 5000
plot_every = 1000

# Keep track of losses for plotting
current_loss = 0
all_losses = []

def timeSince(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

start = time.time()

for iter in range(1, n_iters + 1):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output, loss = train(category_tensor, line_tensor)
    current_loss += loss

    # Print iter number, loss, name and guess
    if iter % print_every == 0:
        guess, guess_i = categoryFromOutput(output)
        correct = '✓' if guess == category else '✗ (%s)' % category
        print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))

    # Add current loss avg to list of losses
    if iter % plot_every == 0:
        all_losses.append(current_loss / plot_every)
        current_loss = 0
Out:
5000 5% (0m 8s) 2.3270 Duncan / Irish ✗ (Scottish) 10000 10% (0m 16s) 2.3979 Lecuyer / German ✗ (French) 15000 15% (0m 24s) 3.9878 Botros / Greek ✗ (Arabic) 20000 20% (0m 33s) 1.9687 Mcgrady / Irish ✗ (English) 25000 25% (0m 41s) 2.8689 Matthams / Japanese ✗ (English) 30000 30% (0m 49s) 0.8416 Frolov / Russian ✓ 35000 35% (0m 58s) 0.4822 Yoo / Korean ✓ 40000 40% (1m 6s) 0.7357 Ebner / German ✓ 45000 45% (1m 14s) 2.3046 Penner / German ✗ (Dutch) 50000 50% (1m 22s) 0.9483 Esparza / Spanish ✓ 55000 55% (1m 31s) 1.6035 Yim / Korean ✗ (Chinese) 60000 60% (1m 39s) 1.7249 Meier / French ✗ (German) 65000 65% (1m 47s) 1.5586 Santiago / Japanese ✗ (Portuguese) 70000 70% (1m 56s) 1.5613 Hassel / Arabic ✗ (Dutch) 75000 75% (2m 4s) 1.6078 O'Donnell / Scottish ✗ (Irish) 80000 80% (2m 13s) 0.7473 Walentowicz / Polish ✓ 85000 85% (2m 22s) 1.6316 Eldridge / Italian ✗ (English) 90000 90% (2m 30s) 3.9132 Keeler / German ✗ (English) 95000 95% (2m 39s) 4.6763 Toal / Vietnamese ✗ (English) 100000 100% (2m 47s) 1.7800 Korycansky / Russian ✗ (Czech)
Evaluating the Results
To see how well the network performs on different categories, we will
create a confusion matrix, indicating for every actual language (rows)
which language the network guesses (columns). To calculate the confusion
matrix a bunch of samples are run through the network with
evaluate(), which is the same as
train() minus the backprop.
# Keep track of correct guesses in a confusion matrix
confusion = torch.zeros(n_categories, n_categories)
n_confusion = 10000

# Just return an output given a line
def evaluate(line_tensor):
    hidden = rnn.initHidden()

    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)

    return output

# Go through a bunch of examples and record which are correctly guessed
for i in range(n_confusion):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output = evaluate(line_tensor)
    guess, guess_i = categoryFromOutput(output)
    category_i = all_categories.index(category)
    confusion[category_i][guess_i] += 1

# Normalize by dividing every row by its sum
for i in range(n_categories):
    confusion[i] = confusion[i] / confusion[i].sum()

import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

# Set up plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)

# Set up axes
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)

# Force label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))

# sphinx_gallery_thumbnail_number = 2
plt.show()
You can pick out bright spots off the main axis that show which languages it guesses incorrectly, e.g. Chinese for Korean, and Spanish for Italian. It seems to do very well with Greek, and very poorly with English (perhaps because of overlap with other languages).
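The row normalization used above is nothing more than dividing each row of counts by its sum, so each row becomes the distribution of guesses for one true language. In plain Python with a made-up 2x2 count matrix:

```python
counts = [
    [8, 2],  # true class A: guessed A 8 times, B 2 times
    [1, 9],  # true class B: guessed A once, B 9 times
]
normalized = [[c / sum(row) for c in row] for row in counts]
print(normalized)  # -> [[0.8, 0.2], [0.1, 0.9]]
```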
Running on User Input
def predict(input_line, n_predictions=3):
    print('\n> %s' % input_line)
    with torch.no_grad():
        output = evaluate(lineToTensor(input_line))

        # Get top N categories
        topv, topi = output.topk(n_predictions, 1, True)
        predictions = []

        for i in range(n_predictions):
            value = topv[0][i].item()
            category_index = topi[0][i].item()
            print('(%.2f) %s' % (value, all_categories[category_index]))
            predictions.append([value, all_categories[category_index]])

predict('Dovesky')
predict('Jackson')
predict('Satoshi')
Out:
> Dovesky (-0.48) Russian (-1.39) Czech (-3.06) Irish > Jackson (-0.49) Scottish (-1.72) English (-2.35) Russian > Satoshi (-0.68) Arabic (-1.03) Japanese (-3.15) Portuguese
The final versions of the scripts in the Practical PyTorch repo split the above code into a few files:
- data.py (loads files)
- model.py (defines the RNN)
- train.py (runs training)
- predict.py (runs predict() with command line arguments)
- server.py (serve prediction as a JSON API with bottle.py)
Run
train.py to train and save the network.
Run
predict.py with a name to view predictions:
$ python predict.py Hazaki
(-0.42) Japanese
(-1.39) Polish
(-3.51) Czech
Run
server.py and visit the server's address in a browser to get JSON
output of predictions.
Exercises
- Try with a different dataset of line -> category, for example:
- Any word -> language
- First name -> gender
- Character name -> writer
- Page title -> blog or subreddit
- Get better results with a bigger and/or better shaped network
- Add more linear layers
- Try the nn.LSTM and nn.GRU layers
- Combine multiple of these RNNs as a higher level network
Total running time of the script: ( 2 minutes 55.069 seconds)
Gallery generated by Sphinx-Gallery | https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial | CC-MAIN-2021-39 | refinedweb | 2,242 | 50.02 |