Hi everyone, I'm using Nvidia's yolov3_onnx Python example, and I tried setting all the necessary FP16 parameters.
with trt.Builder(self._TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, self._TRT_LOGGER) as parser:
    builder.max_workspace_size = 1 << 30  # 1GB
    builder.max_batch_size = 1
    builder.fp16_mode = True
    builder.strict_type_constraints = True
I’ve even set each layer to the desired data type:
def _show_network(self, network):
    for index in range(network.num_layers):
        layer = network.get_layer(index)
        layer.precision = trt.float16
        for idx in range(layer.num_outputs):
            layer.set_output_type(idx, trt.float16)
I'm getting the intended speed-up on inference, but what I'm curious about is the runtime GPU memory usage. With the FP16 settings unset, the memory usage I get (via nvidia-smi) is around 785MB. I was surprised to see that even when I had set all the FP16 settings, nvidia-smi showed the memory usage still at 785MB. Is this what I should be seeing? All the while, I thought TensorRT (specifically FP16) would help reduce the GPU memory usage of the network.
FYI, here are some specs from my system:
X-server (Ubuntu 18.04)
P100 GPU (Driver: 410.104)
CUDA 10.0
TensorRT 5.1.2
YOLOv3
Let me know if you need more information (although I can't provide the models, since they're confidential and belong to a client). Thanks in advance!
-2018 08:07 AM
I have created an empty Linux project in SDK 2018.2. I am running SDK in Windows 10. I am using a file that has the following libs
#include <unistd.h>
#include <fcntl.h>
#include <libudev.h>
When I try and compile I get warnings on the first two and an error on the last. How would I resolve this? Do I need to get the include files and then point to them? Trouble is that I am working in Windows and these are Linux files. Thanks
08-29-2018 07:11 PM
Hi @beandigital,
XSDK doesn't provide all the Linux package libraries for use in applications, so you have two options.
08-29-2018 11:57 PM
I am a bit confused :). I have Vivado and SDK on my Win 10 machine. I then have a VM running Ubuntu that has Petalinux on it. I have got Linux running on my board and managed to create a Hello World in SDK (on Win 10) that I can run on the board. So do I need to run SDK on Ubuntu to get the libs working? Or can I do something to use the libs with SDK on Win 10?
Thanks
08-30-2018 02:38 AM
Ok, so the flow you would use (as Sandeep mentioned) to generate the sysroots in PetaLinux is:
If you are using the SDK in Windows, then you would need to copy this folder (sysroots) local to your windows machine
Then to create a Linux Application in SDK:
File -> New -> Application Project
In the Linux System Root point to the aarch64-xilinx-linux folder (assuming this is Zynq Ultrascale):
Then either use the Hello World template, or Empty application and import your source code.
Right click on your application, and add the --sysroot Linker flag:
Then add any supporting libraries (for example here I am using the RFDC so I needed the libmetal, and rfdc and math libs):
Let me know if this helps?
08-30-2018 03:17 AM
09-05-2018 08:05 AM
When I try to copy the sysroots folder I get errors:
There was an error copying the file into /mnt/hgfs/Shared/sysroots/x86_64-petalinux-linux/lib.
error making symbolic link: operation not supported
09-05-2018 10:20 AM
09-06-2018 05:24 AM
Does nobody know how to fix this problem? If I look in my project directory for the Linux project, the files seem to be there. Also, if I create a new project using the SDK, it says that the sysroot dir should be ../sysroot/stage. But there isn't a stage directory.
10-13-2018 01:59 PM
I was facing the same problem. Thank you so much for this. I was stuck on this issue and tried to tinker around to check if it was possible, but couldn't get it done. Now that I have seen the way you did it, thanks guys.
with
regards
10-15-2018 07:57 AM
PetaLinux --sdk always fails:
| GEN hw/mem/trace.h
|'. Retry scheduled
|'...
| GEN hw/i386/trace.h
| fatal: Unable to look up git.qemu.org (port 9418) (Name or service not known)
| fatal: clone of 'git://git.qemu.org/keycodemapdb' failed
| Failed to clone 'ui/keycodemapdb'. Retry scheduled
| GEN hw/i386/xen/trace.h
|' a second time, aborting
| ./scripts/git-submodule.sh: failed to update modules
|
| Unable to automatically checkout GIT submodules ' ui/keycodemapdb capstone'.
| If you require use of an alternative GIT binary (for example to
| enable use of a transparent proxy), then please specify it by
| running configure by with the '--with-git' argument. e.g.
|
| $ ./configure --with-git='tsocks git'
|
| Alternatively you may disable automatic GIT submodule checkout
| with:
|
| $ ./configure --disable-git-update'
|
| and then manually update submodules prior to running make, with:
|
| $ scripts/git-submodule.sh update ui/keycodemapdb capstone
|
| GEN hw/9pfs/trace.h
| make: *** [Makefile:39: git-submodule-update] Error 1
10-15-2018 06:36 PM
Hi @russellsnow,
The QEMU nativesdk errors are due to your network proxy settings not being configured; please check your proxy settings.
01-10-2019 02:40 PM
Are there any options for this error if I don't have any network access on the build machine?
05-08-2019 04:21 AM
So you can include dmaxxx.h?
Someone on a mailing list pointed out this cartoon and I thought I'd
share it with people:
On the way home, someone asked me where I wanted to go when I died.
My initial reaction was Tahiti. After thinking about it, I’d quite like
to see New Zealand.
Java.
Imagine you've got some text you've been told is ASCII, and you've
told Java that it's ASCII using:
Reader reader = new InputStreamReader(inputstream, "ASCII");
Imagine your surprise when it happily reads in non-ASCII values, say
UTF-8 or ISO8859-1, and converts them to a random character.
import java.io.*;

public class Example1 {
    public static void main(String[] args) {
        try {
            FileInputStream is = new FileInputStream(args[0]);
            BufferedReader reader = new BufferedReader(new InputStreamReader(is, args[1]));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
beebo david% java Example1 utf8file.txt ascii
I��t��rn��ti��n��liz��ti��n
beebo david% java Example1 utf8file.txt utf8
Iñtërnâtiônàlizætiøn
So, I hear you ask, how do you get Java to be strict about the conversion?
Well, the answer is to look up a Charset object, ask it for a CharsetDecoder
object and then set the onMalformedInput option to
CodingErrorAction.REPORT. The resulting code is:
import java.io.*;
import java.nio.charset.*;

public class Example2 {
    public static void main(String[] args) {
        try {
            FileInputStream is = new FileInputStream(args[0]);
            Charset charset = Charset.forName(args[1]);
            CharsetDecoder csd = charset.newDecoder();
            csd.onMalformedInput(CodingErrorAction.REPORT);
            BufferedReader reader = new BufferedReader(new InputStreamReader(is, csd));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
This time when we run it, we get:
beebo david% java Example2 utf8file.txt ascii
java.nio.charset.MalformedInputException: Input length = 1
beebo david% java Example2 utf8file.txt utf8
Iñtërnâtiônàlizætiøn
On a slightly related note, if anyone knows how to get Java to decode
UTF32, VISCII, TCVN-5712, KOI8-U or KOI8-T, I would love to know.
Update (2007-01-26): Java 6 has support for UTF32 and KOI8-U.
Just a quick one. Have you ever created a table using the number of
seconds since 1970 and realised, after populating it with data, that you
really need it in a TIMESTAMP type? If so, you can quickly convert it
using this SQL:
ALTER TABLE entries
    ALTER COLUMN created TYPE TIMESTAMP WITH TIME ZONE
    USING TIMESTAMP WITH TIME ZONE 'epoch' + created * interval '1 second';
With thanks to the PostgreSQL
manual for saving me hours working out
how to do this.
Does your Oracle client hang when connecting? Are you using Oracle
10.2.0.1? Do you get the
following if you strace the process?
gettimeofday({1129717666, 622797}, NULL) = 0
access("/etc/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
access("./network/admin/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
access("./network/admin/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
fcntl64(155815832, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
times(NULL) = -1808543702
times(NULL) = -1808543702
times(NULL) = -1808543702
times(NULL) = -1808543702
...
Has your client been up for more than 180 days? Well done; you've
just come across the same bug that has bitten two of our customers in
the last week. Back in the days of Oracle 8, there was a fairly infamous
bug in the Oracle client where new connections would fail if the client had
been up for 248 days or more. This got fixed, and wasn't a problem with
Oracle 9i at all. Now Oracle have managed to introduce a similar bug in
10.2.0.1, although in my experience the number of days appears to be
shorter (180+).
Thankfully, this has been fixed in the 10.2.0.2
Instant Client. More information can be found on forums.oracle.com.
From: Moore, Dave (dmoore_at_[hidden])
Date: 2002-08-05 11:59:46
> My complaints on the ifdefs were on the underside of the thread. C++ alone
provides plenty of
> mechanisms for mapping the common interface to different platforms, with
typesafety (instead of
> the reinterpret_casts<>!)--again, the topic of a different discussion, at
which point it would be
> easier just to submit a replacement as example.
Fair enough, but it's a hard problem to solve w/o resorting to (1)
#including os-specific files like <windows.h> in the interface which wreak
havoc on namespaces, macros, etc., or (2) having a complete shadow set of
dynamically allocated Pimpl classes. It's always interesting to see
alternative approaches, though.
> There is no reason to make users of thread handle their own exceptions
because there is only one
> mechanism to do it, and every user will have to repeat this code in their
implementation.
> in thread processes have always been ambiguous to me, it seems that no
framework out there
> supports them properly and just expects them to be caught and sometimes
ignored.
> If you'll take note of the advanced_thread usage, it now provides a
> generic way to run any
> boost::function asynchronously, get its return value and deal with
exceptions it may throw--a
>complete solution and something currently not yet available. Of course one
could take the
>perspective that advanced_thread is really an asynchronous function
adapter, but that deprecates the >need to support what is currently offered
by boost::thread--which is why I suggested a possible
>replacement.
I agree with your statements about reuse and flexibility for asynchronous
function calls, and flexibility in handling exceptions. I'm just not sure
that inheritance from (or replacement of) boost::thread is the best
solution.
Consider an alternative: a thread_pool object which can enqueue
boost::function calls and distribute them to a managed pool of threads ready
to execute them. boost::function gives you the hook to implement -any-
scenario of argument passing, return value capture, and exception handling
you wish. This is a problem (IMHO) that suggests a solution via composition
of two existing classes, not inheritance.
See:
For a draft of a solution which may (hopefully) find its way into
Boost.Threads.
Regards,
Dave
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2002/08/32893.php | CC-MAIN-2019-22 | refinedweb | 398 | 53.51 |
Key:no:
Description
The no: lifecycle prefix can be added to tags that relate to features that don't exist but have a high probability of being re-added by a non-surveyed edit, because they can be seen on commonly used imagery or import sources. In most cases in OSM, if you find an object in the database that doesn't exist on the ground, whether it ever existed or not, error or not, you should just delete it. But in some rare circumstances, when those features still appear in outdated sources that are still used to enter data into OSM (old imagery, import sources, etc.), you can use this tag to warn other mappers not to re-add those features.
How to tag
Add the namespace "no:" to all keys which are no longer relevant to the current state of an object, without changing the geometry (leave it a node if it was a node, a way if it was a way). You should treat all the tags on an object as a set of facts about the object, and prefix the keys of those facts which are no longer true as a result of the feature being non-existent.
Adding a note=* to explain why you didn't just delete the geometry is highly recommended; otherwise, someone might just delete everything again.
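For example (my illustration, not taken from the wiki page), a demolished building that still shows up on commonly used imagery could go from:

```
# before (building still stood):
building=yes

# after (building demolished but still visible on old imagery):
no:building=yes
note=Demolished; still visible on outdated imagery, do not re-add from it
```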
why?
This tag might later be used to detect intersections with other geometries, to flag added features that shouldn't have been re-added.
usage of (maybe) similar concepts
- removed:* ~1000 uses in the db as of 11/2014. The most-used tags are removed:power=* and removed:design=*
- demolished:* ~1000 uses in the db as of 11/2014.
- demolished_* ~50 uses in the db as of 11/2014.
- no:* ~100 uses in the db as of 11/2014.
- was:* ~4500 uses in the db as of 11/2014.
- was_* ~30 uses in the db as of 11/2014.
- old:* ~200 uses in the db as of 11/2014.
- former_* ~500 former_name and ~100 former_* uses in the db as of 11/2014.
- former: ~100 uses in the db as of 11/2014.
- gone:* ~40 uses in the db as of 11/2014.
Alternatives for the purpose of this tag
It should be possible to use the history of deleted objects to detect if they are re-added by someone. However no practical tools to query history have been made available. Therefore, this solution is a temporary workaround that could later be replaced by history based comparison.
Notes could be added to warn people not to add deleted objects. | https://wiki.openstreetmap.org/wiki/Key:no: | CC-MAIN-2020-45 | refinedweb | 430 | 77.06 |
Which is the good website for struts 2 tutorials?
Which is the good website for struts 2 tutorials? Hi,
After... for learning Struts 2.
Suggest met the struts 2 tutorials good websites.
Thanks
Hi,
Rose India website is the good
Connectivity with sql in detail - JDBC
unable to connect the sql with Java. Please tell me in detail that how to connect...-java-5.0.5.jar in the lib folder of jdk and try the following code:
import java.sql.*;
public class MysqlConnect{
public static void main(String[] args
Struts Quick Start
and
then maps the incoming request to a Struts actionclass. The Struts action class... of the application fast.
Read more: Struts Quick
Start...Struts Quick Start
Struts Quick Start to Struts technology
In this post I
Nested try
different answers pl help me with this code
class Demo
{
static void nestedTry(String args[])
{
try
{
int a = Integer.parseInt(args[0]);
int b...[])
{
try {
nestedTry(args);
}
catch (ArithmeticException Books
for more experienced readers eager to exploit Struts to the fullest.
... it. Instead, it is intended as a Struts Quick Start Guide to get you going. Once you are rolling, you can get more details from the Jakarta Struts documentation or one
getting radio button at start of each row of table - Struts
getting radio button at start of each row of table i have done... at the start of each row as u saw in above output.
So what should i have to do... Friend,
Try the following:
Thanks
I want detail information about switchaction? - Struts
I want detail information about switch action? What is switch action in Java? I want detail information about SwitchAction
iPhone Detail Disclosure Button
iPhone Detail Disclosure Button
In iPhone based applications, detail... it .. it'll bring up the detail information about the item in list.
We can also say... = UITableViewCellAccessoryDisclosureIndicator;"
There are two more
Struts Roseindia
JavaBean is used to input properties in action class
Struts 2 actions can... support validation and localization of coding offering more
utilization.
Struts... the execution of Action.
Features of Struts 2
Simple and easy web app
Java Web Start and Java Plug-in
Java Web Start Enhancements in version 6
... Java Web
Start should check for updates on the web, and what to do when....
Prior to Java SE 6, In Java Web Start <offline-allowed> element
Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good... variables.
For more information, visit the following link:
http
Struts - Framework
Struts Good day to you Sir/madam,
How can i start struts application ?
Before that what kind of things necessary... using the View component. ActionServlet, Action, ActionForm and struts-config.xml
struts - Struts
.
Struts only reads the struts.config.xml file upon start up.
Struts-config.xml
Action Entry:
Difference between Struts-config.xml...struts hi,
what is meant by struts-config.xml and wht are the tags
Can you suggest any good book to learn struts
Can you suggest any good book to learn struts Can you suggest any good book to learn struts
SPRING ... A JUMP START
SPRING ... A JUMP START
-----------------------
by Farihah Noushene... context, multipart resolver, Struts
support, JSF support and web utilities.
10...
public class helloimpl implements hello
{
private String greeting
Struts Articles
.
4. The UI controller, defined by Struts' action class/form bean... application. The example also uses Struts Action framework plugins in order to initialize the scheduling mechanism when the web application starts. The Struts Action
Java Web Start Enhancements in version 6
Java Web Start Enhancements in version 6
... supported. It describes the applications preferences for how Java Web
Start...;
Prior to Java SE 6, In Java Web Start <offline-allowed> element
Struts - Struts
,
For read more information,Tutorials and Examples on Struts visit to :
http... in struts 1.1
What changes should I make for this?also write struts-config.xml... javax.servlet.http.*;
import java.io.*;
import java.sql.*;
public class loginservlet
how to send contact detail in email
how to send contact detail in email hi...all of u.....i am work... problem...frnd how to send a contact form detail mail on a click submit button...;td
<form method="POST" action="mail.jsp">
<table
Nested try catch
be written in
the try block. If the exceptions occurs at that particular block then it
will be catch by the catch block. We can have more than one try/catch...Nested try catch
more than one struts-config.xml file
more than one struts-config.xml file Can we have more than one struts-config.xml file for a single Struts application
Multiple try catch
be written in
the try block. If the exceptions occurs at that particular block then it
will be catch by the catch block. We can have more than one try...Multiple try catch
Java try, catch, and finally
more than one catch clause in a single try block.
Simply an erroneous code...
Java try, catch, and finally
The try, catch, and finally keywords are Java keywords
Regarding struts validation - Struts
Regarding struts validation how to validate mobile number field should have 10 digits and should be start with 9 in struts validation? Hi...
-------------------------------------------------------------
For more information :
illegal start of type
illegal start of type Hi, This is my code i m getting illegal start... this error.
public class WriteByteArrayToFile
{
public static void main(String...//s.excel";
try
{
FileOutputStream fos = new FileOutputStream(strFilePath
Struts
Struts Tell me good struts manual - Framework
Struts Good day to you Sir/madam,
How can i start struts application ?
Before that what kind of things necessary.../struts/". Its a very good site to learn struts.
You dont need to be expert
More About Triggers
More About Triggers
In this section we will try to provide the brief description... of the org.quartz.impl.HolidayCalendar
class. The Calendar object integrated
Java Kick Start - Java Beginners
Java Kick Start Hello Sir, i like to become a good developer in Java. Im good in JAVA
The try-with-resource Statement
The try-with-resource Statement
In this section, you will learn about newly added try-with-resource statement in
Java SE 7.
The try-with-resource statement contains declaration of one or more
resources. As you know, prior
Struts Project Planning - Struts
Struts Project Planning Hi all,
I am creating a struts application... eveything which is nessesary.
Please explain me in detail which phases should i go...,
I am sending you a link. This link will help you.
Please visit for more
delete retailer jsp file (sir..is this a good logic.. jsp file is useful for what purpose)
delete retailer jsp file (sir..is this a good logic.. jsp file is useful...="content-table">
<tr>
<th rowspan="3" class="sized"><img src...;
<th class="topleft"></th>
<td id="tbl-border-top">
Java error illegal start of type
Java error illegal start of type
The Java error illegal start of type... after the close of try block.
Understand with Example
In this tutorial
How to Start Outsourcing, Great Outsourcing Tips
step towards a great outsourcing venture is to start out right. Here we detail...How to Start Outsourcing?
How to Begin Outsourcing? Great Outsourcing Tips to Start Outsourcing!
Outsourcing is always advantageous
login controller.servlet file.. (good coding stuff for reference)
login controller.servlet file.. (good coding stuff for reference) ...;
/**
* Servlet implementation class LoginController
*/
public class LoginController... ServletException, IOException {
// TODO Auto-generated method stub
try
illegal start of expression in servlet error..
illegal start of expression in servlet error.. hello Sir,
here is my servlet code and i am getting illegal start of expression error... Sir.
public class edit extends HttpServlet {
protected void processRequest
Implementing more than one Job Details and Triggers
Implementing more than one Job Details and Triggers... will learn how to implement
more than one triggers and jobs with a quartz... of more than one
job details and triggers.
Description of program:
Here, we
struts
, Richard Hightower
3)Struts in Action By Ted N. Husted, Cedric Dumoulin, George Franciscus, David Winterfeldt
4)Struts Kick Start By: James Turner; Kevin...struts which is the best book to study struts on own?
please tell me
Java - Struts
Java
Hi Good Morning,
This is chandra Mohan
I have a problem in DispatchAction in Struts.
How can i pass the method name in "action... more the one action button in the form using Java script.
please give me
illegal start of expression - Java Beginners
illegal start of expression here i attach my program.. which shows illgal start of expression.. plz aynone will silve this particular problem... javax.imageio.*;
import javax.swing.*;
public class Image extends JFrame
struts
struts how to start struts?
Hello Friend,
Please visit the following links:... can easily learn the struts.
Thanks
load more with jquery
load more with jquery i am using jquery to loadmore "posts" from my... box its is going to display php posts and after that when i click on load more...);
$('#loadmorebutton').html('Load More
more circles - Java Beginners
more circles Write an application that uses Circle class you created... in the object.
Hi Friend,
Try the following code:
import java.util.*;
class Circle{
static double pi=3.14;
double radius;
Circle
Struts Alternative
/PDF/more
Automatic serialization of the ActionErrors, Struts... of action.
Compared to WebWork, Spring has more differentiated object roles... properties of the respective Action class. Finally, the same Action instance
Writing more than one cards in a WML deck.
Writing more... as a navigational help.
With the help of do-element user can start a action on
the displayed card. Some of the uses of do
Struts File Upload Example - Struts
/struts/strutsfileupload.shtml
i have succeeded. but when i try to upload file...Struts File Upload Example hi,
when i tried the struts file... any file with size say more than 500MB , what is the solution.
please reply
struts
struts Hi,
Here my quation is
can i have more than one validation-rules.xml files in a struts application:
start date and end date validation in javascript - Ajax
start date and end date validation in javascript hi, i am doing web surfing project. pls guide me for my project. i want start date and end validations in javascript. end date should be greater than start date if so less than 1 Tutorial and example programs
to the Struts Action Class
This lesson is an introduction to Action Class...Struts 1 Tutorials and many example code to learn Struts 1 in detail.
Struts 1...
and reached end of life phase. Now you should start learning the
Struts 2 framework
Error - Struts
Error Hi,
I downloaded the roseindia first struts example and configured in eclips.
It is working fine. But when I add the new action and I create the url for that action then
"Struts Problem Report
Struts has detected
Servlet - Struts
Servlet Can I can my action class from servlet?
If yes, then how... will help you please visit for more information:... in detail.
Thanks.
Amardeep
Struts validation not work properly - Struts
& age). for the rest, when i try to put 1 more validation (let say for email), i... to advise, i face it for almost a week now...
this is my action class...
========================================
public class ExampleAction extends Action
java - Struts
in which one can add modules and inside those modules some more options please give me idea how to start with Hi Friend,
Please clarify what do
Hello - Struts
the input data or desired format changes, when it might be more convenient to the end user to change the detail by some means outside the program.
for ex.
class
Integrate Struts, Hibernate and Spring
Integrate Struts, Hibernate and Spring
... are using one of the
best technologies (Struts, Hibernate and Spring). This tutorial is very good if
you want to learn the process of integrating these technologies
Struts Console
files
Come to more detail:... Struts Console
The Struts Console is a FREE standalone Java Swing
struts
*;
import org.apache.struts.action.*;
public class LoginAction extends Action...struts <p>hi here is my code can you please help me to solve...;
<p><html>
<body></p>
<form action="login.do">
RegisterAction extends Action
{
public RegisterAction()
{
try...
}//execute
}//class
struts-config.xml
<struts...struts <p>hi here is my code in struts i want to validate my
Struts2 - Struts
me code and explain in detail.
I am sending you a link. This link will help you.
Please visit for more information.... work,
see code below:-
in the Class
even more circles - Java Beginners
even more circles Write an application that compares two circle objects.
? You need to include a new method equals(Circle c) in Circle class... is printed.
Hi Friend,
Try the following code:
import java.util.
Explain about threads:how to start program in threads?
Explain about threads:how to start program in threads? import java.util.*;
class AlphabetPrint extends Thread
{
public void print...();
}
}
class NumberPrinter extends Thread
{
public void print.
Subset Tag (Control Tags) Example Using Start
Subset Tag (Control Tags) Example Using Start
... the start parameter. The start parameter is of integer
type. It indicates... the following code snippet into the struts.xml
file.
struts.xml
<action
salaes detail
persoanl detail
sales detail
RetController.java (do get) (my file for reference for a test.. IS LOGIC good Enough ?
RetController.java (do get) (my file for reference for a test.. IS LOGIC good Enough ? try
{
Connection conn=Create...("View"))
{
try {
ArrayList<RetailerBean>
Try it Editor
Try it Editor Hello sir...actually i want to add an html,css & js editor like in w3 school try it editor 1.5....can you tell me how i can add it..pllz plzzz rppy soon
Struts - Struts
for more information.
Thanks...Struts Hi,
I m getting Error when runing struts application.
i...
/WEB-INF/struts-config.xml
1
try catch
try catch why following code gives compile time error.please reply.
class ThreadDemo1 extends Thread
{
public void run()
{
for(int i=1;i<=3;i++)
{
System.out.println(i);
try
Struts - Struts
Struts Hi All,
Can we have more than one struts-config.xml... in Advance.. Yes we can have more than one struts config files..
Here we use SwitchAction. So better study to use switchaction class
Nested try
versa.pl explain me
class Demo
{
static void nestedTry(String args[])
{
try... static void main(String args[])
{
try
{ nestedTry(args);
}
catch
Struts - Struts
.
Struts1/Struts2
For more information on struts visit to : Hello
I like to make a registration form in struts inwhich Reference
Struts Reference
Welcome to the Jakarta Online Reference page, you will find everything you
need to know to quick start your Struts Project. Here we are providing you
detailed Struts Reference. This online struts
struts ebook
struts ebook please suggest a good ebook for struts>
Struts Projects
solutions:
EJBCommand
StrutsEJB offers a generic Struts Action class...
Registration Action Class and DAO code
In this section we will explain how to write code for action class and
code for saving data into database
Latest Version of Struts Framework
Complete detail and tutorials about the Latest Version of Struts Framework
In this page we are listing the Latest Version of Struts Framework which is
being... and eye on the Latest Version of Struts Framework. You
should try to learn
Java basics are necessary for programmers who are willing to learn the Java language. These basics must be followed every time a program is written in Java.
The Java language is completely specified, which helps the programmer write code quickly. All data-type sizes and formats are already defined. The Java class library is available on any machine with a Java runtime system. Java is secure. Java is robust, meaning it is designed to avoid crashes.
While writing Java programs, a programmer must keep the following points in mind:
Following are the Java keywords that are used in methods:
Data-types in Java are classified into two types: primitive types (such as int, char, boolean and double) and non-primitive (reference) types (such as classes, interfaces and arrays).
Different types of variables in Java: local variables, instance variables, and static (class) variables.
Here is an example of a simple Java program:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("HelloWorld");
    }
}

Output:

HelloWorld
Programs using the PBC library should include the file
pbc.h:
#include <pbc.h>
and be linked against the PBC library and the GMP library, e.g.
$ gcc program.c -L. -lpbc -lgmp
The file
pbc.h already includes
gmp.h.
PBC follows GMP in several respects:
- Output arguments generally precede input arguments.
Since the PBC library is built on top of GMP, the GMP types are available. PBC types are similar to GMP types. The following example is paraphrased from an example in the GMP manual, and shows how to declare the PBC data type element_t.
element_t sum;
struct foo { element_t x, y; };
element_t vec[20];
GMP has the mpz_t type for integers, mpq_t for rationals and so on. In contrast, PBC uses the element_t data type for elements of different algebraic structures, such as elliptic curve groups, polynomial rings and finite fields. Functions assume their inputs come from appropriate algebraic structures.
PBC data types and functions can be categorized as follows. The first two alone suffice for a range of applications.
element_t: elements of an algebraic structure.

pairing_t: pairings where elements belong; can be initialized from the sample pairing parameters bundled with PBC in the param subdirectory.

pbc_param_t: used to generate pairing parameters.

pbc_cm_t: parameters for constructing curves via the CM method; sometimes required by pbc_param_t.

field_t: algebraic structures: groups, rings and fields; used internally by pairing_t.
- a few miscellaneous functions, such as ones controlling how random bits are generated.
Functions operating on a given data type usually have the same prefix, e.g. those involving element_t objects begin with element_.
How do I convert a char to an int in C and C++?
Solution 1
Depends on what you want to do:
to read the value as an ascii code, you can write
char a = 'a';
int ia = (int)a;
/* note that the int cast is not necessary -- int ia = a would suffice */
to convert the character '0' -> 0, '1' -> 1, etc, you can write
char a = '4';
int ia = a - '0';
/* check here if ia is bounded by 0 and 9 */
Explanation: a - '0' is equivalent to ((int)a) - ((int)'0'), which means the ASCII values of the characters are subtracted from each other. Since '0' comes directly before '1' in the ASCII table (and so on until '9'), the difference between the two gives the number that the character a represents.
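The arithmetic above can be checked directly; the helper names below are mine, purely for illustration:

```c
#include <assert.h>

/* '4' - '0' == 4 because the codes for '0'..'9' are consecutive (48..57). */
int digit_value(char c)
{
    return c - '0';
}

/* Reading a character's code: the cast is optional, char promotes to int. */
int ascii_code(char c)
{
    return (int)c;
}
```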
Solution 2
Well, in ASCII code, the numbers (digits) start from 48. All you need to do is:
int x = (int)character - 48;
Or, since the character '0' has the ASCII code of 48, you can just write:
int x = character - '0'; // The (int) cast is not necessary.
Solution 3
C and C++ always promote types to at least int. Furthermore, character literals are of type int in C and char in C++.

You can convert a char type simply by assigning to an int.

char c = 'a'; // narrowing on C
int a = c;
Solution 4
char is just a 1-byte integer. There is nothing magic about the char type! Just as you can assign a short to an int, or an int to a long, you can assign a char to an int.

Yes, the name of the primitive data type happens to be "char", which insinuates that it should only contain characters. But in reality, "char" is just a poor name choice that confuses everyone who tries to learn the language. A better name for it is int8_t, and you can use that name instead, if your compiler follows the latest C standard.
You are not even required to use the char type for character data, though. For example, the following code will work perfectly:

int str[] = {'h', 'e', 'l', 'l', 'o', '\0'};

for (int i = 0; i < 6; i++) {
    printf("%c", str[i]);
}
You have to realize that characters and strings are just numbers, like everything else in the computer. When you write 'a' in the source code, it is pre-processed into the number 97, which is an integer constant.
So if you write an expression like
char ch = '5';
ch = ch - '0';
this is actually equivalent to
char ch = (int)53;
ch = ch - (int)48;
which is then going through the C language integer promotions
ch = (int)ch - (int)48;
and then truncated to a char to fit the result type
ch = (char)( (int)ch - (int)48 );
There's a lot of subtle things like this going on between the lines, where char is implicitly treated as an int.
Solution 5
(This answer addresses the C++ side of things, but the sign extension problem exists in C too.)
Handling all three char types (signed, unsigned, and char) is more delicate than it first appears. Values in the range 0 to SCHAR_MAX (which is 127 for an 8-bit char) are easy:
char c = somevalue;
signed char sc = c;
unsigned char uc = c;
int n = c;
But, when somevalue is outside of that range, only going through unsigned char gives you consistent results for the "same" char values in all three types:
char c = somevalue;
signed char sc = c;
unsigned char uc = c;
// Might not be true: int(c) == int(sc) and int(c) == int(uc).

int nc = (unsigned char)c;
int nsc = (unsigned char)sc;
int nuc = (unsigned char)uc;
// Always true: nc == nsc and nc == nuc.
This is important when using functions from ctype.h, such as isupper or toupper, because of sign extension:
char c = negative_char;  // Assuming CHAR_MIN < 0.
int n = c;
bool b = isupper(n);  // Undefined behavior.
Note the conversion through int is implicit; this has the same UB:
char c = negative_char;
bool b = isupper(c);
To fix this, go through unsigned char, which is easily done by wrapping ctype.h functions through safe_ctype:
template<int (&F)(int)>
int safe_ctype(unsigned char c) { return F(c); }

// ...
char c = CHAR_MIN;
bool b = safe_ctype<isupper>(c);  // No UB.

std::string s = "value that may contain negative chars; e.g. user input";
std::transform(s.begin(), s.end(), s.begin(), &safe_ctype<toupper>);
// Must wrap toupper to eliminate UB in this case; you can't cast
// to unsigned char because the function is called inside transform.
This works because any function taking any of the three char types can also take the other two char types. It leads to two functions which can handle any of the types:
int ord(char c) { return (unsigned char)c; }

char chr(int n)
{
    assert(0 <= n);  // Or other error-/sanity-checking.
    assert(n <= UCHAR_MAX);
    return (unsigned char)n;
}

// ord and chr are named to match similar functions in other languages
// and libraries.
ord(c) always gives you a non-negative value even when passed a negative char or negative signed char, and chr takes any value ord produces and gives back the exact same char.
In practice, I would probably just cast through unsigned char instead of using these, but they do succinctly wrap the cast, provide a convenient place to add error checking for int-to-char, and would be shorter and clearer when you need to use them several times in close proximity.
Solution 6
Use static_cast<int>:
int num = static_cast<int>(letter); // if letter='a', num=97
Edit: You should probably try to avoid using (int):

int num = (int) letter;

Check out Why use static_cast<int>(x) instead of (int)x? for more info.
Solution 7
I have absolutely null skills in C, but for simple parsing:
char* something = "123456";
int number = parseInt(something);
...this worked for me:
int powInt(int x, int y) {
    /* returns x * 10^y */
    for (int i = 0; i < y; i++) {
        x *= 10;
    }
    return x;
}

int parseInt(char* chars) {
    int sum = 0;
    int len = strlen(chars);
    for (int x = 0; x < len; x++) {
        int n = chars[len - (x + 1)] - '0';
        sum = sum + powInt(n, x);
    }
    return sum;
}
Solution 8
It sort of depends on what you mean by "convert".
If you have a series of characters that represents an integer, like "123456", then there are two typical ways to do that in C: Use a special-purpose conversion like atoi() or strtol(), or the general-purpose sscanf(). C++ (which is really a different language masquerading as an upgrade) adds a third, stringstreams.
If you mean you want the exact bit pattern in one of your int variables to be treated as a char, that's easier. In C the different integer types are really more of a state of mind than actual separate "types". Just start using it where chars are asked for, and you should be OK. You might need an explicit conversion to make the compiler quit whining on occasion, but all that should do is drop any extra bits past 256.
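A sketch of the three parsing routes mentioned above (atoi, strtol, sscanf); the wrapper names are mine, and error handling is omitted for brevity:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

int parse_atoi(const char *s)
{
    return atoi(s);                   /* simplest, but no error reporting */
}

int parse_strtol(const char *s)
{
    return (int)strtol(s, NULL, 10);  /* base 10; pass &end to detect junk */
}

int parse_sscanf(const char *s)
{
    int n = 0;
    sscanf(s, "%d", &n);              /* general-purpose formatted input */
    return n;
}
```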
Solution 9
Presumably you want this conversion for using functions from the C standard library.
In that case, do (C++ syntax)
typedef unsigned char UChar;

char myCppFunc( char c )
{
    return char( someCFunc( UChar( c ) ) );
}
The expression UChar( c ) converts to unsigned char in order to get rid of negative values, which, except for EOF, are not supported by the C functions. Then the result of that expression is used as the actual argument for an int formal argument, where you get automatic promotion to int. You can alternatively write that last step explicitly, like int( UChar( c ) ), but personally I find that too verbose.
Cheers & hth.,
Solution 10
I recommend using the following function:

/* chartoint: convert char symbols to unsigned int */
int chartoint(char s[])
{
    int i, n;

    n = 0;
    for (i = 0; isdigit(s[i]); ++i) {
        n = 10 * n + (s[i] - '0');
    }
    return n;
}
The result of function could be checked by:
printf("char 00: %d \r\n", chartoint("00"));
printf("char 01: %d \r\n", chartoint("01"));
printf("char 255: %d \r\n", chartoint("255"));
Solution 11
I was having problems converting a char array like "7c7c7d7d7d7d7c7c7c7d7d7d7d7c7c7c7c7c7c7d7d7c7c7c7c7d7c7d7d7d7c7c2e2e2e" into its actual integer value that would be able to be represented by '7C' as one hexadecimal value. So, after cruising for help, I created this, and thought it would be cool to share.
This separates the char string into its right integers, and may be helpful to more people than just me ;)
unsigned int* char2int(char *a, int len)
{
    int i, u;
    unsigned int *val = malloc(len / 2 * sizeof(unsigned int));

    for (i = 0, u = 0; i < len; i++) {
        /* '0'..'9' -> 0..9, 'a'..'f' / 'A'..'F' -> 10..15 */
        int digit = (a[i] <= '9') ? a[i] - '0' : (a[i] | 0x20) - 'a' + 10;

        if (i % 2 == 0)
            val[u] = digit << 4;   /* high nibble */
        else
            val[u++] += digit;     /* low nibble  */
    }
    return val;
}
Hope it helps!
Solution 12
For char or short to int, you just need to assign the value.
char ch = 16;
int in = ch;

Same for int64:
long long lo = ch;
All values will be 16.
Solution 13
Use "long long" instead of "int" so it works for bigger numbers. Here is the solution:
long long ChardToint(char *arr, size_t len)
{
    long long result = 0;
    long long place = 1;  /* 10^position, kept as long long to avoid overflow */

    for (int i = (int)len - 1; i >= 0; i--) {
        int digit = (arr[i] >= '0' && arr[i] <= '9') ? arr[i] - '0' : 0;
        result += digit * place;
        place *= 10;
    }
    return result;
}
Solution 14
int charToint(char a)
{
    char s[2] = { a, '\0' };  /* atoi expects a NUL-terminated string */
    return atoi(s);
}

You can use this atoi method for converting char to int.
Property Demotion in Assembler Pipeline Components
You can use property demotion to copy a property value from the message context into the message content or to its header or trailer. You accomplish property demotion by using an XPath expression specified in the document or in the header and trailer schema.
When writing datetime data from the context property into the resulting document, BizTalk Server assumes that all datetime data is in UTC format.
The format used to write properties into the data is determined by the XSD data type as shown in the following table.
Property Demotion and Envelopes
It is often useful to demote values from one or more of the system namespaces -- or one of your own namespaces -- when assembling files within an envelope. Some common scenarios include:
You want to include the original file name submitted to the system in outbound messages so back-end systems can track the origin of data.
You want to write data from the body message to the header. For example, for a purchase order it might be useful to write the ship-to name to the envelope for downstream systems.
You want to combine many different fields into the header without writing custom code. Property demotion in the Xml assembler or flat file assembler can do the job.
It is important to remember that the XML and flat file assembler components both allow you to specify which schema to use for the envelope and document body. You can choose the same schemas used in disassembly or create a new envelope schema with different fields.
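As a rough illustration only — the element names, XPath, and exact annotation placement here are assumptions, not taken from the product documentation — a demoted property is tied to a node in the header schema by an XPath annotation in the BizTalk property namespace:

```xml
<xs:element name="ShipToName" type="xs:string">
  <xs:annotation>
    <xs:appinfo>
      <b:properties xmlns:b="http://schemas.microsoft.com/BizTalk/2003">
        <!-- hypothetical property and XPath; adjust to your own schemas -->
        <b:property name="ns0:ShipToName"
                    xpath="/*[local-name()='Header']/*[local-name()='ShipToName']" />
      </b:properties>
    </xs:appinfo>
  </xs:annotation>
</xs:element>
```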
For an example of these concepts, see EnvelopeProcessing (BizTalk Server Sample).
See Also
Flat File Assembler Pipeline Component
How to Configure the Flat File Assembler Pipeline Component | https://docs.microsoft.com/en-us/biztalk/core/property-demotion-in-assembler-pipeline-components | CC-MAIN-2020-45 | refinedweb | 291 | 50.57 |
Canceling a Python script in Rhino
This guide demonstrates how to cancel a Python script in Rhino.
Cancelling Scripts
When a script is running, and it is not waiting for user input, it can be cancelled by pressing the ESC key.
Cancelling out of a tight loop is not always possible. For example, the following script cannot be cancelled by pressing the ESC key.
def TightLoopEscapeTest():
    for i in range(10000):
        pass  # Do tight loop processing here...

TightLoopEscapeTest()
To work around this situation, you will want to call back into Rhino inside of your tight loop. Using RhinoScriptSyntax's Sleep function is a good way to do this without slowing down your code. For example:
import rhinoscriptsyntax as rs

def TightLoopEscapeTest():
    for i in range(10000):
        # Do tight loop processing here...
        rs.Sleep(1)

TightLoopEscapeTest()
If your loop is relatively fast, you may want to postpone the Sleep call or else it will slow down your script significantly. For example:
Sub TightLoopEscapeTest
  For i = 0 To 100000
    ' Do tight loop processing here...
    If ((i Mod 25) = 0) Then Call Rhino.Sleep(0)
  Next
End Sub
This will call the Sleep method only once every 25 iterations.
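The same every-N-iterations pattern in Python: here a stand-in callable takes the place of rs.Sleep(0) so the gating logic can be exercised outside Rhino (inside Rhino you would pass lambda: rs.Sleep(0)):

```python
def tight_loop(iterations, yield_to_host, every=25):
    """Run a tight loop, calling yield_to_host once every `every` iterations."""
    calls = 0
    for i in range(iterations):
        # Do tight loop processing here...
        if i % every == 0:
            yield_to_host()   # in Rhino: rs.Sleep(0)
            calls += 1
    return calls

print(tight_loop(100, lambda: None))  # yields at i = 0, 25, 50, 75
```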
Using OnCancelScript to handle script cancelling: suppose a script (a) modifies some settings, (b) performs a long-running operation, and (c) resets the modified parameters. If your script is cancelled in operation (b), then operation (c) never runs, leaving in place the settings that were modified when the script started (a). The OnCancelScript procedure gives you a place to clean up in this situation.
The following is a simple example that demonstrates the OnCancelScript procedure.
Sub TightLoopEscapeTest
  For i = 0 To 100000
    Call Rhino.Print(i)
    Call Rhino.Sleep(0)
  Next
End Sub

Sub OnCancelScript
  ' This procedure is called when the ESC key is pressed.
End Sub
This is the gr-qtgui package. It contains various QT-based graphical user interface blocks that add graphical sinks to a GNU Radio flowgraph. The Python namespace is gnuradio.qtgui, which would normally be imported as:

from gnuradio import qtgui
See the Doxygen documentation for details about the blocks available in this package. The relevant blocks are listed in the QT Graphical Interfaces group.
A quick listing of the details can be found in Python after importing by using:

help(qtgui)
There are a number of available QTGUI blocks for different plotting purposes. These include:
The time domain, frequency domain, and waterfall have both a complex and a floating point block. The constellation plot only makes sense with complex inputs. The time raster plots accept bits and floats.
Because the time raster plots are designed to show structure over time in a signal, frame, packet, etc., they never drop samples. This is a fairly taxing job and performance can be an issue. Since it is expected that this block will work on a frame or packet structure, we tend to be at the lowest possible rate at this point, so that will help. Expect performance issues at high data rates.
All QTGUI sinks have interactive capabilities.
Each type of graph has a different set of menu items in the context menu. Most have some way to change the appearance of the lines or surfaces, such as changing the line width, color, marker, and transparency. Other common features can set the sampling rate, turn a grid on and off, pause and unpause (stop/start) the display update, and save the current figure. Specific features are things like setting the number of points to display, setting the FFT size, FFT window, and any FFT averaging.
The time plots have triggering capabilities. Triggering can happen when the signal of a specific channel crosses (positive or negative slope) a certain level threshold. Or triggering can be done off a specific stream tag such that whenever a tag of a given key is found, the scope will trigger.
In the signal level mode, the trigger can be either 'auto' or 'normal', where the latter will only trigger when the event is seen. The 'auto' mode will trigger on the event or every so often even if no trigger is found. The 'free' mode ignores triggering and continuously plots.
By default, the triggers plot the triggering event at the x=0 (i.e., the left-most point in the plot). A delay can be set to delay the signal along the x-axis to observe any signal before the triggering event. The delay feature works the same for both level and tag triggers. The delay is set according to time in seconds, not samples. So the delay can be calculated as the number of samples divided by the sample rate given to the block.
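For example (the numbers are illustrative): to see 512 samples of pre-trigger history at a 32 kHz sample rate, the delay to request is:

```python
def trigger_delay(num_samples, sample_rate):
    # Delay is specified in seconds: samples / sample_rate.
    return num_samples / sample_rate

print(trigger_delay(512, 32000))  # 0.016 seconds
```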
All trigger settings (mode, slope, level, delay, channel, and tag key) are settable in the GRC properties boxes to easily set up a repeatable environment.
A note on the trigger delay setting. This value is limited by the buffer size and/or the number of points being displayed; it is capped by the minimum of these two values. The buffer size issue is generally only a problem when plotting a large number of samples. However, if the delay is set large to begin with (in the GRC properties box or before top_block.start() is called), then the buffers are resized accordingly, offering more freedom. This should only be a problem in a limited number of scenarios, but a log INFO level message is produced when asking for a delay outside of the available range.
The QT GUI blocks require the following dependencies.
To use the qtgui interface, a bit of boiler-plate code must be included. First, the sink is defined; then it must be exposed from C++ into Python using the "sip.wrapinstance" command; and finally, the "show" method is run on the new Python object. This sets up the QT environment to show the widget, but the qApplication must also be launched.
In the "main" function of the code, the qApp is retrieved. Then, after the GNU Radio top block is started (remember that start() is a non-blocking call to launch the main thread of the flowgraph), the qapp's "exec_()" function is called. This function is a blocking call while the GUI is alive.
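A pseudocode sketch of that flow, paraphrased from the shipped examples — treat the exact names (time_sink_c, pyqwidget, etc.) as version-dependent assumptions:

```
# 1. define the sink
snk = qtgui.time_sink_c(npoints, samp_rate, "Title", 1)

# 2. expose it from C++ into Python, 3. show it
pyWin = sip.wrapinstance(snk.pyqwidget(), QtGui.QWidget)
pyWin.show()

# in main():
qapp = Qt.QApplication(sys.argv)
tb.start()     # non-blocking: launches the flowgraph's main thread
qapp.exec_()   # blocking while the GUI is alive
tb.stop()
```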
There are graphical controls in all but the combined plotting tools. In the margins of the GUIs (that is, not on the canvas showing the signal itself), right-clicking the mouse will pull up a drop-down menu that will allow you to change difference parameters of the plots. These include things like the look of the lines (width, color, style, markers, etc.), the ability to start and stop the display, the ability to save to a file, and other plot-specific controls (FFT size for the frequency and waterfall plots, etc.).
All QTGUI sinks can accept and plot messages over their "in" message port. The message types must either be uniform vectors or PDUs. The data type held within the uniform vector or PDU must match the data type of the block itself. For example, a qtgui.time_sink_c will only handle vectors that pass the pmt::is_c32vector test while a qtgui.time_sink_f will only handle vectors that pass the pmt::is_f32vector test.
The sinks must only be used with one type of input model: streaming or messages. You cannot use them both together or unknown behavior will occur.
In the GNU Radio Companion, the QTGUI sink blocks can be set to message mode by changing the Type field. Most of the QTGUI sinks support multiple data types, even for messages, but GRC only displays the message type as the single gray color. Within the block's property box, you can set the type to handle the correct message data type (e.g., 'Complex Message' or 'Float Message'). When using a message type interface, GRC will hide certain parameters that are not usable or settable anymore. For example, when plotting a message in the time sink, the number of points shown in the time sink is determined by the length of the vector in the message. Presetting this in the GUI would have no effect.
The behavior in GRC is for convenience and to try and reduce confusion about properties and settings in the message mode. However, all of the API hooks are still there, so it is possible to set all of this programmatically. The results would be harmless, however.
Here is an example of setting up and using a message passing complex time sink block:
The QTGUI component also includes a number of widgets that can be used to perform live updates of variables through standard QT input widgets. Most of the widgets are implemented directly in Python through PyQT. However, GNU Radio is introducing more widgets, written and therefore available in C++, that also produce messages. The Python-based widgets only act as variables, so as they are changed, any block using those widgets to set parameters has its callback (i.e., set_value()) functions called.
There is currently a single configuration option in the preferences files to set the rendering engine of the QTGUI sinks. Located in etc/gnuradio/conf.d/gr-qtgui.conf:
[qtgui] style = raster
The available styles are:
We default this setting to raster for the mix of performance and usability. When using QTGUI sinks through an X-forwarding session over SSH, switch to using 'native' for a significant speed boost on the remote end.
We can also set a QT Style Sheet (QSS) file to adjust the look of our plotting tools. Set the 'qss' option of the 'qtgui' section in our configuration file to a QSS file. An example QSS file is distributed with the QTGUI examples found in share/gnuradio/examples/qt-gui/dark.qss. | https://www.gnuradio.org/doc/doxygen/page_qtgui.html | CC-MAIN-2018-26 | refinedweb | 1,386 | 63.9 |
Commit 980ac167, committed by Linus Torvalds
mm/page_ext: support extra space allocation by page_ext user
Until now, if some page_ext user wants to use its own field on page_ext, it has to be defined in struct page_ext by hard-coding. That wastes memory in the following situation:

struct page_ext {
#ifdef CONFIG_A
	int a;
#endif
#ifdef CONFIG_B
	int b;
#endif
};

Assume that the kernel is built with both CONFIG_A and CONFIG_B. Even if we enable feature A and do not enable feature B at runtime, each entry of struct page_ext takes two ints rather than one int. That is an undesirable result, so this patch tries to fix it.

To solve the above problem, this patch implements support for extra space allocation at runtime. When the need() callback returns true, its extra memory requirement is summed into the entry size of page_ext. Also, the offset of each user's extra memory space is returned. With this offset, users can use this extra space and there is no need to define the needed fields on page_ext by hard-coding.

This patch only implements the infrastructure. A following patch will use it for page_owner, which is the only user having its own fields on page_ext.

Link:
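The allocation scheme the message describes can be sketched generically (this is an illustration, not the kernel's actual page_ext code): each client exposes a need() predicate and a size; enabled clients have their sizes summed into the per-entry size and receive an offset:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct client {
    bool (*need)(void);  /* is this feature enabled at runtime? */
    size_t size;         /* extra bytes requested per entry     */
    size_t offset;       /* assigned offset within an entry     */
};

static bool feature_a_enabled(void) { return true;  }
static bool feature_b_enabled(void) { return false; }

static size_t init_entry_size(struct client *c, int n, size_t base)
{
    size_t total = base;             /* size of the core struct */
    for (int i = 0; i < n; i++) {
        if (c[i].need()) {
            c[i].offset = total;     /* client's data lives here */
            total += c[i].size;
        }
    }
    return total;                    /* per-entry size to allocate */
}
```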
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Interface representing a location where extensions, themes etc are installed.
import "nsIExtensionManager.idl";
Removes a file from the stage.
This cleans up the stage if there is nothing else left after the remove operation.
Stages the specified file by copying it to some location from where it can be retrieved later to complete installation.
Whether or not the user can write to the Install Location with the current access privileges.
This is different from restricted because it's not whether or not the location *might* be restricted, it's whether or not it actually *is* restricted right now.
An enumeration of nsIFiles for:
The file system location where items live.
Items can be dropped in at this location. Can be null for Install Locations that don't have a file system presence. Note: This is a clone of the actual location which the caller can modify freely.
The string identifier of this Install Location.
The priority level of this Install Location in loading.
Whether or not this Install Location is on an area of the file system that could be restricted on a restricted-access account, regardless of whether or not the location is restricted with the current user privileges. | http://doxygen.db48x.net/comm-central/html/interfacensIInstallLocation.html | CC-MAIN-2019-09 | refinedweb | 211 | 56.55 |
Value function

A value function is a particular field function that leaves the mesh of a field invariant. It is defined from a function g: R^d -> R^q such that:

    f(x, v) = (x, g(v))

Note that the input dimension d of g is the dimension of the values of the field, and its output dimension is q.

The creation of the ValueFunction object requires the function g and the integer n: the dimension of the vertices of the mesh M. This data is required for dimension-compatibility tests when a composite process is created using the spatial function.

The use case below illustrates the creation of a spatial (field) function from the function g: R^2 -> R^2 defined by g(x1, x2) = (x1^2, x1 + x2).
[1]:
from __future__ import print_function
import openturns as ot
import math as m
[2]:
# Create a mesh
N = 100
mesh = ot.RegularGrid(0.0, 1.0, N)
[3]:
# Create the function that acts on the values of the mesh
g = ot.SymbolicFunction(['x1', 'x2'], ['x1^2', 'x1+x2'])
[4]:
# Create the field function
f = ot.ValueFunction(g, mesh)
[5]:
# Evaluate f
inF = ot.Normal(2).getSample(N)
outF = f(inF)

# Print input/output at the first mesh nodes
xy = inF
xy.stack(outF)
xy[:5]
[5]: | https://openturns.github.io/openturns/1.15/examples/functional_modeling/value_function.html | CC-MAIN-2022-05 | refinedweb | 185 | 54.73 |
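Conceptually, a value function leaves the mesh untouched and maps g over the field values vertex by vertex. A plain-Python sketch of the same g, with no OpenTURNS dependency:

```python
def g(x1, x2):
    # Same g as above: (x1^2, x1 + x2)
    return (x1 ** 2, x1 + x2)

def apply_value_function(g, values):
    # The mesh is unchanged; only the value attached to each vertex is mapped.
    return [g(*v) for v in values]

print(apply_value_function(g, [(1.0, 2.0), (3.0, 4.0)]))
# [(1.0, 3.0), (9.0, 7.0)]
```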
Pluggable foundation blocks for building loosely coupled distributed apps.
Includes implementations in Redis, Azure, AWS, RabbitMQ, Kafka and in memory (for development).
To summarize, if you want pain-free development and testing while allowing your app to scale, use Foundatio!

Foundatio can be installed via the NuGet package manager. If you need help, please open an issue or join our Discord chat room. We're always here to help if you have any questions!
This section is for development purposes only! If you are trying to use the Foundatio libraries, please get them from NuGet.
Foundatio.sln Visual Studio solution file.
The sections below contain a small subset of what's possible with Foundatio. We recommend taking a peek at the source code for more information. Please let us know if you have any questions or need assistance!
Caching allows you to store and access data lightning fast, saving you expensive operations to create or get data. We provide four different cache implementations that derive from the ICacheClient interface:
MaxItems property. We use this in Exceptionless to only keep the last 250 resolved geoip results.
HybridCacheClient that uses the RedisCacheClient as ICacheClient and the RedisMessageBus as IMessageBus.
ICacheClient and a string scope. The scope is prefixed onto every cache key. This makes it really easy to scope all cache keys and remove them with ease.
using Foundatio.Caching;

ICacheClient cache = new InMemoryCacheClient();
await cache.SetAsync("test", 1);
var value = await cache.GetAsync<int>("test");
Queues offer First In, First Out (FIFO) message delivery. We provide four different queue implementations that derive from the IQueue interface:
using Foundatio.Queues;

IQueue<SimpleWorkItem> queue = new InMemoryQueue<SimpleWorkItem>();
await queue.EnqueueAsync(new SimpleWorkItem { Data = "Hello" });
var workItem = await queue.DequeueAsync();
Locks ensure a resource is only accessed by one consumer at any given time. We provide two different locking implementations that derive from the
ILockProvider interface:();
Allows you to publish and subscribe to messages flowing through your application. We provide four different message bus implementations that derive from the IMessageBus interface:
using Foundatio.Messaging;

IMessageBus messageBus = new InMemoryMessageBus();
await messageBus.SubscribeAsync<SimpleMessageA>(msg => {
    // Got message
});
await messageBus.PublishAsync(new SimpleMessageA { Data = "Hello" });
Allows you to run a long-running process (in process or out of process) without worrying about it being terminated prematurely. We provide three different ways of defining a job, based on your use case:

Jobs: all jobs must derive from the JobBase<T> class. You can then run jobs by calling RunAsync() on the job or passing it to the JobRunner class. The JobRunner can be used to easily run your jobs as Azure Web Jobs.

Work item jobs: jobs that are triggered when a message is published on the message bus. The job must derive from the WorkItemHandlerBase class. You can then run all shared jobs via the JobRunner class. The JobRunner can be used to easily run your jobs as Azure Web Jobs.
We provide different file storage implementations that derive from the IFileStorage interface:
We recommend using all of the IFileStorage implementations as singletons.
using Foundatio.Storage;

IFileStorage storage = new InMemoryFileStorage();
await storage.SaveFileAsync("test.txt", "test");
string content = await storage.GetFileContentsAsync("test.txt");
We provide five implementations that derive from the IMetricsClient interface:

We recommend using all of the IMetricsClient implementations as singletons.
IMetricsClient metrics = new InMemoryMetricsClient();
metrics.Counter("c1");
metrics.Gauge("g1", 2.534);
metrics.Timer("t1", 50788);
We have both slides and a sample application that shows off how to use Foundatio. | https://awesomeopensource.com/project/FoundatioFx/Foundatio.Kafka | CC-MAIN-2022-33 | refinedweb | 544 | 51.04 |
Hide horizontal and vertical lines in a JTree
Hi,
suppose we have three JTrees in a Windows L&F where the second one shall not show any vertical or horizontal lines for a node. If this restriction would be true for all three JTrees one could invoke
UIManager.put("Tree.paintLines", Boolean.FALSE).
However only the second one must not show any lines. I tried something like
tree.putClientProperty("Tree.paintLines", Boolean.FALSE);
tree.updateUI();

But unfortunately this does not work, as the lines are still shown. Furthermore, we need to set the lines "on/off" dynamically, dependent on the user data.
Does anyone have a clue to solve this problem?
Thx.
> Does anyone have a clue to solve this problem?
Take a look at the source code to see what that particular property does. Then maybe you will be able to override the UI to provide an on/off switch at a table level somehow.
- set your own UI, overriding these (to do nothing)
paintHorizontalLine(..)
paintVerticalLine(..)
- Try with
tree.putClientProperty("JTree.lineStyle", "None");

Bye.
JTree.lineStyle only works on the Metal L&F. Overriding the UI should work, but it is very ugly to set the UI each time the user changes the data. Besides this, imagine your code being executed on Windows, Linux and Mac OS.
Do I really have to create my own UI for each l&f just to hide the lines for a node? Is there no other way?
- only tested on windows, but this seems simple enough
import java.awt.*; import java.awt.event.*; import javax.swing.*; import javax.swing.tree.*; class Testing { boolean showLines = false; public void buildGUI() { JButton btn = new JButton("Show/Hide Lines"); final JTree tree = new JTree(); tree.setUI(new javax.swing.plaf.basic.BasicTreeUI(){ protected void paintHorizontalLine(Graphics g,JComponent c,int y,int left,int right){ if(showLines) super.paintHorizontalLine(g,c,y,left,right); } protected void paintVerticalLine(Graphics g,JComponent c,int x,int top,int bottom){ if(showLines) super.paintVerticalLine(g,c,x,top,bottom); } }); JFrame f = new JFrame(); f.getContentPane().add(new JScrollPane(tree),BorderLayout.CENTER); f.getContentPane().add(btn,BorderLayout.SOUTH); f.setSize(200,200); f.setLocationRelativeTo(null); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.setVisible(true); btn.addActionListener(new ActionListener(){ public void actionPerformed(ActionEvent ae){ showLines = !showLines; tree.repaint(); } }); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable(){ public void run(){ try{UIManager.setLookAndFeel("com.sun.java.swing.plaf.windows.WindowsLookAndFeel");}catch(Exception e){} new Testing().buildGUI(); } }); } }
This discussion has been closed. | https://community.oracle.com/tech/developers/discussion/comment/5774204/ | CC-MAIN-2021-25 | refinedweb | 481 | 50.12 |
Ubigraph is a system for visualizing dynamic graphs. This version is shipped in binary form as a standalone server that responds to requests using XML-RPC. This makes it easy to use from C, C++, Java, Ruby, Python, Perl, and other languages for which XML-RPC implementations are available. Since XML-RPC uses TCP-IP, the server (which visualizes the graph) can be run on a different machine/operating system than the client (which is manipulating the graph). It is also possible to have multiple clients updating the graph simultaneously. (Note that for clients to be on different machines from the server, firewalls must be configured to allow traffic on port 20738.)
After downloading the release:
$ gunzip UbiGraph-....tgz $ tar xvf UbiGraph-....tar $ cd UbiGraph-... $ bin/ubigraph_server & (empty black window) $ cd examples/Python $ ./run_all.sh
If you're familiar with Python, a good place to start is examples/Python/ubigraph_example.py. This example illustrates the higher-level API for ubigraph in Python:
This is alpha software. Please help us (and other users) by reporting problems you encounter. Problems can be emailed to support@ubietylab.net. Please also see the suggestions on the web site about submitting bug reports.
This version of the ubigraph software is shipped in binary form as a standalone server. Clients talk to the server using XML-RPC, a standard remote procedure call protocol that uses HTTP-POST requests to call methods. The method call and return results are encoded with XML. The use of XML-RPC makes it trivial to use Ubigraph with popular scripting languages such as Python and Ruby.
The server process must be started before any clients can connect to it. To do this, just run the ubigraph_server program found in the bin subdirectory of the distribution. You should be rewarded with a message ("Running Ubigraph/XML-RPC server.") and a new window which is empty and black.
You can now run the programs included with the distribution. In developing UbiGraph we were focussed on the layout algorithm, with the result that the GUI is still somewhat primitive. You can rotate the graph by holding the left mouse button and dragging. Dragging with the middle mouse button pans. There is a right-mouse button menu that will let you switch into fullscreen mode. A number of keystrokes are recognized:
You shouldn't need to worry about the XML layer unless you are implementing your own client interface in some language that is not yet supported. However, if you're curious to see what the messages being sent to and from the server look like, set XMLRPC_TRACE_XML=1 in your environment before running ubigraph_server. Here is an example call-response pair, which creates a new edge from vertex 0 to vertex 9, which is given an edge-id 423265977 by the server.
XML-RPC CALL: <?xml version="1.0" encoding="UTF-8"?> <methodCall> <methodName>ubigraph.new_edge</methodName> <params> <param><value><i4>9</i4></value></param> <param><value><i4>0</i4></value></param> </params> </methodCall> XML-RPC RESPONSE: <?xml version="1.0" encoding="UTF-8"?> <methodResponse> <params> <param><value><i4>423265977</i4></value></param> </params> </methodResponse>
The five functions shown below cover the basic operations. API functions are presented in C language syntax, but the way these are adapted to other languages is straightforward.
void ubigraph_clear(); int ubigraph_new_vertex(); int ubigraph_new_edge(int x, int y); int ubigraph_remove_vertex(int x); int ubigraph_remove_edge(int e);
ubigraph_clear resets the graph, deleting any vertices and edges that exist. It's a good idea to call this method at the beginning of any session, in case a previous client failed to clean up.
new_vertex creates a vertex, and returns its vertex-id (an integer). You need to remember this vertex-id to create edges with new_edge, which creates an edge between two vertices (specified by their vertex-ids), and returns its edge-id (an integer). To delete a vertex, call ubigraph_remove_vertex and supply its vertex-id; any edges touching the vertex are removed also. To delete an edge, call ubigraph_remove_edge and supply its edge-id. The remove methods return 0 on success, or -1 on failure (i.e., you tried to remove an edge or vertex that did not exist.)
If you do not want to keep track of vertex or edge-id's, there is an alternate pair of API routines that allow you to specify the vertex-id and edge-id when creating vertices and edges:
int ubigraph_new_vertex_w_id(int id); int ubigraph_new_edge_w_id(int id, int x, int y);
In the xmlrpc subdirectory of the distribution you can find bindings and/or examples of how to use the ubigraph server from various programming languages.
Python and Ruby are the easiest to get working, since XML-RPC is included in the standard libraries for these languages. For Java, you will need to install a .jar package for XMLRPC support. For C and C++ you will need to install the XMLRPC-C and libwww libraries.
XML-RPC is included in the Python standard library. An example usage is shown below:
import xmlrpclib # Create an object to represent our server. server_url = '' server = xmlrpclib.Server(server_url) G = server.ubigraph # Create a graph for i in range(0,10): G.new_vertex_w_id(i) # Make some edges for i in range(0,10): G.new_edge(i, (i+1)%10)
Ubigraph is distributed with a collection of Python examples. You can run them all using the script run_all.sh in the examples/Python subdirectory.
Python provides an easy way to experiment with the API and styles. If you start an interactive Python session and paste in the first few lines above, you can then generate some vertices and play with their styles.
$ python Python 2.3.5 (#1, Apr 25 2007, 00:02:14) Type "help", "copyright", "credits" or "license" for more information. >>> import xmlrpclib >>> server = xmlrpclib.Server('') >>> G = server.ubigraph >>> x = G.new_vertex() >>> y = G.new_vertex() >>> G.new_edge(x,y) 335979033 >>> G.set_vertex_attribute(x, 'color', '#ff0000') 0 >>> G.set_vertex_attribute(y, 'shape', 'torus') 0 >>> G.set_vertex_attribute(y, 'color', '#ffff40') 0 >>> G.set_vertex_attribute(x, 'label', 'This is red') 0
In examples/Python you will find ubigraph.py, which provides a higher-level interface to ubigraph:
XML-RPC is included with Ruby. Here is an example program:
require 'xmlrpc/client' server = XMLRPC::Client.new2("") for id in (0..9) server.call("ubigraph.new_vertex_w_id", id) end for id in (0..9) server.call("ubigraph.new_edge", id, (id+1)%10) end
Motohiro Takayama has written a nicer interface, Rubigraph, which hides the XMLRPC details:
require 'rubigraph' Rubigraph.init # initialize XML-RPC client. v1 = Vertex.new v2 = Vertex.new e12 = Edge.new(v1, v2) v1.color = '#003366' v2.shape = 'sphere' e12.label = 'edge between 1 and 2'
Rubigraph can be found in the subdirectory examples/Ruby/Rubigraph. It is distributed under an MIT license.
$ svn checkout
$ git clone git:://rubyforge.org/rubigraph.git
XML-RPC can be used with Perl via the Frontier::Client, available from CPAN as part of the Frontier-RPC package.
#!/usr/bin/perl use Frontier::Client; my $client = Frontier::Client->new( url => '';); $client->call('ubigraph.clear', 0); my $a = $client->call('ubigraph.new_vertex'); my $b = $client->call('ubigraph.new_vertex'); $client->call('ubigraph.new_edge', $a, $b)
You will need to install Apache XML-RPC for Java, which can be obtained from. The .jar files in the lib subdirectory of the Apache XML-RPC binary distribution should be placed in your usual CLASSPATH.
In the xmlrpc/Java subdirectory of the ubigraph distribution you will find ubigraph.jar, which provides a class org.ubiety.ubigraph.UbigraphClient that hides the xmlrpc details. Javadoc for this class can be found in the xmlrpc/Java/html subdirectory of the distribution. An example use is shown below:
import org.ubiety.ubigraph.UbigraphClient; public class Example { public static void main(String[] args) { UbigraphClient graph = new UbigraphClient(); int N = 10; int[] vertices = new int[N]; for (int i=0; i < N; ++i) vertices[i] = graph.newVertex(); for (int i=0; i < N; ++i) graph.newEdge(vertices[i], vertices[(i+1)%N]); } }
An API is provided for C and C++ that hides the underlying XML-RPC implementation. Once you have this API built, using it is very simple, e.g.:
#include <UbigraphAPI.h> int main(int const argc, const char ** const argv) { int i; for (i=0; i < 10; ++i) ubigraph_new_vertex_w_id(i); for (i=0; i < 10; ++i) ubigraph_new_edge(i, (i+1)%10); sleep(2); ubigraph_clear(); }
The xmlrpc/C subdirectory of the ubigraph distribution contains some things you will need to build. Here is how to proceed:
$ svn checkout xmlrpc-c $ cd xmlrpc-c $ ./configure --disable-libwww-client --enable-curl-client $ make -i $ sudo make install
-lubigraphclient -lxmlrpc_client -lxmlrpc -lxmlrpc_util -lxmlrpc_xmlparse -lxmlrpc_xmltokThe xmlrpc libraries should be installed in one of the standard library paths (e.g., /usr/include/lib). For the linker to find libubigraphclient.a, you will need to either copy this to a standard library path, or include the path with a -L option.
-lwwwapp -lwwwfile -lwwwhttp -lwwwnews -lwwwutils -lwwwcache -lwwwftp -lwwwinit -lwwwstream -lwwwxml -lwwwcore -lwwwgopher -lwwwmime -lwwwtelnet -lwwwzip -lwwwdir -lwwwhtml -lwwwmux -lwwwtrans -lmd5 -lxmlparse -lxmltok
Follow the instructions for C, above. Use extern "C" when including the header file, e.g.:
extern "C" { #include <UbigraphAPI.h> } int main(int const argc, const char ** const argv) { for (int i=0; i < 10; ++i) ubigraph_new_vertex_w_id(i); for (int i=0; i < 10; ++i) ubigraph_new_edge(i, (i+1)%10); sleep(2); ubigraph_clear(); }
Vertex attributes can be set with the following function:
int ubigraph_set_vertex_attribute(int x, string attribute, string value);
However, if you have a large number of vertices with similar attributes, you should use style-ids, as described later.
The following vertex attributes are intended for eventual inclusion in an "Ubigraph Pro" (i.e., not free) version. Please be cautioned that they may disappear from the free version in the future.
Edge attributes can be set with the following function:
int ubigraph_set_edge_attribute(int x, string attribute, string value);
The table below shows available edge attributes.
If you wish to change the style of a large number of vertices in a similar way, you should consider using style-ids. This allows you to predefine a vertex style (e.g., red cubes), and apply it to a large number of vertices.
There are eight functions in the API for managing styles:
int ubigraph_new_vertex_style(int parent_styleid); int ubigraph_new_vertex_style_w_id(int styleid, int parent_styleid); int ubigraph_set_vertex_style_attribute(int styleid, string attribute, string value); int ubigraph_change_vertex_style(int x, int styleid); int ubigraph_new_edge_style(int parent_styleid); int ubigraph_new_edge_style_w_id(int styleid, int parent_styleid); int ubigraph_set_edge_style_attribute(int styleid, string attribute, string value); int ubigraph_change_edge_style(int e, int styleid);
All new vertices begin with a style-id of 0, which is the default vertex style. To change attributes of all the vertices in the graph, you can use ubigraph_set_vertex_style_attribute(0, attribute, value). For example:
# Make all the vertices red. G.set_vertex_style_attribute(0, "color", "#ff0000")
You can create a new vertex style with the function ubigraph_new_vertex_style(parent_styleid), which derives a new style from an existing style. You can always provide 0 for the parent_styleid, which will derive a new style based on the default vertex style. For example:
mystyle = G.new_style(0) G.set_vertex_style_attribute(mystyle, "shape", "cube") mystyle2 = G.new_style(mystyle) G.set_vertex_style_attribute(mystyle2, "size", "0.3")
This creates a new style id, stored in the variable
mystyle,
which is derived from the default vertex style. Another
style, mystyle2, is derived from mystyle.
It might be helpful to think of derived styles in terms
of equations such as:
mystyle = default vertex style + [shape=cube]
mystyle2 = mystyle + [size=0.3]
When you change a style attribute, it affects all vertices with that style, and also all derived styles that have not changed that attribute. In this sense styles are similar to inheritance in object-oriented languages, cascading style sheets, InDesign styles, etc.
If for example we did:
G.set_vertex_style_attribute(0, "size", "1.5")
This would make the size 1.5 for both the default vertex style and mystyle.
The order in which styles are created and attributes set does not matter. That is, when you create a new style, you do not take a snapshot of the style from which it is derived. Changes made to a style continues to affect styles derived from it.
To set the style of a vertex, use the change_vertex_style(vertex-id, style-id) function.
Edge styles work the same way as vertex styles. Setting attributes of edge style 0 will change the default edge attributes. For example, to make spline edges the default:
G.set_vertex_style_attribute(0, "spline", "true")
If you are finding that API calls are slow (e.g., building a graph takes a long time), the problem is probably in the client XMLRPC implementation. Ubigraph can respond to between 105 to 106 API calls per second when called directly (without XMLRPC). When called via XMLRPC in loopback mode using a decent XMLRPC client, it can sustain 1-2 thousand API calls per second. If you are seeing rates substantially lower than this, there is likely a performance problem in your client XMLRPC implementation.
There are some simple changes that can result in drastic improvements (or losses) in performance. The essential points are:
Here are some benchmark results for creating a cube graph (N=1000 vertices). These performance numbers are from an 8-core Mac Pro (2x Quad-Core Intel Xeon, 3 GHz, 8Gb RAM) running Mac OS X 10.4.11 (Darwin 8.11.1).
These performance numbers are for Ubuntu running on the Mac Pro mentioned above using VMWare and 1-2 virtual processors. Your mileage may vary.
Performance bottlenecks can arise in these places:
14682 Python CALL sendto(0x3,0x62e8f4,0x11c,0,0,0) 14682 Python GIO fd 3 wrote 284 bytes "<?xml version='1.0'?> <methodCall> <methodName>ubigraph.set_edge_attribute</methodName> <params> <param> <value><int>2061447965</int></value> </param> <param> <value><string>arrow</string></value> </param> <param> <value><string>True</string></value> </param> </params> </methodCall> " 14682 Python RET sendto 284/0x11c 14682 Python CALL recvfrom(0x3,0xcc694,0x1,0,0,0) 14682 Python GIO fd 3 wrote 1 byte "H" 14682 Python RET recvfrom 1 14682 Python CALL recvfrom(0x3,0xe8294,0x1,0,0,0) 14682 Python GIO fd 3 wrote 1 byte "T" 14682 Python RET recvfrom 1 14682 Python CALL recvfrom(0x3,0xe8934,0x1,0,0,0) 14682 Python GIO fd 3 wrote 1 byte "T" 14682 Python RET recvfrom 1 14682 Python CALL recvfrom(0x3,0xe8974,0x1,0,0,0) 14682 Python GIO fd 3 wrote 1 byte "P" 14682 Python RET recvfrom 1 14682 Python CALL recvfrom(0x3,0xe89d4,0x1,0,0,0) 14682 Python GIO fd 3 wrote 1 byte "/" 14682 Python RET recvfrom 1 14682 Python CALL recvfrom(0x3,0xe8a74,0x1,0,0,0) 14682 Python GIO fd 3 wrote 1 byte "1" | http://ubietylab.net/ubigraph/content/Docs/index.html | CC-MAIN-2015-06 | refinedweb | 2,455 | 57.16 |
/*Simple Serial Communication*/int val;int ledPin = 13;void setup() { Serial.begin(9600); pinMode(ledPin, OUTPUT);}void loop() { while(Serial.available()==0); val = Serial.read()-'0'; Serial.print("The value you entered is: "); Serial.println(val);}
it won't do for me to send 1024 or 180 and it to be received as 1, 0, 2, 4 or 1,8,0.
Send...a packet terminator, like carriage return (which the Serial Monitor can add for you).
use atoi() to convert the array to a number, and use the number.
What does this mean:
What class does that function apply to?
How do I check (from the arduino end) when a carriage return occurs.if(serial.read()==13) ? (As per decimal conversion of ASCII)?
char c = Serial.read(); if(c != '\n') { // Not a carriage return } else { // A carriage return finally got here... }
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=147229.msg1107016 | CC-MAIN-2015-48 | refinedweb | 176 | 61.33 |
Good morning all.
I am running JBossWs 3.0.1 (about 95% sure of the version) on JBoss App Server 4.2.2 on the service side. The JDK over there is 1.6.0_03.
When I send a request to my service using SOAPui, I get back the expected XML in the SOAP message. When I send a request using the client stubs I generated using wsconsume, the outgoing and incoming messages look good, and the proper objects get created by the XML parser (this service returns a custom object), but all fields are populated to their default values (null for strings, 0 for ints).
When I turn Wireshark on, all the data is definitely there to create the objects and populate them. The XML within the envelope appears to be well-formed. I don't get any exceptions until the bad objects percolate through my code. I assume that this is a jaxb problem, but I'm honestly at a loss as to how to fix it. Does anyone have any ideas?
I'll keep looking for answers and will post what I did wrong if I find it. If anyone else has seen this or thinks they have and want more information, please let me know.
And thanks for the help, all who read this.
I am seeing similar issues using the same JBossWS, app server and JDK. I played around with the SOAP annotation elements and got lots of different results. That was when I went home for the weekend. ;-) I will bang on it some more but am also a bit baffled.
Okay, I solved my problem. Sort of. Basically, it was an annotation thing. We're trying to use the same package to evaluate SOAP and REST alternatives for a service layer, and some of the REST stuff was screwing up our namespaces. So now I have that problem to deal with, but that's beyond the scope of this forum.
Thanks for the help. | https://developer.jboss.org/thread/103002 | CC-MAIN-2018-39 | refinedweb | 331 | 83.36 |
creates special futures. These futures wait in its destructor, until the work of the associated promise is done. That is the reason, why the creator has not to take care of its child. But it gets even better. You can execute a std::future as a fire and forget job. The by std::async created future will be executed just in place. Because the std::future fut is in this case not bound to a variable, it's not possible to invoke fut.get() or fut.wait() on the future to get the result of the promise.
Maybe, my last sentences were a bit too confusing. So I'll compare an ordinary future with a fire and forget future. It is necessary for a fire and forget future, that the promise runs in a separate thread to start immediately with its work. This is done by the std::launch::async policy. You can read the details to the launch policy in the post asynchronous function calls.
auto fut= std::async([]{return 2011;});
std::cout << fut.get() << std::endl; /// 2011
std::async(std::launch::async,[]{std::cout << "fire and forget" << std::endl;}); // fire and forget
The fire and forget futures have a big charm. They will run in place and execute there work package, without the creator taking care of them. The simple example shows the described behaviour.
// async.cpp
#include <iostream>
#include <future>
int main() {
std::cout << std::endl;
std::async([](){std::cout << "fire and forget" << std::endl;});
std::cout << "main done " << std::endl;
}
Without further ado, the output.
The praise for the behaviour is high. Too high.
The future, that is created by std::async, waits in its destructor, until its work is done. An other word for waiting is blocking. The future blocks the progress of the program in its destructor. The becomes obvious, in case you use fire and forget futures.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
// blocking.cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
int main(){
std::cout << std::endl;
std::async(std::launch::async,[]{
std::this_thread::sleep_for(std::chrono::seconds(2));
std::cout << "first thread" << std::endl;
});
std::async(std::launch::async,[]{
std::this_thread::sleep_for(std::chrono::seconds(1));
std::cout << "second thread" << std::endl;}
);
std::cout << "main thread" << std::endl;
}
The program executes two promises in their own thread. The resulting futures are fire and forget futures. These futures block in their destructor until the associated promise is done. The result is, that the promise will be executed with high probability in that sequence, in which you find them in the source code. That is exactly what you see in the output of the program.
I want to stress this point once more. Although I create in the main-thread two promises, which are executed in separate threads, the threads run in sequence one after the other. That is the reason, why the thread with the more time consuming work package (line 12) finishes first. Wow, that was disappointing. Instead of three threads running concurrently, each thread will be executed after another.
The key issue is, that the by std::async created thread is waiting in its destructor until the associated promise is done, can not be solved. The problem can only be mitigated. In case you bind the future to a variable, the blocking will take place at the time point, when the variable goes out of scope. That is the behaviour, you can observe in the next example.
// notBlocking.cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
int main(){
std::cout << std::endl;
auto first= std::async(std::launch::async,[]{
std::this_thread::sleep_for(std::chrono::seconds(2));
std::cout << "first thread" << std::endl;
});
auto second= std::async(std::launch::async,[]{
std::this_thread::sleep_for(std::chrono::seconds(1));
std::cout << "second thread" << std::endl;}
);
std::cout << "main thread" << std::endl;
}
Now, the output of the program matches our intuition, because the three threads are executed in parallel. The future first (line 12) and second (line 17) are valid until the end of the main-function (line 24). So, the destructor will perhaps blocks at this time point. The result is, that the threads with the smallest work package is the fastest one.
I have to admit, my usage of std::async creates futures very contrived. At first, the futures were not bound to a variable. At second, I didn't use the future to pick up the result from the promise by a get or wait call. Exactly in that situation, we can observe the strange behaviour, that the future blocks in its destructor.
The key reason for these posts was it, to show, that a fire and forget future, that is not bound to a variable, must be handled with great care. But this point doesn't hold for futures, which are created by std::packaged_task or std::promise.
I guess, you know it. I'm not the big fan of condition variables. So I want to compare condition variables with tasks to synchronize threads. Because I believe, tasks are the most times the less error prone and therefore the better choice. So, stay tuned for the next post. (Proofreader Alexey Elymanov)
Go to Leanpub/cpplibrary "What every professional C++ programmer should know about the C++ standard library". Get your e-book. Support my blog.
Name (required)
Website
Notify me of follow-up comments
Read more...
Hunting
Today 609
All 382631
Currently are 169 guests and no members online
Read more... | http://modernescpp.com/index.php/the-special-futures | CC-MAIN-2017-34 | refinedweb | 930 | 65.62 |
On Tue, Mar 29, 2011 at 9:31 PM, Benjamin Peterson
<report@bugs.python.org> wrote:
> 2011/3/29 Darren Dale <report@bugs.python.org>:
>> The benefit of abstractproperty.abstract{...} is that one decorator is required instead of two, right? Are there others?
>
> Mostly it doesn't create a weird asymmetry between a @abstractproperty
> decorated function not needing @abstractmethod but
> @someabstractprop.setter needing it.
Did you read the documentation I provided in the patch? There is no
asymmetry, the documentation and examples provided by previous python
releases are demonstrably inadequate. For example:
class AbstractFoo(metaclass=ABCMeta):
def get_bar(self): ...
def set_bar(self, val): ...
bar = abstractproperty(get_bar, set_bar)
The documentation indicates that a subclass will not be instantiable
until all of its abstract methods and properties are overridden. What
is abstract about the bar property? Was it the getter, setter, or
both, or neither? The answer is neither. A subclass can simply do:
class Foo(AbstractFoo):
bar = property(AbstractFoo.get_bar, AbstractFoo.set_bar)
and it is instantiable. On the other hand, for AbstractFoo to assert
that subclasses must provide concrete implementations of the get_bar
and set_bar methods, it must decorate get_bar and set_bar with
@abstractproperty. This is true for previous releases of python, the
documentation of abstractproperty in previous python releases is
simply incomplete. If a method is abstract, it needs to have an
__isabstractmethod__ attribute that is True, and @abstractmethod
provides the means of setting this attribute.
This patch simply extends abstractproperty so it can respect the
abstractedness of the methods assigned to it. If somebody defines an
ambiguous abstractproperty like my AbstractFoo example, they get the
same result with the patch as they did without: an abstract property
with two concrete methods (this is an unfortunate situation that
cannot be fixed without breaking backwards compatibility).
Therefore, there is no asymmetry between when @abstractmethod is
required and when it is not. If the *method* is abstract and must be
reimplemented by a subclass, @abstractmethod is required. Even for
methods that participate in property definitions, even with
<=python-3.2.
>>.
>
> That's not true. The method could be tagged in @abstractgetter decorator.
I think you misunderstood my point. I agreed with you that it could be
tagged by @abstractgetter. It cannot be tagged by the constructor.
That is where an asymmetry would be introduced between when
@abstractmethod is needed (declare methods abstract before passing
them to the constructor) and when it would not be (passing methods to
abstractgetter which declares them abstract).
(By the way, in review of issue11610.patch, GVR said he thought I had
the right idea and that the backward compatibility goal was satisfied.
Some of these points were covered in that discussion.)
Darren | https://bugs.python.org/msg132567 | CC-MAIN-2021-21 | refinedweb | 445 | 50.43 |
Hi Daniel,

This is a fair-sized list of issues ... must have been cooking for a
while ?

...

Daniel Lezcano wrote:
>
> Hi Serge,
>
> here are a few suggestions for the containers in general, and most of
> these suggestions are pre-requisites for CR (maybe not the highest
> priority, but just to keep in mind).
>
> * time virtualization : for absolute timer CR, TCP socket timestamps, ...

Good point.

>
> * inode virtualization : without this you won't be able to migrate some
> applications, e.g. samba, which rely on the inode numbers.

Hmmm... have you given it a thought ?

>
> * debugging tools for the containers: at present we are not able to
> debug a multi-threaded application from outside of the container.

Why not ? Does ptrace-ing from the parent container not work ?

>
> * poweroff / reboot from inside the container : at poweroff / reboot,
> all the processes are killed except the init process, which will stay
> there, leaving the container blocked. Maybe we can send a SIGINFO signal
> to the init's parent with some information, so it will be up to the parent to:
> - ignore the signal
> - stop the container (poweroff/halt)
> - stop and start again the container (reboot).
>
>>
>
> Right.
>
>> checkpoint/restart needs... checkpoint/restart.
>
> I know you are working hard on a CR patchset, and most of the questions /
> suggestions below were already addressed on the mailing list some
> months ago, but IMO they were eluded :) If you can talk about these points
> and clarify what approach would be preferable, that would be nice.
>
> IMHO the all-in-kernel monolithic approach raises some problems:

Hmmm... another round ? :(

So, clearly, I couldn't resist :p

> * the tasks are checkpointed from an external process, and most of the
> kernel code is designed to run as current

I think we are already mostly reusing code, with few exceptions. Can
you elaborate on where the problem is ?

> * if a checkpoint or a restart fails, how do we debug that ?
> How
> someone in the community using the CR can report information about
> where the checkpoint has failed ? The same for the
> restart. And a much harder case is if a restart succeeded but a
> resource was badly restored, making the application continue its
> execution but fail 1 hour later.

For checkpoint we have a nice mechanism that adds (a) record(s) to the
checkpoint image that describe the error when it occurs. There are a
few examples already in the code. We haven't made much progress on the
restart front, yet. I'm pretty sure any idea to this end is applicable
in either approach.

> * how can this be maintained ? who will port the CR each time a
> subsystem design changes ?
>
> * the current patchset is full kernel, but needs an external tool to
> create the process tree by digging in the statefile, weird.

It uses the head of the data to create the process hierarchy. What's
weird about it ? The main advantage is the flexibility it provides. The
alternative is to start all tasks in the kernel (a la OpenVZ), or what
you suggest, which sounds like .. hmm .. an external tool to create the
process tree by digging in the statefile :p

> * the container and the checkpoint/restart are not clearly
> decorrelated; that brings a dangerous heuristic into the kernel,
> especially with nested namespaces and partial resource checkpoints. IMHO,
> the checkpoint / restart should succeed even if the resources are not
> isolated; we should not CR some boundaries like the namespaces.

That's already possible in the current approach.

> Regarding these points and the comments of the Kerrighed and google guys,
> maybe it would be interesting to discuss the following design of the CR:
>
> 1) create a synchronism barrier (not the freezer), where all the tasks
> can set the checkpoint or restart status

This is already how it works in restart.

> That allows to have a task to abort the checkpoint at any time by
                                                      ^^^^^^^^^^^

Is this an issue with the current approach ?
BTW, to be able to checkpoint at _any time_, preemptively, you _must_ be able to checkpoint externally to the tasks. For instance, how would you handle a ptraced task ? STOPed task ? > setting a status error in the synchronism barrier. The initiator of the > checkpoint / restart is blocked on this barrier until the checkpoint / > restart finishes or fails. If the initiator exits, that's cancel the > current operation making possible to do Ctrl+C at checkpoint or restart > time. Aborting using ctrl-c or any other method is already possible now with no harm done. In fact, with less harm than when requiring the cooperation of participating tasks. > > 2) make a vdso which is the entry point of the checkpoint and set this > entry as a signal handler for a new signal SIGCKPT, the same for > SIGRESTART (AFAIR this is defined in posix 1003.m). > > This approach allows to checkpoint from the current context which is > less arch dependant and/or to override the handler with a specific Why is it less arch dependent ? The only arch dependent code in the current patchset is what is defined differently by separate archs (cpus, mm-context). > library making possible to do some work before calling the > sys_checkpoint itself. That will allows to build the CR step by step by > making in userspace a best-effort library to checkpoint/restart what is > not supported in the kernel. This sort of notification is indeed desirable and can be added to either approach. > > 3) a process gains the checkpointable property with a specific flag or > whatever. All the childs inherit this flag. That will allows to identify > all the tasks which are checkpointable without isolating anything and > than opens the door to the checkpoint/restart of a subset of a process tree. Already possible. Isolation is a nice feature, not a requirement (at least if you ask me :) > > 4) dump everything in a core-file-like and improve the interpreter to > recreate the process tree from this file. 
How is this different from above ? > > Dynamic behaviour would be: > > Checkpoint: > - The initiator of the checkpoint initialize the barrier and send a > signal SIGCKPT to all the checkpointable tasks and these ones will jump > on the handler and block on the barrier. > > - When all these tasks reach this barrier, the initiator of the > checkpoint dumps the system wide resources (memory, sysv ipc, struct > files, etc ...). Note that with namespaces, there are no "system wide resources", but instead there are multiple namespaces with resources. > > - When this is done, the tasks are released and they store their > process wide resources (semundo, file descriptor, etc ...) to a > current->ckpt_restart buffer and then set the status of the operation > and block on the barrier. > > - The initiator of the checkpoint then collects all these informations > and dump them. > > - Finally the initiator of the checkpoint release the tasks. Can you explain why this approach is better than the current one ? Rename "initiator" to "external checkpointer", and all the rest is nearly the same. Only that instead of relying on the freezer code (which is, clearly, reuse of existing code!), your approach requires a delicate mechanism to allow all tasks to cooperate at the initiator's will. > > > Restart: > - The user executes the statefile, that spawns the process tree and all > the processes are blocked in the barrier. Done already. > > - The initiator of the restart restore the system wide resources > and fill the restarted processes' current->ckpt_restart buffer. > > - The initiator sends a SIGRESTART to all the tasks and unblock the tasks > > - all the tasks restore their process wide resources regarding the > current->ckpt_restart buffer. Done already (with the exception that they do it one by one because the checkpoint image is streamed). > > - all the tasks write their status and block on the barrier Done. 
> > - the initiator of the restart release the tasks which will return to > their execution context when they were checkpointed. Ditto. > > This approach is different of you are doing but I am pretty sure most of > the code is re-usable. I see different advantages of this approach: > > - because the process resources are checkpointed / restarted from > current, it would be easy to reuse some syscalls code (from the kernel > POV) and that would reduce the code duplication and maintenance overhead. Checkpoint and restart are asymmetric: checkpoint needs to _observe_ and record, and restart needs to _create_ and build. That's why reusing existing syscalls is extremely helpful for restart, but not so much for checkpoint. In current approach, restart indeed is done in the current context. And that's where you'd like to reuse syscalls. Checkpoint is done by observing tasks (out of their context), and I believe the code will be pretty much the same for in-context. Being out of context requires little bit glue to guarantee safe access to non-current resources. > > - the approach is more fine grained as we can implement piece by piece > the checkpoint / restart. Can do. Was discussed on containers mailing list some time ago with Kerrighead, IIRC in regarding IPC namespaces. > > - as the statefile is in the elf format, gdb could be used to debug a > statefile as a core file > > - as each process checkpoint / restart themselves, most of the > execution context is stored in the stack which is CR with the memory, so > when returning from the signal handler, the process returns to the right > context. That is less complicated and more generic than externally > checkpoint the execution context of a frozen task which would be > potentially different for the restart. Ehh ? The code is actually straight forward. No kernel stack, and user stack is in memory anyway. Take a look at the code, it's pretty straightforward. 
> > > I hope Serge you can present this approach as an alternative of the > current patchset __if__ this one is not acceptable. There you go. I could not resist :O Now, before I go hide (...) - some of these points require attention, e.g. - error reporting on restart, notification mechanisms, partial containers and selected resources, etc. Oren. | https://listman.redhat.com/archives/libvir-list/2009-July/msg00047.html | CC-MAIN-2021-21 | refinedweb | 1,653 | 70.23 |
Pure Python 3 wrapper for the Zenodo REST API
Project description
PyZenodo
Pure Python wrapper for Zenodo REST API.
Allows upload / download of data from Zenodo.
Install
pip install pyzenodo3
Latest development
git clone pip install -e pyzenodo3
Usage
Here are several examples of using Zenodo from Python 3. All of them assume you have first:
import pyzenodo3 zen = pyzenodo3.Zenodo()
Upload file to Zenodo
Get a Zenodo
deposit:writeAPI Token. This token must remain private, NOT uploaded to GitHub, etc.!
create a simple text file
mymeta.inicontaining title, author etc. (see the example
meta.iniin this repo)
upload file to Zenodo (myApiToken is the cut-n-pasted Zenodo API text token)
python upload_zenodo.py myApiToken mymeta.ini myfile.zip --use-sandbox
Note the
--use-sandbox is to avoid making junk uploads while testing out.
Once you're sure things are working as intended, not using that flag uploads to "real" Zenodo permanently.
Find Zenodo record by Github repo
Rec = zen.find_record__by_github_repo('scivision/lowtran')
This Zenodo Record contains the metadata that can be further manipulated in a simple class containing the data in dictionaries, with a few future helper methods.
Find Zenodo records by Github username
Recs = zen.search('scivision')
Recs is a
list of Zenodo Records for the GitHub username queried, as in the example above.
Notes
- We don't use
deposit:publishAPI token to keep a human-in-the-loop in case of hacking of sensor nodes.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pyzenodo3/ | CC-MAIN-2020-24 | refinedweb | 262 | 59.3 |
Web tests plug-ins enable you to isolate and reuse code outside the main declarative statements in your Web test. A customized Web test plug-in offers you a way to call some code as the Web test is run. The Web test plug-in is run one time for every test iteration. In addition, if you override the PreRequest or PostRequest methods in the test plug-in, those request plug-ins will run before or after each request, respectively.
You can create a customized Web test plug-in by deriving your own class from the WebTestPlugin base class.
You can use customized Web test plug-ins with the Web tests you have recorded, which enables you to write a minimal amount of code to attain a greater level of control over your Web tests. However, you can also use them with coded Web tests. For more information, see How to: Create a Coded Web Test.
You can also create load test plug-ins. For more information, see
How to: Create a Load Test Plug-In.
Open a test project that contains a Web test.
For more information about how to create a test project, see How to: Create a Test Project.
Create a class library project in which to store your Web test and a Web test plug-in.
Select the class library project and then right-click Add Reference.
On the .NET tab, select Microsoft.VisualStudio.QualityTools.WebTestFramework. Click OK.
In your test project, right-click and select Add Reference.
On the Projects tab, select the new class library. Click OK.
Write the code of your plug-in. First, create a new public class that derives from WebTestPlugin.
Implement code inside one or both of the PreWebTest and M:Microsoft.VisualStudio.TestTools.WebTesting.WebTestPlugin.PostWebTest(System.Object,Microsoft.VisualStudio.TestTools.WebTesting.PostWebTestEventArgs) event handlers.
After you have written the code, build the new project.
Open a Web test.
To add the Web test plug-in, click Set Web Test Plug-in on the toolbar. This displays your test plug-in in the Set Web Test Plug-in dialog box. Select your class and then click OK.
You can also change the Web test plug-in in the Properties window. Select the Web test node and press F4. In the Properties window, you see the Plug-in category and the plug-ins you have added to the Web test.
The following code creates a customized Web test plug-in that adds an item to the WebTestContext that represents the test iteration.++;
}
}
} | http://msdn.microsoft.com/en-us/library/ms243191.aspx | crawl-002 | refinedweb | 422 | 66.94 |
In C# it can be tiresome to do certain image editing functions using GDI+. This post has some fun editing methods which can come in handy at times. I have also included a nice little C# program to show all the functionality of the methods below.
Saving a Jpeg
The first thing to do here is set up the method signature with the input parameters. These are the save file path (string), the image to save (System.Drawing.Bitmap), and a quality setting (long).

private void saveJpeg(string path, Bitmap img, long quality)
The next few steps set up the encoder information for saving the file. This includes creating an EncoderParameter for the quality of the jpeg. We then need to get the jpeg codec information from the computer, which I do with a helper function that loops through the available codecs and returns the one matching the requested MIME type. The line under that makes sure a jpeg codec was actually found on the computer; if not, the method just returns. The last thing to do is save the bitmap using the codec and the encoder information.
The last thing to do is save the bitmap using the codec and the encoder infomation.(); // Find the correct image codec for (int i = 0; i < codecs.Length; i++) if (codecs[i].MimeType == mimeType) return codecs[i]; return null; }
Cropping

Cropping an image is refreshingly simple: the Clone method on the Bitmap class can copy out just the portion of the image described by a rectangle.
private static Image cropImage(Image img, Rectangle cropArea)
{
    Bitmap bmpImage = new Bitmap(img);
    Bitmap bmpCrop = bmpImage.Clone(cropArea, bmpImage.PixelFormat);
    return (Image)(bmpCrop);
}
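For example, to pull a 100x100 region out of the top-left corner of an image (the paths here are placeholders):

```csharp
// Requires: using System.Drawing;
Image source = Image.FromFile(@"C:\images\flower.jpg");  // hypothetical path
Rectangle cropArea = new Rectangle(0, 0, 100, 100);      // x, y, width, height
Image cropped = cropImage(source, cropArea);
cropped.Save(@"C:\images\flower_crop.png");
```

One caveat: if the rectangle extends past the bounds of the source image, GDI+ throws a rather misleading OutOfMemoryException, so validate the crop area against the image dimensions first.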
Resizing
This next set of code is slightly longer and more complex. The main reason is that this resize function keeps the height and width proportional.
To start with, we see that the input parameters are the image to resize (System.Drawing.Image) and the size (System.Drawing.Size). Also in this set of code are a few variables we use. The first two are the source height and width, which are used later. The other three variables are used to calculate the proportion information.

private static Image resizeImage(Image imgToResize, Size size)
{
    int sourceWidth = imgToResize.Width;
    int sourceHeight = imgToResize.Height;

    float nPercent = 0;
    float nPercentW = 0;
    float nPercentH = 0;
}
The next step is to figure out what the size of the resized image should be. First we calculate the percentages of the new size compared to the original. Next we decide which percentage is smaller, because that is the proportion of the original image we will use for both height and width. And finally we calculate the number of height and width pixels for the destination image.

nPercentW = ((float)size.Width / (float)sourceWidth);
nPercentH = ((float)size.Height / (float)sourceHeight);

if (nPercentH < nPercentW)
    nPercent = nPercentH;
else
    nPercent = nPercentW;

int destWidth = (int)(sourceWidth * nPercent);
int destHeight = (int)(sourceHeight * nPercent);
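To make the proportion math concrete, here is a small standalone sketch (no GDI+ needed) that computes the destination size for a 512x512 source squeezed into a 100x80 bounding box; the class and method names are just for illustration:

```csharp
using System;

static class ResizeMath
{
    // Returns {destWidth, destHeight} for a source scaled to fit a bounding box.
    public static int[] ComputeDestSize(int sourceWidth, int sourceHeight,
                                        int boxWidth, int boxHeight)
    {
        float nPercentW = (float)boxWidth / sourceWidth;
        float nPercentH = (float)boxHeight / sourceHeight;

        // Use the smaller ratio so both dimensions fit inside the box.
        float nPercent = Math.Min(nPercentW, nPercentH);

        return new[] { (int)(sourceWidth * nPercent),
                       (int)(sourceHeight * nPercent) };
    }

    static void Main()
    {
        // 100/512 = 0.1953..., 80/512 = 0.15625 -> the smaller ratio wins.
        int[] dest = ComputeDestSize(512, 512, 100, 80);
        Console.WriteLine(dest[0] + "x" + dest[1]); // prints 80x80
    }
}
```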
The final thing to do is create the bitmap (System.Drawing.Bitmap) which we will draw the resized image on using a Graphics (System.Drawing.Graphics) object. I also set the interpolation mode, which is the algorithm used to resize the image. I prefer HighQualityBicubic, which from my testing seems to return the highest quality results. And just to clean up a little, I dispose the Graphics object.

Bitmap b = new Bitmap(destWidth, destHeight);
Graphics g = Graphics.FromImage((Image)b);
g.InterpolationMode = InterpolationMode.HighQualityBicubic;

g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
g.Dispose();
And this gives us the final code.
private static Image resizeImage(Image imgToResize, Size size)
{
    int sourceWidth = imgToResize.Width;
    int sourceHeight = imgToResize.Height;

    float nPercent = 0;
    float nPercentW = 0;
    float nPercentH = 0;

    nPercentW = ((float)size.Width / (float)sourceWidth);
    nPercentH = ((float)size.Height / (float)sourceHeight);

    if (nPercentH < nPercentW)
        nPercent = nPercentH;
    else
        nPercent = nPercentW;

    int destWidth = (int)(sourceWidth * nPercent);
    int destHeight = (int)(sourceHeight * nPercent);

    Bitmap b = new Bitmap(destWidth, destHeight);
    Graphics g = Graphics.FromImage((Image)b);
    g.InterpolationMode = InterpolationMode.HighQualityBicubic;

    g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
    g.Dispose();

    return (Image)b;
}
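Putting the pieces together, a typical thumbnail pipeline might look like this (paths and sizes are examples; resizeImage and saveJpeg are the methods from this post):

```csharp
// Requires: using System.Drawing;
Image original = Image.FromFile(@"C:\images\flower.jpg");  // hypothetical path
Image thumb = resizeImage(original, new Size(128, 128));
saveJpeg(@"C:\images\flower_thumb.jpg", new Bitmap(thumb), 90L);
original.Dispose();
thumb.Dispose();
```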
Here is the source code and a C# VS2005 Express Edition solution with the needed methods and some test code.
Source Files:
thanks a lot :) it is really helpful to me in current project :D thanks again ...
Superb! Helped me back in VS 2005.. but now I am using VS 2010.. is there anything equivalent you have for this version? thanks again!
When converting, how do you preserve the DPI for the saved image?
Thank you!!! Hails from Mexico!
public static class ImageExtensions
{
    public static Image ScaleToFit(this Image helper, double Height, double Width)
    {
        double percentWidth = (double)Width / (double)helper.Width;
        double percentHeight = (double)Height / (double)helper.Height;

        double multiplier = 0;

        if (percentWidth > percentHeight)
            multiplier = percentHeight;
        else
            multiplier = percentWidth;

        return helper.GetThumbnailImage((int)(helper.Width * multiplier),
            (int)(helper.Height * multiplier), null, IntPtr.Zero);
    }
}
I want it in Web Application Form
when i crop an .jpg image and save it, the program changes the image extention to .png i don't want to change the image extension. is there any solution?
Resize worked perfectly. Thanks
may the good Lord bless you and you entire household...Great Code
Extension Methods are great here:
can u suggest how to select an already drawn line and then resize it.. i m currently developing a cad like application in c#.
nyc code and its relly help:)
Thanks
is there any javascript involved in it?? I want a a croping and resizing like facebook where a client can upload and crop its picture.
Many Thanks chillaxdesigns@gmail.com
Good post. Thanks
To fix the light border around some resized images, you need to set the other 5 quality settings to High on the Graphics object and use an ImageAttributes instance with TileModeXY set for the last parameter of DrawImage.
For a list of the other 29 GDI bugs to avoid, see
For an open-source implementation without these bugs, see the project.
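A hedged sketch of the fix this comment describes, replacing the Graphics setup in resizeImage above (the comment's "TileModeXY" corresponds to WrapMode.TileFlipXY; treat this as illustrative rather than the post author's code):

```csharp
// Requires: using System.Drawing; using System.Drawing.Drawing2D;
//           using System.Drawing.Imaging;
Bitmap b = new Bitmap(destWidth, destHeight);
using (Graphics g = Graphics.FromImage(b))
using (ImageAttributes attrs = new ImageAttributes())
{
    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
    g.SmoothingMode = SmoothingMode.HighQuality;
    g.PixelOffsetMode = PixelOffsetMode.HighQuality;
    g.CompositingQuality = CompositingQuality.HighQuality;

    // TileFlipXY mirrors edge pixels instead of blending with transparent
    // black, which removes the light halo around resized images.
    attrs.SetWrapMode(WrapMode.TileFlipXY);

    g.DrawImage(imgToResize,
        new Rectangle(0, 0, destWidth, destHeight),
        0, 0, imgToResize.Width, imgToResize.Height,
        GraphicsUnit.Pixel, attrs);
}
```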
Thanks a lot...Great tutorial !
hi , i want code for display popup box and crop after upload the image to my database.can any body please help me.this is my mail id svmbabu547@gmail.com.
hello, Any buddy can tell me about the how can be do the color separation in the c# means find the separate weights of the blue, red , green color in a single image
Hello, i have an embedded circuit am designing (Home security Burglar alarm),it has webcam connected to it then to my Pc serialport,in event of burglar breaking into my home (by tripping the sensors) the device sends letter like " c " to the Pc serialport which signals my C# application to activates for my webcam (30secs to 2minutes) to take capture images of the burglar and save it on my harddrive with different default names at different intervals this happens once my sensors are tripped on my hardware.
is there any possibilty for implementing complex image editing tools
i am getting low quality, when i resize small size to big size.
I have a little problem... I know how to use C. But I really can't find a good version of C#... Some help please....
Your effort is very much appreciated.
I used the image crop & was good for purpose
Thanks
Check this Article:
Just a note, Bitmaps can be manipulated in streams so if you are having issues writing the file then editing it and trying to overwrite it rather do the resize in a MemoryStream. Works nicely for web apps and reduces the disk IO. The MemoryStream can be created using a byte array for those that didn't know.
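A minimal sketch of that idea, assuming the image arrives as a byte array (the variable names are hypothetical, and resizeImage is the method from the post):

```csharp
// Requires: using System.Drawing; using System.Drawing.Imaging; using System.IO;
byte[] uploadedBytes = File.ReadAllBytes(inputPath); // e.g. from an HTTP upload
using (MemoryStream input = new MemoryStream(uploadedBytes))
using (Image img = Image.FromStream(input))
using (MemoryStream output = new MemoryStream())
{
    Image small = resizeImage(img, new Size(128, 128));
    small.Save(output, ImageFormat.Jpeg);
    byte[] resizedBytes = output.ToArray(); // ready for a response or database
    small.Dispose();
}
```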
Thanks for a well written tutorial. Keep up the effort
Very good job. It wood be better if you copy the code in the 13th comment on how to use this methodes for noobies like me. It took me some time to find out how to use this methodes. Anyway now that I understood the code is very useful. Thank you.
I've just discovered this code - and it works great.
However, I've noticed that there's some sort of memory leak in the code.
I'm building a screensaver, and in testing this I'm loading each image 1 second after the other. During testing I got an out of memory exception and I'm trying to find the cause of that. I've modified the code to
private Image resizeImage(Image imgToResize, Size size) { return imgToResize; }
and the memory leak goes away.
Any ideas??
Peter
Bitmap implements IDisposable. The code in the tutorial should have been calling Dispose on the Bitmap object that's being created in that function.
Thanks for your kindness to share your experience. I had used your method to solve a "resizing" problem, and the program performance wonderfully.
hi, can anybody help me about plate recognition C# algorithm
good job really useful :)
I am using this code to show an error when file is open:
try
{
    stream = file.Open(FileMode.Open, FileAccess.ReadWrite, FileShare.None);
}
catch (IOException)
{
    MessageBox.Show("file is open!");
}

The problem is that code is not working with pictures (jpg, bmp and other) and with folders.
Is there anything that I am missing?
First off, just wanna say great article. I've implemented a different cropping method using unmanaged code that is much faster than Bitmap.Clone. It's about 2x faster for cropping large areas >1000x1000px and 100x faster for small areas <200x200px.
Raptor00 : This is the best I ever seen on internet. I have tried for a long time to split my jpg files at 60000x10000 pixels, and your routine worked... THANK YOU!
Very helpful article. Many thanks.
Easy to understand and very helpful! Thanks so much!
hello i tried to open the code in microsoft visual studio 2008 but i couldn't open it. i would like to see how the program looks like
hello friends, i have tiff file and in this file has written some data or lines so i just want to copy lines or data and past in notepad or MS word file.. so is it possible or not.. can we copy data in tiff file.. its urgent and reply me ASAP.. please give me code or demo...
Also, how to change the brightness or gamma of the image??
Hi everyone, Could anyone tell me how to get the stream of the new resized file without saving it to some directory??
Hi,
Great code, thanks.
Quick question: After resizing my image I'm left the black sections to top and bottom of the new image. Is there a way to crop these away?
Graham.
this is a very helpful code
thank you very much!!
very nice code thank you
Your tutorial is so cool, that i read it twice! Thanks for saving time because i was going to implement bicubic interpolations by hands!
You're cool guy you write such wonderfull things
Hi great article and was very helpful. Can you explain how can I increase dpi of an image thanx :)
Hello Reddest,
I developed an application that allows the user to move/resize/edit/save graphical controls such as TextBoxes, PictureBoxes.
Right now I am facing a problem which is editing (and so on) Pictures/Text Larger than The screen. I have tried to solve it by allowing my Customized Panel (which contains all of the user-added controls) to autoscroll (canvasPanel.autoscroll = true).
However, because the image is larger than the screen, whenever I save the modifications to a file, only the visible part of it gets saved -drawn and saved-!
I searched all over the internet and nobody seems to know how to solve that problem or even to suggest ideas or whatsoever!
I would be very greatful to you if you could just point out ideas or simply suggest related websites or anything, because I have tried many ways to solve this without any success...
Thank you very much, ZD
Thanks very much
great article, this is working perfectly. thanks for sharing.
does anybody know how to automatically save multiple images? above code works if you have one image..what if you have a few? does image.save have a way of saving images without specifying the exact directory path??
Merci ! The quality is better than when I used GetThumbnailImage.
thanks.........fine job
Thanks a ton! I'm a newbie and this code saved me from looking like a bafoon at my job!
thanks
Hi, would like to know if the code for the crop application shown above is to write at form1.cs or where? I'm a newbie here tried on the crop code but failed. How should I do it? Please help.
It seems when creating a Bitmap in .net 3.5, you have the option to create from an image and resize in one method:
All you have to do is find your new width and height and return the new image with that method.
Great Post! Finally got the required solution! Nice work! Keep it up!
Thanx!
So Many2 Thanx.. this code i looking for.. confuse at first time finally i use this code and work perfect:

String file = Server.MapPath("~/FileUpload/") + FileUpload1.FileName.ToUpper();
Bitmap gambar = new Bitmap(file);
System.Drawing.Image i = resizeImage(gambar, new Size(100, 100));
saveJpeg(Server.MapPath("~/FileUpload/aaa.jpg"), (Bitmap)i, 100);
Good one. Good explanation. I always take pictures from my camera at a bigger size but then resize it to upload to web. I was using the image editing softwares to resize the images, now I can write my own 'batch resizer' (and auto uploader) :)
Thank You, It is Great !
Thanks for another great tutorial!
I tried it just now. And it seems as if part of the code will come in extremely handy in a project I'm working on where size, but not necessarily quality, matters.
Thanks again for providing us with very very very useful code! :)
There's a small error in resizing code. This can cause an off-by-one error on some some input/output combinations. As in, the returned image is one pixel fewer in width or height than what was requested.
It should read:
int destWidth = (int)Math.Round(sourceWidth * nPercent);
int destHeight = (int)Math.Round(sourceHeight * nPercent);
How does Math.Round fix that?
My guess is because casting to int takes the floor, and rounding will get you the nearest whole integer.
The one worry I have about that is that we would round up to the next pixel when not needed. Therefore having the 1 pixel issues but just having 1 too many.
I'm using the cropping code, and I'm getting an outOfMemoryException error.
I don't even know where start with this.
Are the X and Y values legit? i.e. is the new Rectangle within the frame image dimensions?
Hi everyone. The issue we’re having is we have a wpf richtextbox, and we’re pasting images into it, but we want to be able to limit the physical data size, not the display size of the images being pasted in. In other words, if a user paste in an image that’s 2MB, we want to automatically shrink it down to a more manageable size. We have to have many users who share richtextbox data. So their text and images gets saved up to a cloud database so other users may download it. Please note that when an image gets pasted in, it’s a System.Windows.Controls.Image object.Any help will be greately appreciated
thanks
BTW, change this: if (nPercentH < nPercentW)
to this: if (nPercentH > nPercentW)
if you want to superscribe the image to the cliprect. This code inscribes the image to the cliprect.
you all are the best, I have been trying to write an implementation of an image resizing and cropping algorithm, using VS2005. Email me any code at bpgueze@hotmail.com
There is a useful ActiveX control called AccessImage. It does all the work about images for you.
FOR DEVELOPER: binds to database field (if needed), auto resizes big images, can manage pics in external storage, generates previews.
FOR END USER: load from file, paste from clipboard, scan or drag n drop image in 1 click. Crop it right on the form. Undo if something goes wrong.
Watch action video here:
Good code, just one thing: Avoid calling dispose yourself, use a using-block:
it's not 100% necessary, but it's good pratice, and if you felt you should call dispose, you might as well use using.
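A sketch of what that comment is suggesting, applied to the Graphics object in the resize code from the post:

```csharp
// Requires: using System.Drawing; using System.Drawing.Drawing2D;
using (Graphics g = Graphics.FromImage(b))
{
    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
    g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
} // g.Dispose() runs automatically here, even if DrawImage throws
```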
Thanks! Hundreds of images resized at the click of a button; it's a beautiful thing. This is awesome.
thanks sir u hve done good job.
I save PNG image from RTF file. All possible resizing-cropping-saving with this excellent code leave image in a strange aspect. It is like elements (characters from “symbol font”) are too near between them… Someone can help me?
Thanks for the code, I tried a lot of examples but this one does exactly what I wanted.
Thanks a lot for that code....
Hi,
I have a TIFF image whose background color is Black and the text i.e the foreground color is in white. I want it to be changed to White background and Black foreground. can you please help me on how to achieve this?
Terrific site. I was in the deep woe.but now i found my requiremetns Lot of Thanks for code
terific tutorial i ve gained lot of knoweledge by thius
how am i suppose to do multiple upload of files?? can you guys help me?
many thanks
Thanks a lot. Keep up the good work. This resize method was exactly what I needed. God Bless
smple the best
Simple and beautiful....
Thanks man!
This is cool!!! Thanks mannnnn, that is what I am talkin about!
Love the tutorial. One question, though. You supply a cropping routine, which is wonderful, but one thing that would be of great interest to me is a tutorial about selecting the image to be cropped. There seems to be few good examples of selecting a rectangle on the screen a la Windows Paintbrush or Photoshop. Any ideas where I might find such an example?
Thanks! This is just what I've been looking for.
Oh, the time this could have saved me in the past!
Thank you for this great code. I'm having one odd problem though, and if anyone has any ideas I'd be grateful.
My problem is that after calling resizeImage, my original source file is locked. So when the user modifies the image and tries to save it, I can't overwrite the original image (my goal).
Here's my client usage snippet:
When I bypass the resize method and simply do a File.Copy(src,dst), the source file is not locked. I've spent two days trying to work around this--it's killing me.
I think the issue is related to the "redefinition" of "img". It is allocated in the client code (new Bitmap(source)), then more space is allocated in resizeImage as "b" (thumbnail Image), then "b" is returned and assigned to "img". Is there an orphaned IO stream for the first "img" allocation that holds an open handle to the source file?
This problem is a bit over my head, so any help would be awesome! Thanks.
I'm on ASP.NET, Framework 3.5, IIS6, Win 2k3 server
Update regarding my previous comment: My problems don't appear to have anything to do with imageResize(). I'll keep chugging along in search of a solution.
Regarding quality of the created thumbnail, I have to say it's stunning.
John
I had the same problem with the handle to the source image remaining open and what solved for me was forcing the Graphics instance to be released.
Hope this helps.
Javier
thx2u
great article but i want to know how can i use the code to resize certain image to fixed width (100 for example) and keeping the height proportional?
So cool!
Thank you so much! Well done!
Thanks very much for the code. I integrated it into something I was doing for some staff at work who wanted to be able to re-size images simply. I threw a little c# application together and it works pretty well.
One issue i do have it that the GDI+ component seems to fail if the image is non-colour managed. In other words, I can use it for RGB/8 (8 bit RGB images) but other colour spaces (like RGB/8# - unmanaged) causes a GDI+ error.
Does anyone have any experience in that area?
The downside for me is that my application only works with some jpgs - not all jpgs.
Thanks again for the code. Very cool and useful for people new to it all (like I am)
Cliff
how can i save a rectangle as xml data.
i want to save this rectangle
How come the gif loses its animation if cropped?
Thanks a lot!! I was looking for something like this for my project.
Thanks for this very useful resizeImage method.
Excellent snippet of code. Explained everything clearly and saved me a couple of hours of work. Much appreciated
Awesome! I was trying to figure out how to use the Developer's Image Library (DevIL). All I wanted was to be able to open, resize, crop, and save a picture, so It seemed way overkill to use their library. Your tutorial is exactly what I wanted! Thanks!
Thank you!
Finally! A way to crop and flip my background image!
Great tutorial!
Saved me a few hours and from having to use a Graphics object to do cropping. Cheers muchly
this code is upload the image and i want to resize the image after uploading so pla give me code ans
After the image has been uploaded, it's in the hands of your server code. If your server is written in ASP.NET, you can use the same code that's in this post. If your server is written in another language, you'll have to do some searching on how to resize images with your specific language.
Thanks a lot...Great tutorial !
Thanks for the Post, it really helped. I'm working on a project using WIA, I scanned a documents and need to CROP out 4 different areas.
How can I determine the Rectangle of each area by simply Loading the Scanned Image in a picture box and dragging my mouse over each area like in Microsoft Paint?.
This will really save me from the over head of using WIA to scan and crop for each area.
Another newbie question; When I upload the files to a server, and attempt to open the page, the application file is displayed as xml source code in the browser. I am using apache/fed core 3, Do I need to install some sort of xml parsing module? thanks
Hey, Hows it going, I have a very newby type of question. When I try to run this in Visual studio, I get "Error while trying to run project,unable to start debugging, Binding handle is invalid" thanks
It sounds like there is a problem with your Visual Studio settings. I don't have any experience with that error myself, but here's an MSDN forum that seems to address a similar problem.
Unable to debug: The binding handle is invalid.
Hopefully that helps a little.
That did it, I was able to find the link below from your link. It was due to the Terminal Services being disabled. They say it has been fixed in vs sp1 -more info at the link below if anyone needs it. THANKS AGAIN
Great work, thanks!
Hello, thank you for this tutorial. I tried to try the cropping funtion but it seems like I missed something and it's not working.
Here is the code:
the cropImage in my code is same from the cropImage method here.
Thank you!
Kim, you code looks fine. The problem is I don't know what output you're expecting. Can you be more specific on what exactly isn't working? Is it cropping incorrectly, or not cropping at all? I just ran the crop code on the flower image above and everything seemed to work fine.
here the image format is of jpg.. can i use it fot gif.. by replacing
i didnt run the code..
Ram, yes you can. That's the only thing you have to change in order to make it a gif file. Remember that it will use a default color pallet of 256 colors so quality might be an issue depending on your application.
Reddest, thanks for ur reply.
Thankx.
Finally i found what i'm looking for....
Very helpful tutorial. Good luck.
Thx for tutorial! I was looking for resizing info other than
GetThumbnailImage. Though when I compared my resized pics with those generated from 'gthumb', the quality of mine isn't good enough (and they take less space), even with a quality level of 100. Gthumb uses essentially the same jpeg encoders I would think, but doesn't use C#/.NET. Any suggestions to get yet higher quality thumbs ?
I would be interested in seeing your results. When I run the above code at quality 100, I get very nice looking thumbnails. Here's an example I just ran. It was originally a 512x512 that I ran through the
resizeImagefunction, then ran the output through the
saveJpegfuntion.
As far as I can tell, there is very little (if any) loss of quality. Here were my steps:
If you want to post links to your images (you won't be able to put an image tag in the comment) and the code you're using, we can take a look to try to see where the error might be.
A comparison of the ouput can be found at
I now use the exact methods (resize and save) presented here and:
I should also mention I am using mono instead of .NET since I'm working on a Linux environment. But i don't think that should matter ?
I installed Mono (on my Windows machine), compiled, and ran the code and I got the exact same thumbnail that I got using .NET. Mono isn't a 100% implementation of .NET (although they're very close) and I'm guessing you ran across a bug in their Linux version. I did some research and couldn't find any instances of other people experiencing the same issue. If you have a Windows machine available, try running the code on that and if the results are better, post a bug report to Mono. If there is a bug, they may have some known work-arounds.
For resizing you can also use
Image.GetThumbnailImage.
Image.GetThumbnailImagedoes not use interpolation when resizing, so loss of quality can be a concern. For quick resizing where quality is not important,
GetThumbnailImageis definitely a good alternative.
how to trace the path
Thanks
Thankx.
finally i found what i looked...
Thanks, thanks, thanks
its really nice to see this example. i'm very thankfull to this code.
Nice tutorial! But it would be nice to put up how to resize images without preserving the aspect ratio as well. I've been very bugged up over that. D=
I can answer that right here. To resize the image without preserving the aspect ratio just skip the ratio calculations. Just set
destWidthto
size.Widthand
destHeightto
size.Heightand you're all set.
Like the tutorial! Thanks :)
I want to process the *.img & *.tif format images by using the C# codings. Any body could you assit me?
Thankyou | http://tech.pro/tutorial/620/csharp-tutorial-image-editing-saving-cropping-and-resizing | CC-MAIN-2013-48 | refinedweb | 4,544 | 75.2 |
Creating a Modular Application in Laravel 5.1
Instead of having a giant mammoth of code, having your application divided into small meaningful modules can make the development of a giant site more manageable and enjoyable.
I have just started working upon a new project in Laravel 5.1 that is going to be huge in terms of functionality. Considering the scale of application, the different modules that it was going to have, instead of jumbling every thing up (controllers, models and views etc) in the existing directories that Laravel provides, I decided to implement modules such that each of the modules will have everything, (it’s controllers, models, views, middlewares, any helpers etc) separated. Now there might be several ways to approach this, but here is how I structured it.
config\ module.php ... ... app\ ... ... Modules\ ModuleOne\ Controllers\ Models\ Views\ routes.php ModuleTwo\ Controllers\ Models\ Views\ routes.php ModulesServiceProvider.php ...
You can follow the steps stated below to achieve a similar structure:
Setting up the Structure
Create a file called
module.php inside the
config directory. This file is going to hold the module names that we want to load and other configuration related to the modules. For now, lets keep it simple and just have the module names that we want to load. The file might look like below. (Note that the
User,
Employee are the module names that we want to load. And for every new module that you would want to create, you will have to add the name for it in this
modules array.)
# config/module.php return [ 'modules' => [ 'User', 'Employee', ] ]
Create a directory called
Modules inside the
app directory. This directory is going to have a separate folder for each of the modules. For example, there can be a folder called
User, one called
Employee so on and so forth.
Let’s say that we want to create an
Employee module. In that case, create an
Employee directory at
app\Modules\. And in this new directory create three directories namely
Controllers,
Models and
Views and a file called
routes.php. There is nothing special with the module, I mean the
routes.php file is going to be used exactly how we use the outer
routes.php file, controllers and models will be same as well. The only thing that you will have to take care about is the namespacing. You will have to make sure that you give proper namespaces to each controller/model that you create. In this case, the controllers will be having the namespace of
App\Modules\Employee\Controllers and for any model, it would be
App\Modules\Employee\Models. The final directory structure may look like the following:
app\ Modules\ Employee\ Controllers\ Models\ Views\ routes.php User\ Controllers\ Models\ Views\ routes.php
Please note that you are not bound to have only the above stated directory structure, you are free to structure it however you want (but you have to make sure that you use proper namespacing). Without any doubt, you can add anything related to your module here as well for example form requests, helpers etc.
Creating the Service Provider
Now again, head to the
Modules directory and add a file called
ModulesServiceProvider. What we are going to do is make this Service provider inform Laravel that we are going to use these modules and you have to load each of the module’s
routes and
views from these modules as well. So that when a
route or
view will be looked up, Laravel will look into these folders as well. Below is how the service provider might look like:
<?php namespace App\Modules; /** * ServiceProvider * * The service provider for the modules. After being registered * it will make sure that each of the modules are properly loaded * i.e. with their routes, views etc. * * @author Kamran Ahmed <kamranahmed.se@gmail.com> * @package App\Modules */ class ModulesServiceProvider extends \Illuminate\Support\ServiceProvider { /** * Will make sure that the required modules have been fully loaded * @return void */ public function boot() { // For each of the registered modules, include their routes and Views $modules = config("module.modules"); while (list(,$module) = each($modules)) { // Load the routes for each of the modules if(file_exists(__DIR__.'/'.$module.'/routes.php')) { include __DIR__.'/'.$module.'/routes.php'; } // Load the views if(is_dir(__DIR__.'/'.$module.'/Views')) { $this->loadViewsFrom(__DIR__.'/'.$module.'/Views', $module); } } } public function register() {} }
Now the next thing is registering this service provider with the Laravel. And for that, open up the file
config/app.php and add ‘App\Modules\ModulesServiceProvider’ to the end of the providers array.
#config/app.php 'providers' => [ ... ... App\Modules\ModulesServiceProvider::class, ]
Adding Modules
Everything is setup now. In order to add a new module, all you have to do is create a folder for the module inside the
App\Modules\ directory, place your controllers, models, views and routes in this directory, register this module name in the
config\module.php and your module has been registered with Laravel. Using the controllers and models is the same that is how you use any outer controller or model i.e. by specifying the correct namespace. But for loading views, what you have to do is call a view like:
ModuleName::viewname e.g.
return view('Employee::dummy');
And that sums it up. Do you have any techniques of your own? How do you structure your modules in Laravel? Do not forget to share it with everyone in the comments section below.
Note: Please note that, during the process, if you come across any Class not found exceptions and you haven’t done anything wrong, just run
composer dump-autoload.
Source code can be found through this Github repository | https://kamranahmed.info/blog/2015/12/03/creating-a-modular-application-in-laravel/ | CC-MAIN-2019-04 | refinedweb | 938 | 56.35 |
PdbSublimeTextSupport 0.2
Display source code in Sublime Text 2 while debugging with pdb.
- This module is used to hook up pdb, the python debugger, with Sublime Text 2,
enabling it to display the debugged source code during a pdb session.
After downloading and unpacking the package, you should install the helper module using:
$ python setup.py install
Next you need to hook up pdb with this module by adding the following to your .pdbrc file, which you can create in your home directory if it’s not there already:
from PdbSublimeTextSupport import preloop, precmd pdb.Pdb.preloop = preloop pdb.Pdb.precmd = precmd
Finally, ensure that you have the subl command line tool has been installed as per these instructions.
Afterwards Sublime Text should get started automatically whenever you enter a debug session. The current source line will be displayed simultaneously while stepping through the code.
This module is based on PdbTextMateSupport by Andi Zeidler and others.
- Author: Martin Aspeli
- Keywords: sublimetext: optilude
- DOAP record: PdbSublimeTextSupport-0.2.xml | https://pypi.python.org/pypi/PdbSublimeTextSupport | CC-MAIN-2017-26 | refinedweb | 169 | 56.76 |
in reply to
Surviving 'Illegal division by zero'
use strict;
use warnings;
{
package MyNumber;
use overload '0+' => \&numify,
'/' => \&division;
use Scalar::Util;
my %numbers;
BEGIN {*MyNumber::__ = \&Scalar::Util::refaddr}
sub DESTROY {delete $numbers {__ shift}}
sub new {my $f; bless \$f => shift}
sub set {$numbers {__ $_ [0]} = $_ [1]; $_ [0]}
sub numify {$numbers {__ $_ [0]}}
sub division {my $f = $numbers {__ $_ [0]};
my $s = ref ($_ [1]) =~ /MyNumber/ ? $numbers {__ $_
+ [1]}
: $_ [1];
($f, $s) = ($s, $f) if $_ [2];
$s ? $f / $s : undef}
}
my $fig_1 = MyNumber -> new -> set (get_numeric_value_from_xml (...))
+;
my $fig_2 = MyNumber -> new -> set (get_numeric_value_from_xml (...))
+;
my $growth = 100 * $fig_1 / $fig_2 - 100;
[download]
Abigail
++ Thanks, that's a really interesting answer, though I think a simple function (see above) might be easier for whoever maintains my library to understand at first glance ;). It's also a shame that it isn't possible to directly re-open the method definitions of '/' and '+' for numeric scalars in Perl.
As an aside, does this solution using numeric classes make anyone else pine for them in Perl (Float, Integer, Complex ...)?
Another, this is the way to "directly re-open the method definitions". In fact, it's safer to do it this way than it is to redefine it for the whole program. That kind of "action-at-a-distance" is the source of more maintenance nightmares than anything else.
Totally agree. Before Abigail's post I hadn't really thought of it in OO (obj-oriented and op-overloading) terms. To put what you're saying in, uh, ahem, pseudocode, it's the difference between globally redefining a core method
class Float
alias :old_divide :/
def / (other)
other == 0 ? nil : old_divide(other)
end
end
[download]
and subclassing, which is all good
class MyFloat < Float
def MyFloat.new(from)
from.to_f()
end
def / (other)
other == 0 ? nil : super(other)
end
end
MyFloat.new(4) / 0 # nil
[download]
A last concept to leave you with - if someone were to take your code and wrap it in something else, which is the politer way to handle things?
Sure, it's a library. I was wondering if there was something scoped lexically analogous to:
use warnings;
{
no warnings qw/once/;
$foo = 5 - $bar;
}
{
no warnings qw/uninitialized/;
$foo = 5 - $qux;
}
[download]
Thanks
sub new {my $f; bless \$f => shift; $f -> set(@_) if @_; $f}
[download]
my $fig_1 = MyNumber -> new (get_numeric_value_from_xml (...));
[download]
I only propose it because most constructors also allow for values to be passed in, which keeps to the Principle of Least Surprise. | http://www.perlmonks.org/index.pl?node_id=369010 | CC-MAIN-2015-18 | refinedweb | 422 | 59.03 |
August 31, 2017 | 9 min Read
Three years ago I followed a few data science courses offered by the Johns Hopkins University on Coursera. Today these courses should be available among the ones in the Data Science specialization. All programming assignments were – and still are – in R. At the end of one course we had to create a small web application with Shiny and deploy it on shinyapps. At the time I wasn’t that comfortable in writing Javascript and CSS, so having to worry only about R code was quite a relief. I still have the web app that I wrote.
Some time ago I had the idea of rewriting the entire thing in Python, so I started looking for a Python equivalent of Shiny. I stumbled upon Spyre, Pyxley and Superset). I immediately discarded Superset. It looked amazing, but I wanted something for a very small application, not an enterprise-ready business intelligence tool. Spyre didn’t convince me, and I tried but struggled with Pyxley.
I toyed with the idea of writing the application with a combination of Flask for the logic and routing, Vue.js for the front-end, Webpack for asset bundling and maybe a SASS framework (or toolkit, like Susy) for styling. I knew I would have to invest a considerable amount of time to put everything together, so I left the project on the side for a while.
A few months passed and I discovered a few more packages: Bowtie, Bokeh, Dash. I found out that you can also create an online dashboard with plotly.
According to the documentation, “Dash is simple enough that you can bind a user interface around your Python code in an afternoon”. In fact, for a simple dashboard with a dropdown menu as the input, and a time series as the output, you need less than 50 lines of code.
Dash allows you to create reactive web applications. This means that changes to input UI component/s trigger changes to an output UI component.
The UI components are created with D3.js and WebGL, so they look amazing. And you get all of this without having to write any HTML/JS/CSS. Under the hood Dash converts React components (written in JavaScript) into Python classes that are compatible with the Dash ecosystem.
The getting started is top-notch, so I suggest you to start from there if you want to try Dash out. Here I will briefly describe what I did for my app.
Here are my import statements.
dash_html_components are pure HTML components, and
dash_core_components are the reactive components. You need to use one or more
Input to trigger changes to a single
Output.
import os import arrow import requests import functools import pandas as pd import dash_core_components as dcc import dash_html_components as html import plotly.graph_objs as go import plotly.plotly as py from flask import Flask, json from dash import Dash from dash.dependencies import Input, Output from dotenv import load_dotenv
When the app is running on my computer I enable
debug and load the environment variables from a
.env file (not checked in).
When the app is running on Heroku I disable
debug and use an external Javascript snippet to include Google Analytics. I can’t remeber where I found the
try/except to understand whether the app is on Heroku or not, but I find it very pythonic.
EAFP: easier to ask for forgiveness than permission.
try: # the app is on Heroku os.environ['DYNO'] debug = False # google analytics with my tracking ID external_js.append('') except KeyError: debug = True dotenv_path = os.path.join(os.path.dirname(__file__), '.env') load_dotenv(dotenv_path)
The world map I am displaying requires a plotly API key and a Mapbox API access token.
py.sign_in(os.environ['PLOTLY_USERNAME'], os.environ['PLOTLY_API_KEY']) mapbox_access_token = os.environ.get('MAPBOX_ACCESS_TOKEN', 'mapbox-token')
Here is how I initialize my Dash app. I create a Flask app first because I want to use a secret key. I don’t think you can set a secret key directly when you instantiate the
Dash class.
app_name = 'Dash Earthquakes' server = Flask(app_name) server.secret_key = os.environ.get('SECRET_KEY', 'default-secret-key') app = Dash(name=app_name, server=server)
I get the latest 4.5+ magnitude earthquakes from the USGS website with a basic, synchronous
GET request.
Next time I will try to make an asynchronous request with asyncio or one of the following libraries: grequests, asks, curio-http, requests-futures.
usgs = '' geoJsonFeed = 'feed/v1.0/summary/4.5_month.geojson' url = '{}{}'.format(usgs, geoJsonFeed) req = requests.get(url) data = json.loads(req.text)
Choosing the right colors for a visualization is surprisingly hard, so I use ColorBrewer.
# colorscale_magnitude = [ [0, '#ffffb2'], [0.25, '#fecc5c'], [0.5, '#fd8d3c'], [0.75, '#f03b20'], [1, '#bd0026'], ] # colorscale_depth = [ [0, '#f0f0f0'], [0.5, '#bdbdbd'], [0.1, '#636363'], ]
Finally, some Dash code. Every Dash app requires a
layout. The python code you write here will be converted in HTML components. I use a few functions to create portions of the dashboard. This way the layout is a bit cleaner and easier to modify.
app.layout = html.Div( children=[ create_header(app_name), html.Div( children=[ html.Div(create_dropdowns(), className='row'), html.Div(create_content(), className='row'), html.Div(create_description(), className='row'), html.Div(create_table(dataframe), className='row'), ], ), # html.Hr(), create_footer(), ], className='container', style={'font-family': theme['font-family']} )
Here are a couple of functions that are responsible for a portion of the UI. If you want you can check the complete code on GitHub.
create_dropdown creates two dash core components. They have to be dash core components, and not simple HTML elements, because each dropdown is an
Input for the
Graph object (also a dash core component).
def create_dropdowns(): drop1 = dcc.Dropdown( options=[ {'label': 'Light', 'value': 'light'}, {'label': 'Dark', 'value': 'dark'}, {'label': 'Satellite', 'value': 'satellite'}, { 'label': 'Custom', 'value': 'mapbox://styles/jackdbd/cj6nva4oi14542rqr3djx1liz' } ], value='dark', id='dropdown-map-style', className='three columns offset-by-one' ) drop2 = dcc.Dropdown( options=[ {'label': 'World', 'value': 'world'}, {'label': 'Europe', 'value': 'europe'}, {'label': 'North America', 'value': 'north_america'}, {'label': 'South America', 'value': 'south_america'}, {'label': 'Africa', 'value': 'africa'}, {'label': 'Asia', 'value': 'asia'}, {'label': 'Oceania', 'value': 'oceania'}, ], value='world', id='dropdown-region', className='three columns offset-by-four' ) return [drop1, drop2]
create_content creates a
DIV with an empty figure inside and return it. The figure will be updated when
_update_graph is triggered (see below).
def create_content(): graph = dcc.Graph(id='graph-geo') content = html.Div(graph, id='content') return content
Now that you have inputs – the two dropdowns – and an output – the Graph – you can define the reactive callback
_update_graph.
The way an
Input object and an
Output object are created is with the dash core component
id attribute. I really like the way the relationship between inputs and output must be declared. It’s very explicit: the
value attribute of a
Dropdown component triggers a change in the
figure attribute of the
Graph component.
_update_graph is rather long because every
Figure needs a
layout and some
data. I have to define a bunch of parameters for the
layout and two overlaid
Scattermapbox for the
data.
I use the underscore in front of this function to suggest that it should not be called. In fact, only changes to the dropdown values should trigger its execution.
@app.callback( output=Output('graph-geo', 'figure'), inputs=[Input('dropdown-map-style', 'value'), Input('dropdown-region', 'value')]) def _update_graph(map_style, region): dff = dataframe radius_multiplier = {'inner': 1.5, 'outer': 3} layout = go.Layout( title=metadata['title'], autosize=True, hovermode='closest', height=750, font=dict(family=theme['font-family']), margin=go.Margin(l=0, r=0, t=45, b=10), mapbox=dict( accesstoken=mapbox_access_token, bearing=0, center=dict( lat=regions[region]['lat'], lon=regions[region]['lon'], ), pitch=0, zoom=regions[region]['zoom'], style=map_style, ), ) data = go.Data([ # outer circles represent magnitude go.Scattermapbox( lat=dff['Latitude'], lon=dff['Longitude'], mode='markers', marker=go.Marker( size=dff['Magnitude'] * radius_multiplier['outer'], colorscale=colorscale_magnitude, color=dff['Magnitude'], opacity=1, ), text=dff['Text'], # hoverinfo='text', showlegend=False, ), # inner circles represent depth go.Scattermapbox( lat=dff['Latitude'], lon=dff['Longitude'], mode='markers', marker=go.Marker( size=dff['Magnitude'] * radius_multiplier['inner'], colorscale=colorscale_depth, color=dff['Depth'], opacity=1, ), # hovering behavior is already handled by outer circles hoverinfo='skip', showlegend=False ), ]) figure = go.Figure(data=data, layout=layout) return figure
As I said at the beginning, you can create Dash apps without having to write any Javascript or CSS. The problem is that even for a very small app like this one, you will probably want to change the styling, add a small script, or maybe just include Google Analytics.
For example, in this app I have to display roughly 300-500 earthquakes in a table, and I use a jQuery plugin to have a nice-looking table with pagination and search functionality. I also added Font Awesome, some styling from the Dash Team and a Google font.
external_js = [ # jQuery, DataTables, script to initialize DataTables '', '//cdn.datatables.net/1.10.15/js/jquery.dataTables.min.js', # small hack for DataTables '', ] external_css = [ # dash stylesheet '', '', '//maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css', '//cdn.datatables.net/1.10.15/css/jquery.dataTables.min.css', ] for js in external_js: app.scripts.append_script({'external_url': js}) for css in external_css: app.css.append_css({'external_url': css})
I had a lot of fun in creating this app, and I’m sure there are many use-cases where a quick (reactive) web app is useful. I will keep using Dash for future projects. I also want to write my own component to practice React.js a bit.
I’m still a bit skeptic about the idea of creating complex layouts in Python though. Even for a small app like this, the layout seems a bit too cumbersome. Applications with a lot of styling might not be ideal as well.
That being said, if you want to build something relatively simple in a day or two, I think Dash is great!
You can find the code for the entire application on GitHub | https://www.giacomodebidda.com/visualize-earthquakes-with-plotly-dash/ | CC-MAIN-2019-09 | refinedweb | 1,664 | 50.23 |
The correct answer to challenge #2:
As our first commentator (who answered correctly), Mike Agar said:
This is a Big “O” problem that screams for a LUT (Look Up Table). Don’t spin on each pixel, create your 256 entry look up table of all possible reversals, and do a quick pass over the 1000×1000 array by simply indexing into the table with the original value as the lookup.
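Mike's suggestion can be sketched like this (a Python sketch for brevity — the challenge wasn't tied to any one language, and the function names here are mine): build the 256-entry table once, after which each pixel costs a single lookup instead of eight shift/mask operations.

```python
def build_reverse_lut():
    # one entry per possible byte value, computed once up front
    lut = []
    for value in range(256):
        reversed_bits = 0
        for bit in range(8):
            if value & (1 << bit):
                reversed_bits |= 1 << (7 - bit)
        lut.append(reversed_bits)
    return lut

def reverse_pixels(pixels, lut):
    # one table lookup per pixel instead of a per-pixel bit loop
    return [lut[p] for p in pixels]
```

For a 1000×1000 8-bit image that is one million cheap lookups against a 256-entry table, rather than a million 8-iteration bit loops.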
Those guys provided the correct answer in their blogs:
Trung Dinh's solution:
Adam B's answer:
Igor Ostrovsky's answer:
Those guys provided the correct answer by leaving a comment:
Anton Irinev (who provided an implementation), Heiko Hatzfeld, Bill Krueger, Niki, Kevin I, Chris Miller, Will (didn't mention a LUT, but it's very close), Compuboy, Eugene Efimochkin, Adrian Aisemberg, Tim, leppie and bartek szabat.
Roberto Orsini wrote a really great implementation of how to parallelize the problem here, using threads from the ThreadPool, but he didn't use a LUT to reverse the bits. Although this problem is perfectly parallelizable, you can't rely on there being more than one CPU (unless it is stated in the question). To all of the others who used bitwise operations: even if your solution is correct (XOR with 0xFF, for instance, is wrong), it is not the most efficient one. You should assume that your code has to work on every machine and every processor, so you can't rely on specific hardware where a particular operation may be very efficient.
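Combining the LUT with a worker pool, as Roberto's entry did, might look roughly like this (a sketch with names of my own choosing; `bytes.translate` applies a 256-entry table to every byte in one C-level pass, and the caveat above about not assuming multiple CPUs still applies to the worker count):

```python
from concurrent.futures import ThreadPoolExecutor

# 256-entry lookup table packed as bytes, so translate() can use it directly
REVERSE_LUT = bytes(int(f"{value:08b}"[::-1], 2) for value in range(256))

def reverse_rows(rows):
    # translate() walks each row once, replacing every byte via the table
    return [row.translate(REVERSE_LUT) for row in rows]

def reverse_image(rows, workers=2):
    # split the image into contiguous row chunks, one per worker
    chunk = max(1, (len(rows) + workers - 1) // workers)
    pieces = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(reverse_rows, pieces)
    return [row for piece in results for row in piece]
```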
Edward Shen gave a very detailed answer by mail, which is currently for Dev102's eyes only (unlike comments and blog posts, which are public). We (the Dev102 team) realized that we made a mistake by introducing mail as one of the options for sending answers. So, I would like to emphasize that:
This week's question:
Your input is an m×n numeric matrix made up of sorted rows and sorted columns. Take a look at the following example and notice that all rows are sorted and all columns are sorted. What is the most efficient way to find an item in this matrix, and what is the complexity of the solution?
Take your time… do you know the answer?
Accept the challenge and provide your solution.
Daniel
Said on May 12, 2008 :
Solution in O(n log n):
This looks like a strong candidate for binary search. However, we can limit the number of searches needed by checking the max and min of the rows.
// go the shortest way
if (numrows > numcols) {
    for (i = 0; i < numcols; i++) {
        if (mat[0][i] > target || mat[numrows-1][i] < target) continue;
        do binary search on current column;
    }
} else {
    for (i = 0; i < numrows; i++) {
        if (mat[i][0] > target || mat[i][numcols-1] < target) continue;
        do binary search on current row;
    }
}
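Daniel's idea — pre-filter each candidate line by its min/max, then binary-search only the survivors — might be filled in like this (a Python sketch rather than his C-style pseudocode; this searches along rows, corresponding to his else branch, and the names are mine):

```python
import bisect

def find_item(matrix, target):
    # skip rows whose first/last values rule the target out,
    # then binary-search the remaining candidate rows
    for row_index, row in enumerate(matrix):
        if row[0] > target or row[-1] < target:
            continue
        col_index = bisect.bisect_left(row, target)
        if col_index < len(row) and row[col_index] == target:
            return (row_index, col_index)
    return None
```

In the worst case every row survives the filter, so this is O(m log n) for an m×n matrix.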
Abdullah
Said on May 12, 2008 :
My solution would be to prepare a hash table whose keys are the unique items of this matrix and whose values are lists of points that keep the items' positions in the matrix. Hash entries would look like this:
KEY VALUES
[5] (1,3), (2,2)
[8] (1,5), (2,4), (3,2), (4,1)
…
Each time you seek an item in the hash table, you find the matrix bins in which the item stays.
If you do not allow us to define another data holder class, the fastest way would be binary searching for the item in the rows from top to bottom.
Each time you search a row, you keep the column index of the nearest bigger item at hand, and your upper limit for the next row's binary search becomes this index. This way the number of items you have to look at in each row decreases as you go down.
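Abdullah's first suggestion — build a value-to-positions index once, then answer each lookup in constant time — could be sketched like this (names are mine; the O(m·n) build only pays off if the same matrix is queried many times):

```python
from collections import defaultdict

def build_index(matrix):
    # map each value to the list of (row, col) positions where it occurs
    positions = defaultdict(list)
    for r, row in enumerate(matrix):
        for c, value in enumerate(row):
            positions[value].append((r, c))
    return positions
```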
N. Terviewee
Said on May 12, 2008 :
If you presented this problem in an interview and made me an offer, I’d turn you down.
If the object of creating the matrix in the first place is efficient storage, you’ve already failed by storing copies of the same item (5, 9, 11, etc.) in multiple cells. That creates ambiguity in that you now have no idea whether the 9 you’ve found is the “right” one. In fact, the sample you provide has four paths to find the number 9. Two reach the same cell along different paths which have the same length; the other two reach two different cells on same-length paths.
That aside, I do know the solution to the problem.
— N.T.
Amit
Said on May 12, 2008 :
Hi
First of all, let me say that this kind of attitude won't get you many job offers…
Second, if you don't like it, you don't have to answer it. And third, according to what you wrote, I doubt you have the correct answer.
Amit
N. Terviewee
Said on May 12, 2008 :
Ah, ye of little faith…
That kind of attitude toward criticism won’t get you too many hires, at least not if you’re looking for top-quality people. I’ve had interviewers tell me post-offer that my criticism of their problems contributed to their desire to hire me. Having close to 30 years of writing software under my belt does have its advantages.
Anyway, since you’ve asked me to put my money where my mouth is, I will:
I’ll go on the assumption that finding any instance of a value is considered a correct outcome.
1. Begin in the upper-left corner.
2. If the value in the cell equals the search value, return success.
3. If there are no cells to the right or below, return failure (not found).
4. Examine the cells to the right and below.
Treating a missing cell’s value as the smallest possible value for your numeric type (or, better yet, NaN if that’s supported):
5. If neither cell’s value is less than or equal to the search value, return failure (not found).
5. Move to the cell which has a greater value but is not greater than the search value.
6. Go back to step 2.
You get bonus points if you can tell me why step 2 occurs where it does instead of after step 5.
–N.T.
John S.
Said on May 12, 2008 :
How are we to handle the situation of multiple entries? Which is the correct one?
N. Terviewee
Said on May 12, 2008 :
Corrections:
The last two steps should be numbered 6 and 7.
My final comment should read “You get bonus points if you can tell me why step 2 occurs where it does instead of after step 6.”
–N.T.
Zeus
Said on May 12, 2008 :
I’m taking the question to imply that I’m asked to find value x’s position(s) within the matrix, hope I’ve not failed at the first hurdle of question comprehension!
Given they’re sorted, I think I’d probably go with something that uses the 0th & Mth column, and the 0th and Nth row to see if the value you’re looking for is bound within those limits…
Then only iterating through the whole row/column if the value is between the two.
Must be noted though, I’m a fairly new programmer and am not really familiar with the efficiencies of matrices!
(In fact this is probably a rather crude method. I look forward to seeing the answers.)
Amit
Said on May 12, 2008 :
In case of multiple entries, finding any one of them will be OK.
To make it simpler, you can assume that if a number appears, it appears only once.
Amit
beefarino
Said on May 12, 2008 :
What constitutes “finding an item” in the matrix? Determining if the matrix contains the number at all, listing {m,n} for each occurrence, etc?
Please elaborate.
Shahar Y
Said on May 12, 2008 :
Hi beefarino,
Finding an item means giving its position in the matrix. In case of multiple entries, provide the position of the first occurrence you found. As Amit wrote in his comment, you can assume that if a number appears, it appears only once (to make it simpler).
Ian Suttle
Said on May 12, 2008 :
Can you provide a sample question for locating an item?
Kimmen
Said on May 12, 2008 :
Maybe the matrix holds non-unique items?
Shahar Y
Said on May 12, 2008 :
Ian Suttle –
Sample question: does 19 exist in the matrix? If it does, provide its location.
Is this what you meant?
Kimmen –
As already written in previous comments: if non-unique items exist, finding one of them is fine. The given matrix is just an example; you can draw your own matrix (sorted rows, sorted columns) which contains only unique items.
Zeus
Said on May 12, 2008 :
Apologies if you get this comment twice, I made one earlier but it seems to have disappeared!
I would do something along the lines of using the 0th and Mth row and 0th and Nth column values to determine whether the required value was within that row/column, then only iterate through the row if the required value was within that bound…
Given the comments, the iteration would stop at the first instance of the value being found.
I’ve not been programming long, so this is likely a crude solution
I look forward to seeing other more technical answers!
commenter
Said on May 12, 2008 :
Outline of solution (I think?!) using recursive search. I have no idea if this is the fastest search. I would _guess_ that the efficiency is O(logN) or something like that?:
Given a search value n and grid g, you could do this:
Work out the coordinates of the cell in the ‘centre’ of g (rounding down, when the width or height is an even number, say). Call this coordinate c
If grid[c] == n:
return c.
If grid[c] n:
{
Search the grid with bottomright corner being the cell above C, and topleft being the topleft of g. If it’s found, return the coord.
Search the grid with topright corner being the cell to the left of C, and bottomleft being the bottomleft of g. If it’s found, return the coord.
}
When a sub-grid is zero dimension, obviously don’t search it.
When the grid is 1×1, don’t search the non-existent sub-grids.
Rob L
Said on May 12, 2008 :
Seems pretty simple for a small matrix (maybe I’m missing something), but here’s my first glance psuedo code anyway;
For each row in matrix (first to last):
If last column value numberToFind And NOT first column Then skip to next row
Else If value > numberToFind And IS first column Then give up (number isn’t in the matrix)
David
Said on May 12, 2008 :
An immediate solution is the following:
WLOG, assume that the rows are longer than the columns. Then, iterate through the rows, and do a binary search on each of them for the number. This should take O(numrows * log(numcols)).
As for a better solution, I’m playing around with crawling down the diagonal entries (by diagonal, I mean that, if the matrix were rectangular, we just start at (0,0) and increment both indices by 1) [binarily] to find the first entry that’s greater than or equal to our number. If the indices for this entry are (a,b), then we know that the rectangular regions with diagonal vertices at (0,0)-(a-1,b-2) and (a,b)-(numcols, numrows) do not contain our number. We can then recurse on the two remaining rectangular regions. The problem with this is, I’m not sure that this solution is necessarily better than the first one.
A third solution- and this one’s actually efficient!
Call the number that we’re looking for “a”.
(if we find “a” at any point in this, terminate)
Start from the bottom left corner. Walk up the column until we get to the smallest number greater than “a”. Let’s say that this entry is of index (0, s). Then, move to the next column, at entry (1, s), and repeat the procedure.
Because the entry at (1,s) is greater than the entry at (0, s), we have the invariant that every time we switch columns, the entry that we are at is greater than a. We end when we reach a border or the rectangle, and perform a binary search on the remaining list of numbers.
This way of traversing the matrix is just a path in which we can only go up or to the right. So, the path length is O(numrows+numcols), which is also the running time of this procedure.
Kimmen
Said on May 12, 2008 :
Shahar Y:
hehe.. I know. I didn’t refresh the page before posting. I posted after I had a talk with my colleague about other stuff, which was a bit foolish of me.
David
Said on May 12, 2008 :
Sorry- I forgot to mention that if in the first column, the last entry is less than “a”, we just walk right to the next column. So we just walk right and up until we find a.
Catweazle
Said on May 12, 2008 :
Start at the top-left, working down diagonally to the right. If the target number is less than the number in the current cell but greater than the number in the previous cell, then search up the current column until reaching a number less than the target; similarly search back along the current row. When reaching the “bottom” of the diagonal line (bottom row or rightmost column), just search the remaining columns or rows respectively.
O(sqrt(mXn))
JT
Said on May 12, 2008 :
I could be way off the mark here with regards to runtime complexity, as I don’t have a lot of experience computing it, but here is my solution (in PHP) which runs in (i think) O(max(m,n)).
<?php
$data = array(
array(1,4,7,8,9,11),
array(2,5,8,10,11,12),
array(5,6,9,12,14,15),
array(7,8,12,15,17,20),
array(8,9,17,18,19,22) );
function check($x1,$y1, $x2, $y2, $v) {
global $data, $i;
$i++;
$xx1 = $x1 + (bool)($data[$x1][$y2] $v);
$yy1 = $y1 + (bool)($data[$x2][$y1] $v);
if ($xx2 < $xx1 || $yy2 < $yy1 )
return “Number not found ($i iterations)”;
if ($x2 == $xx2 && $y1 == $yy1 )
return “Found $v at ($x2,$y1) ($i iterations)”;
if ($x1 == $xx1 && $y2 == $yy2 )
return “Found $v at ($x1,$y2) ($i iterations)”;
return check($xx1,$yy1,$xx2,$yy2,$v);
}
function locate($v) {
global $data, $i;
$i = 0;
echo check(0,0,count($data)-1,count($data[0])-1,$v).”;
}
for ($j=0;$j
Daniel Gary
Said on May 12, 2008 :
Because the end of each column/row represents the maximum value for that column/row, and the first represents the minimum value for that column/row, you can create a bounding box to limit your search area.
Solution:
findPosition(int p)
{
minX = 0;
minY = 0;
maxX = 0;
maxY = 0;
for(i=0;i<m;i++)
{
if(matrix[i,0] <= p maxX)
maxX = i;
}
else
if(maxX)
i=m;
}
for(i=0;i<n;i++)
{
if(matrix[0,i] <= p maxY)
maxY = i;
}
else
{
if(maxY)
i=n;
}
}
for(x=minX;x<maxX;x++)
{
for(y=minY;y<maxY;y++)
{
if(matrix[x,y] == p)
return {x,y};
}
}
return false;
}
Niki
Said on May 12, 2008 :
Just a little nitpicking on the solution to the previous problem (reversing bytes): I think your answer is not optimal, at least not for every machine. I’ve just implemented a version using SIMD/shift operations, and it’s almost twice as fast as the LUT version (LUT: 0.8 ms, SIMD: 0.5 ms). However, I don’t think there’s any way of knowing this without actually implementing both versions. Results might even be different on different machines. All in all, not a good interview question, unless you give the interviewee a few hours time to find the answer.
If anyone told me in an interview “X is the optimal solution”, I probably wouldn’t hire him to implement performance-critical code: He can’t know if X really is optimal, and if he thinks it is, he probably wouldn’t bother to properly profile his code later on, because he “knows” it’s optimal.
alvins
Said on May 13, 2008 :
If you are trying to find y, Loop across coords (x,x) where x = y. If val(x,x) = y, you have found it. Otherwise check coords (z,x) and (x,z) where z < x for match. Continue loop above.
Tristan
Said on May 13, 2008 :
Hi,
assuming we stop at correct comparison.
1)start in top right corner
2)compare target with current,
3)compare target with down 1, compare with left 1.
4)if target greater than down move down,else move left.
5)goto 3)
this solution has a running time of O(m+n)
by taking advantage of the fact that to the right and downwards are numerically greater than the current value we can quickly partition the matrix into three segments. Greater(right and down), Lesser (left and up), and the relevant portion(left and down). Since we start in top right corner the fourth partition (right and up) is empty to start and is not useful in our discussion as it is filled with discarded entries.
If the target is greater numerically then we can eliminate the target row, so we move down the column. if the target is numerically less than we can eliminate the current column so we focus on the row.
Anton Irinev
Said on May 13, 2008 :
on each step we may exclude either bottom row or right column (similarly, we exclude top row or left column), so the complexity is O(n + m)
int i1 = 0, i2 = matrix.GetUpperBound(0); // bottom and top rows
int j1 = 0, j2 = matrix.GetUpperBound(1); // left and right columns
string answer = null;
while (i1 <= i2 && j1 <= j2) {
if (matrix[i1, j1] == requaredValue) answer = i1 + “, ” + j1; else
if (matrix[i1, j2] == requaredValue) answer = i1 + “, ” + j2; else
if (matrix[i2, j1] == requaredValue) answer = i2 + “, ” + j1; else
if (matrix[i2, j2] == requaredValue) answer = i2 + “, ” + j2;
if (answer != null) {
System.Console.WriteLine(answer);
break;
}
if (matrix[i2, j1] requaredValue) j2–; else i1++;
}
Adam
Said on May 13, 2008 :
My solution would be something like this:
X – the number we search for
1. Check witch matrix dimension is smaller. The smaller dimension will be called number of rows from now on.
2. Perform a binary search in the first column, to find an index of the largest number = X. If it is equal to X then the solution is found, otherwise – there is also no X in the matrix. The index found is b.
4. Perform a binary search for X in every row between a and b (inclusive-inclusive).
Kevin
Said on May 13, 2008 :
I think something like this would be optimal (pseudocode):
find_value(array a, int value)
{
if a[0][0] > value || a[a.length-1][a.width-1] a.width
a1 = find_value(a[0..(a.length/2)][0..(a.width], value)
a2 = find_value(a[(a.length/2+1)..(a.length)][0..(a.width)], value)
if a1 != NULL
return {a1[0], a1[1]};
else if a2 != NULL
return {a2[0]+(a.length/2), a2[1]};
else
return NULL;
else
a1 = find_value(a[0..(a.length)][0..(a.width/2)], value)
a2 = find_value(a[0..(a.length)][(a.width/2+1)..(a.width)], value)
if a1 != NULL
return {a1[0], a1[1]};
else if a2 != NULL
return {a2[0]+(a.length/2), a2[1]};
else
return NULL;
}
should be O(lg n) because in every 2 cuts, 1 will be pruned by tree
Kimmen
Said on May 13, 2008 :
The solution:
I won’t provide any code, because my solution isn’t very pretty coded ;P. First off, I created a method, FindValue(value, region) which returns duple. The region argument is a region in the matrix. The FindValue is called recursively.
FindValue works by using BinarySearch on each edge (row/column) of the given region. Here I used .net’s BinarySearch which return a negative value if not found. The negative value can be used to get the index of the closest largest value of the value we wanted to find. If the value was not found on any of the edges, you can create a new region to search in using the results from the BinarySearches.
The complexity of a binary search is O(log n), so each call to FindValue would be 2*O(log N) + 2*O(log M) => O(log (N*M)). As the each call to FindValue divides the search area (the region gets smaller), which means O(log n), I would say the complexity is something like O(log log(n*m)).
Asim
Said on May 13, 2008 :
@Niki:
Oddly enough I’ve started reading a textbook on algorithms
. This page will be of interest to you:
By benchmarking your answer on a particular architecture/system, in one sense you are clearly correct. However, interview questions such as these are posed from a computer science perspective. Referring to the link above, after making an assumption that the ‘RAM Model of Computation’ is valid, then the LUT is a clear winner over any arithmetic approach.
Remember Dijkstra’s words: ‘computers are to computer science as telescopes are to astronomy’.
Edward Shen
Said on May 13, 2008 :
Assumptions: All entries unique, standard desktop computer used, and probably some more I am forgetting to mention atm.
Two solutions depending on how often the values change.
First solution is when the matrix doesn’t change much. A lookup table could be created, making the solution O(log(mn)). This of course takes time, so its not practical if the table constantly changes in a way that breaks the lookup table.
If I am restricted to search operations only (if complexity includes time to build look up table), I can come up with a very rudimentary algorithm which finds iterates through the m or n (whichever is shorter), and binary searches each row/column until an answer is found. This gives a complexity of m log n (where m is the shorter of the two; also log is base 2).
While trying to optimize, I came up with the following algorithm:
Do a binary search for the number on all diagonal matrix indices’ values ([0,0] to [m,m]).
If you hit the number you’re done, if not, u effectively eliminate all numbers above and to the left of the number smaller than it along the diagonal, and all numbers below and to the right of the bottom. Say the number we are looking for is 6, then the above operation eliminates roughly half of the entries as follows:
O = (open to inspection)
X = (eliminated)
X X X X O O O O
X X X X O O O O
X X X X O O O O
X X X 5 O O O O
O O O O 7 X X X
O O O O X X X X
O O O O X X X X
O O O O X X X X
This operation’s complexity is log m since it is binary search.
Now you’re left with half as many candidate entries. The worst case scenario has a m/2 diagonal for both matrices.
We then perform the same operation on those two halves.
Each of these two operations have a complexity of log(m/2) worst case. Continue splitting until the number is found or compare the bottom left and upper right values if we end up in a 2×2 matrix. When n is longer than m (assume m is always the sorter side), we would need to add log(n) as well to binary search extra entries.
Because the worst case scenario big O notation is hard to write out with this site’s char set, I’m just going to roughly calculate the general complexity of this algorithm to be (log m)^2 + log(n). This estimate will slowly move away from the real complexity as m grows large… I’ll update later if it is required…
OJ
Said on May 13, 2008 :
Niki, why did you post that comment here instead of where it belongs?
This is an interesting little problem. I shall have a dabble at a solution when I get home from work.
Marcos Silva Pereira
Said on May 14, 2008 :
A quite simple to solve.
1. Go to the last column at the first row;
2. If the value is greater than the searched number, go left, if it is smaller, go down;
3. Do it until you could walk in the matrix or find the searched element.
Kind Regards
Eugene Efimochkin
Said on May 14, 2008 :
Okay, time to go to lunch, so here’s quick:
Let’s name the item we search with Z.
We have both rows and columns sorted. Hence we use a binary search to find the row where the first element is less or equal to Z, and the last one is greater or equal to Z. Then we use a binary search on that row to find the position of Z exactly.
Isn’t that fast enough? Can’t tell anything about complexity right now, so I definitely failed this interview.
Sol_HSA
Said on May 14, 2008 :
I agree, this is not a very good interview question.
However.
If the grid is small enough, I’d probably just brute-force it. For sufficiently large grids the answer gets more complicated..
If the grid is accessed only rarely, we get to some kind of nifty 2-dimensional binary search which might be interesting to work out; if there’s several accesses to the same data, I’d probably pre-process it to a linear list which is faster to seek.
benishor
Said on May 14, 2008 :
That’s how I’d do it :
trzn
Said on May 14, 2008 :
Asim, why should we assume running code on a scientific non-existing imaginary computer?
Niki is right. On a typical modern environment (including desktop and mobile devices) LUT is not the fastest solution.
Shahar Y
Said on May 14, 2008 :
Niki
I profiled those two options and got different results – LUT is better. Can you please ellaborate more about your tested code and how did you get those numbers? How did you implement the LUT? Did you measure only the first access to the LUT (the first access is really slow, but all the others are done from the cache)? How did you implement the SIMD/shift operations?
Jonathan Gilbert
Said on May 14, 2008 :
// This solution is O(m lg n), where n is the longer dimension.
#define DEBUG
using System;
class Challenge3
{
static void Main()
{
int[][] matrix = new int[][]
{
new int[] { 1, 4, 7, 8, 9, 11 },
new int[] { 2, 5, 8, 10, 11, 12 },
new int[] { 5, 6, 9, 12, 14, 15 },
new int[] { 7, 8, 12, 15, 17, 20 },
new int[] { 8, 9, 17, 18, 19, 22 },
};
for (int y=0; y < matrix.Length; y++)
{
for (int x=0; x rows);
int linear_dimension = transpose ? rows : cols;
int logarithmic_dimension = transpose ? cols : rows;
for (int linear = 0; linear < linear_dimension; linear++)
{
int index = 0;
int count = logarithmic_dimension;
if (get(matrix, count – 1, linear, transpose) value)
{
#if DEBUG
Console.WriteLine(“Bailing at {0} {1}”, transpose ? “row” : “column”, linear);
#endif
return false;
}
#if DEBUG
Console.WriteLine(“Doing binary search of {0} {1}”, transpose ? “row” : “column”, linear);
#endif
while (count > 0)
{
int middle = index + count / 2;
#if DEBUG
Console.WriteLine(” First: {0} Middle: {1} Last: {2} Value: {3} {4}”, index, middle, index + count – 1, get(matrix, middle, linear, transpose), value);
#endif
int comparison = value – get(matrix, middle, linear, transpose);
if (comparison == 0)
{
#if DEBUG
Console.WriteLine(“Found it!”);
#endif
x = transpose ? middle : linear;
y = transpose ? linear : middle;
return true;
}
if (comparison > 0)
{
count = (count + 1) / 2 – 1;
index = middle + 1;
}
else
count /= 2;
}
}
return false;
}
static int get(int[][] matrix, int row, int col, bool transpose)
{
if (transpose)
return matrix[col][row];
else
return matrix[row][col];
}
}
Jonathan Gilbert
Said on May 14, 2008 :
Instead of posting a fix for the blog mangling of my previous post, I’ve simply put the code up at the URL in the “Website” field of this post.
bizonul
Said on May 14, 2008 :
The solution is an extension of binary search:
private Point GetPosition(int value, int[,] matrix, int topLeftRow, int topLeftCol, int bottomRightRow, int bottomRightCol)
{
Point retPos;
if (topLeftCol > bottomRightCol || topLeftRow > bottomRightRow)return new Point(-1, -1);
if (topLeftCol == bottomRightCol && topLeftRow == bottomRightRow)
{
if (matrix[topLeftRow, topLeftCol] == value) return new Point(topLeftRow, topLeftCol);
else
{
return new Point(-1, -1);
}
}
else
{
retPos =
GetPosition(value, matrix, topLeftRow, topLeftCol, (topLeftRow + bottomRightRow)/2,
(topLeftCol + bottomRightCol)/2);
if (retPos.X > -1 && retPos.Y > -1) return retPos;
retPos =
GetPosition(value, matrix, (topLeftRow + bottomRightRow)/2 + 1, topLeftCol, bottomRightRow,
(topLeftCol + bottomRightCol)/2);
if (retPos.X > -1 && retPos.Y > -1) return retPos;
retPos =
GetPosition(value, matrix, topLeftRow, (topLeftCol + bottomRightCol)/2 + 1,
(topLeftRow + bottomRightRow)/2,
bottomRightCol);
if (retPos.X > -1 && retPos.Y > -1) return retPos;
retPos =
GetPosition(value, matrix, (topLeftRow + bottomRightRow)/2 + 1, (topLeftCol + bottomRightCol)/2 + 1,
bottomRightRow, bottomRightCol);
return retPos;
}
}
leppie
Said on May 14, 2008 :
Off the top of my head:
Choose the smaller of n and m for linear search, then the other can be done doing a binary search.
So given m is smaller, complexity will be O(m log n).
Trung Dinh
Said on May 14, 2008 :
A few approaches come to mind:
A. Do binary search of the rows. O(m log n)
B. Do binary search of the columns. O(n log m)
C. If (m < n) do A else do B. In cases where n <> m, this would be faster than either A or B.
D. If you do a lot of lookup of the same matrix, it may
be worthwhile to transform the matrix into a linear
arrays of m x n values and indices.
values[m x n], x[m x n], y[m x n]
This could be done using merge sort O(N log N) where
N = m x n. Subsequent searchs would be O(log N)
using binary search.
drax
Said on May 14, 2008 :
No code from me – but I would guess some sort of recursive traversal algorithm – first you need to determine if its col or row precendence (either should be just as optimal over a large search set)
find the col entry and then the corrsponding row entry (ir vice versa) – as i said no code tho – I’m too old for that kind of thinking now
Matt Howells
Said on May 14, 2008 :
Use a two-dimensional binary search.
Look at the central element of the matrix. If its equal to the object you are searching for, output the indices (i,j). If it is greater than the element you are searching for, split the matrix into two matrices; one containing all elements of the matrix less than the element in the i-dimension, and the other containing all elements less than the element in the j-direction and greater than or equal to the element in the y-direction. If the element is less than the object you are searching for, split the matrix into the two equivalent higher matrices.
Recurse (or use an equivalent loop).
Asim
Said on May 14, 2008 :
@trzn
Without starting a flame thread or becoming a troll, let me just say that any solution is based upon assumptions. My point was that, when it comes to interview questions, one should realise that the interviewer is usually coming from a CS angle and hence doesn’t really care about computers; algorithms are their pride and joy.
Is this “realistic”? Is this the “best” approach? Is this “optimal”? It depends on your circumstances. Assumptions allow one to compromise between ease of understanding/implementation and efficiency.
You answered your own question by prefixing your argument with “On a typical modern environment”, and hence you made an assumption, which is inevitable.
Hope I didn’t come across as facetious. Cheers.
Adam B.
Said on May 15, 2008 :
Best-case scenario, two binary searches will find the element. Worst-case scenario will take four. My answer’s at
Jonas Christensen
Said on May 20, 2008 :
There are a qicker way to reverse the bits than using a look up table.
This solution only need 3 operations. I cant take full credits
Rich Schroeppel came up with this in 1972.
unsigned char pixel;
pixel = (pixel * 0x0202020202ULL & 0x010884422010ULL) % 0x03FFUL;
Regards
Jonas Christensen
Shahar Y
Said on May 20, 2008 :
Hi Jonas Christensen,
Your solution is good but works best on 64bit machine (won’t be as efficient in 32bit machines)-
Besides, how is it faster than reading from the cache?
Jonas Christensen
Said on May 20, 2008 :
You are right it wont be faster, guess I was abit to fast posting.
But its a nice trick by Rich.
Vivek Kanala
Said on May 20, 2008 :
Did any one tried searching diagonally across the matrix?
benishor
Said on May 20, 2008 :
I did here :
However, it’s not the optimal solution although it may sometimes provide faster answers depending on the dataset.
Ates Goral
Said on July 21, 2008 :
If speed is a concern, don’t restrict the look-up table to a mere 256 bytes. Using a 16-bit look-up table will only take up 2^17 = 128K bytes. That’s nothing when you consider the amount of RAM a typical computer has in the early 21st century
| http://www.dev102.com/2008/05/12/a-programming-job-interview-challenge-3/ | CC-MAIN-2014-52 | refinedweb | 5,665 | 68.5 |
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
#include <boost/phoenix/scope/let.hpp>
You declare local variables using the syntax:
let(local-declarations) [ let-body ]
let allows 1..N local variable
declarations (where N ==
BOOST_PHOENIX_LOCAL_LIMIT).
Each declaration follows the form:
local-id = lambda-expression
Example:
let(_a = 123, _b = 456) [ _a + _b ]
Reference Preservation
The type of the local variable assumes the type of the lambda- expression. Type deduction is reference preserving. For example:
let(_a = arg1, _b = 456)
_a assumes the type of
arg1: a reference to an
argument, while
_b has
type
int.
Consider this:
int i = 1; let(_a = arg1) [ cout << --_a << ' ' ] (i); cout << i << endl;
the output of above is : 0 0
While with this:
int i = 1; let(_a = val(arg1)) [ cout << --_a << ' ' ] (i); cout << i << endl;
the output is : 0 1
Reference preservation is necessary because we need to have L-value access
to outer lambda-scopes (especially the arguments).
args
and
refs are L-values.
vals are R-values.
The scope and lifetimes of the local variables is limited within the let-body.
let blocks can be nested.
A local variable may hide an outer local variable. For example:
let(_x = _1, _y = _2) [ // _x here is an int: 1 let(_x = _3) // hides the outer _x [ cout << _x << _y // prints "Hello, World" ] ](1," World","Hello,");
The actual values of the parameters _1, _2 and _3 are supplied from the
bracketed list at the end of the
let.
There is currently a limitation that the inner
let
cannot be supplied with a constant e.g.
let(_x = 1). .*/ ]
However, if an outer let scope is available, this will be searched. Since the scope of the RHS of a local-declaration is the outer scope enclosing the let, the RHS of a local-declaration can refer to a local variable of an outer scope:
let(_a = 1) [ let( _a = _1 , _b = _a // Ok. _a refers to the outer _a ) [ /*. body .*/ ] ](1) | https://www.boost.org/doc/libs/1_71_0/libs/phoenix/doc/html/phoenix/modules/scope/let.html | CC-MAIN-2020-50 | refinedweb | 343 | 62.17 |
Top: Streams: outfile
#include <pstreams.h> class outfile: outstm { outfile( [ const string& filename, bool append = false ] ); string get/set_filename(string); bool get/set_append(bool); int get/set_umode(int); }
This class derives all public methods and properties from iobase and outstm, and in addition defines the following:
outfile::outfile( [ const string& filename, bool append = false ] ) creates an output file stream, but does not open the file. When opening a file with open(), it is truncated to zero unless append property is set to true. Filename and append parameters are optional.
string outfile::get/set_filename(string) sets the file name. set_filename() closes the stream prior to assigning the new value.
bool outfile::get/set_append(bool) -- if set to true, the file pointer is set beyond the last byte of the file when opening the stream with open().
int outfile::get/set_umode(int) sets UNIX file mode when creating a new file. By default a file is created with 0644 octal, which on UNIX means read/write access for the owner and read-only access for group members and all others. This property has no effect on Windows.
See also: iobase, outstm, logfile, Examples | http://www.melikyan.com/ptypes/doc/streams.outfile.html | crawl-001 | refinedweb | 191 | 55.64 |
Now that we have introduced Web services, let's play around and use one. To make life easy, let's go about this in a step-by-step process. These steps will be broken up between two sections.
"Using SOAP"The first example is all about getting SOAP installed on our machines. We will set up a SOAP server to work alongside the Tomcat JSP Web server. We're using the SOAP server component of the Apache SOAP 2.2 Java API. Once we have this server up, we will build a very simple Web service to run on the local machine.
"Roaming the Internet"The second example will show how to call up a Web service that isn't on the local machine.
Using SOAP
SOAP is really the core of Web services. In these examples, HTTP is the communication tier and Apache SOAP is the application handling the messages. We will use SOAP in two ways: firstly as a SOAP server to listen and process SOAP requests and secondly as a SOAP client to make requests. While many projects will only need to have a SOAP client, we will use both a client and a server to illustrate the full workings of SOAP.
Step 1: Installing a SOAP Server and Client
We need access to a Java implementation of SOAP. For this book, we will use the Apache SOAP API version 2.2. This software is open source and can be freely downloaded from. In this book we are using a nightly build of the SOAP application. The nightly builds can be found at. The nightly build has a few security features that are not found in the main 2.2 release. We recommend using either a recent nightly build or SOAP 2.3 when it is released in 2002.
Apache SOAP requires the following tools:
An XML Parser: We are using Xerces, which is currently included with the Tomcat server.
JAF (JavaBeans Activation Framework): Simply defined, JAF is a framework for handling the transfer and use of data. Since SOAP is all about moving data back and forth, it makes logical sense that Apache SOAP would use JAF to simplify the internal coding. Sun defines JAF as "a set of standard services to determine the type of an arbitrary piece of data, encapsulate access to it, discover the operations available on it, and to instantiate the appropriate bean to perform said operation(s)." Fortunately, we really don't have to know anything about how to use JAF ourselves as it's used behind the scenes by Apache SOAP.
JavaMail: JavaMail is a high-level Java API used to simplify the coding of e-mail applications. JavaMail is used by Apache SOAP to enable SMTP as a transport mechanism for SOAP messages.
JAF can be found at.
To install JAF, all that you need to do is place activation.jar within your system's classpath so the Java Virtual Machine can find the JAF classes.
The JavaMail APIs can be found at.
To install JavaMail, place the mail.jar within your system's classpath.
Once we have these tools, we are ready to install Apache SOAP. To use SOAP, we need to set up the SOAP client. This is required to permit our JSP container to reach out and talk to SOAP servers. To do this, Tomcat will need access to the soap.jar file that comes along with Apache SOAP. Place the soap.jar file into your classpath. Besides writing the code for the client request, this is the only step required for installing a SOAP client.
For this book, we placed the activation.jar, mail.jar, and soap.jar all in the Tomcat lib directory.
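If you would rather keep the jars outside Tomcat's lib directory, you can add them to the classpath explicitly instead. The sketch below is a Unix-flavored illustration; LIB_DIR is an assumed location, not a path from the book, and on Windows you would use set and semicolon separators instead.

```shell
# Illustrative only -- LIB_DIR is an assumed jar location; adjust to your system.
LIB_DIR=/usr/local/tomcat/lib
CLASSPATH="$LIB_DIR/soap.jar:$LIB_DIR/activation.jar:$LIB_DIR/mail.jar:$CLASSPATH"
export CLASSPATH
# Echo the result so you can confirm all three jars are visible to the JVM.
echo "$CLASSPATH"
```

Either approach works; the point is simply that the Java Virtual Machine must be able to locate all three jars before the SOAP client or server code will load.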
We also need a SOAP server to be up and running to process Web service requests. Within the Apache SOAP installation, you will find a soap.war file. Place this WAR file into your Tomcat webapps directory. This WAR contains a simple but complete Web application that functions as a SOAP server.
In the interest of conserving space, we won't repeat the general installation and testing examples that come with Apache SOAP. However, instead of installing SOAP at the root classpath level, we are having SOAP work through Tomcat. This will work for the examples in this book since we are going to use Tomcat and JSP for all the SOAP access (this is a JSP and XML book, after all). This means that all we really need to do is place the soap.jar, activation.jar, and mail.jar files into the Tomcat lib directory and install the soap.war file in order to have Apache SOAP run through Tomcat. One disadvantage of doing this, of course, is that you will not be able to run the SOAP examples from the Java command line. However, this will make setting up your classpath simple.
Once these files are in place, stop Tomcat and then restart it. Tomcat will install the SOAP server for you.
Let's write a quick test file, shown in Listing 3.4. (Save this file as webapps\xmlbook\chapter3\ShowClients.jsp.)
Listing 3.4 ShowClients.jsp
<%@page contentType="text/html" import="java.net.*, org.apache.soap.server.*" %> <html> <head><title>List Current Clients</title></head> <body> Listing Current Clients on Local Server:<br/> <% URL l_url = new URL (""); ServiceManagerClient l_soap_client = new ServiceManagerClient(l_url); String l_test[] = l_soap_client.list(); for (int i=0; i < l_test.length; i++) { out.print(l_test[i] + "<br/>"); } %> </body> </html>
This file replaces the command-line client test that is in the Apache SOAP documentation. That example validates the services that are running on the server. It works by creating a ServiceManagerClient object with which we can query a SOAP server. In our case, we are using this object to query the status of our local SOAP server. In this example, it queries the URL.
If you have just installed the SOAP server, this page will only return an empty listing. After all, we haven't installed any Web services yet. We need to build a Web service so we can have something to test.
Step 2: Building a Simple Service
You will be amazed at how simple this will be. The first thing we need to do is create a JavaBean. After all, from our viewpoint a Web service is just a Java object (JavaBean) with a fancy front end (Apache SOAP server). This means we will create the Web service under the SOAP Web application we installed in the previous step. In our case, the JavaBean will look like Listing 3.5. (Save this file as webapps\soap\WEB-INF\classes\xmlbook\chapter3\firstservice.java.)
Listing 3.5 firstservice.java
package xmlbook.chapter3;

import java.beans.*;

public class firstservice extends Object implements java.io.Serializable
{
    public firstservice() {}

    public String testService ()
    {   return("First Test Service");
    }
}
The Web service is called firstservice and it has one method called testService. The only thing this service does is return a string. Not very exciting, but we intentionally kept it simple so we can test everything quickly. Let's compile the Java file and move on to creating the SOAP deployment descriptor. (After compiling, stop and restart Tomcat so the firstservice.class file is registered within the SOAP server classpath.)
Once we have a Java object to use as a service, we must register the object with the SOAP server. Apache SOAP server uses an XML file called DeploymentDescriptor.xml to track the information of a Web service. The SOAP deployment descriptor is just an initialization file containing the basic service information. Let's go ahead and create this file. It turns out that the Apache SOAP server has a tool that permits us to type in the information and Apache SOAP automatically creates the deployment descriptor file. Point your browser to.
Select the Deploy service option. This will bring up an empty data entry screen. For our example, we can fill it in as shown in Figure 3.2.
Figure 3.2 Deploying a Web service on Apache SOAP server.
Note that this screen extends on for a bit, but for this example, we only need to enter the information shown in the screenshot.
Click the Deploy button at the bottom of the data entry frame (not to be confused with the Deploy button on the sidebar!) when you are done.
To prove that everything is working so far, let's run the first test file, ShowClients.jsp. As shown in Figure 3.3, this will demonstrate that our new service is indeed up and running and that it's available for access from outside the SOAP server.
Figure 3.3 Running ShowClients.jsp.
Step 3: Using a Service
Now that we have a service, the next trick is to show how to invoke the Web service. We are going to write a client to access the service. To write this client we need to do the following:
Gather up the information about the Web service in question. We need the name, parameters, and various other details about the service to track it down.
Invoke the service.
Extract any response sent back from the service.
For our current Web service, the detailed information we need to know is listed here:
The Target Service URI (urn:xmlbook.chapter3)
The method to invoke (testService)
Any parameters to pass in to the method (there aren't any, since our first service doesn't have any parameters)
The URL of the SOAP server in question (for us it's)
Now back in our XML book Web site it's time to add in the client JSP page shown in Listing 3.6. (Save this file as webapps\xmlbook\chapter3\RunFirstService.jsp.)
Listing 3.6 RunFirstService.jsp
<%@page contentType="text/html"
        import="java.net.*,
                org.apache.soap.*,
                org.apache.soap.rpc.*" %>
<%
String ls_result = "";

Call call = new Call ();
call.setTargetObjectURI("urn:xmlbook.chapter3");
call.setMethodName ("testService");
call.setEncodingStyleURI(Constants.NS_URI_SOAP_ENC);
URL url = new URL ("");

Response resp = call.invoke (url, "");

if (resp.generatedFault())
{   Fault fault=resp.getFault();
    ls_result = " Fault code: " + fault.getFaultCode();
    ls_result = " Fault Description: " +fault.getFaultString();
}
else
{   Parameter result = resp.getReturnValue();
    ls_result = (String) result.getValue();
}
%>
<html><head><title>Running a Local Web Service</title></head>
<body>
The result of the Web service call is <br/>
<%= ls_result %>
</body>
</html>
In this example, the JSP page is acting as the client. When it is executed, it can successfully invoke firstservice, the service we created in a previous example. The result will look like Figure 3.4.
Figure 3.4 Running RunFirstService.jsp.
Now it's time to figure out what is happening. To do this, let's review the important sections of the RunFirstService.jsp example.
In the first section, the code of interest is the import statement. The classes of java.net.* are required for access to the URL object. The org.apache.soap classes were imported to access the Apache SOAP client APIs.
<%@page contentType="text/html"
        import="java.net.*,
                org.apache.soap.*,
                org.apache.soap.rpc.*" %>
<%
String ls_result = "";
Next, we use the Call object within Apache SOAP to access a Web service. The Call object represents the actual RPC call that is occurring. I personally think of it as my SOAP client because it's the object that is used to call the Web service. In fact, the Call object is the representation of the message to be sent to a Web service:
Call call = new Call ();
Once we have the Call object we need to initialize it with the Web service data (the identification data we supplied when creating the Web service on our SOAP server). The interesting thing to note here is that we are using Constants.NS_URI_SOAP_ENC to tell Apache SOAP to use the standard SOAP encoding. Most of the time, this will be the encoding value you will need to use. The URL object is storing the address of the SOAP server to access for the service:
call.setTargetObjectURI("urn:xmlbook.chapter3");
call.setMethodName ("testService");
call.setEncodingStyleURI(Constants.NS_URI_SOAP_ENC);
URL url = new URL ("");
Once we have created our message (the Call object), the next step is to send the message. This is also known as invoking the service and we use the invoke method. The invoke method is a client-side only call to send a message to a service. The results of the call are placed in a Response object. The Response object represents the message sent back from an RPC call:
Response resp = call.invoke (url, "");
Once we have the Response object, we need to check the return message to see what has happened. First, we check for any errors. If something went wrong, we will query the response for the details of the error:
if (resp.generatedFault())
{   Fault fault=resp.getFault();
    ls_result = " Fault code: " + fault.getFaultCode();
    ls_result = " Fault Description: " +fault.getFaultString();
}
If everything is fine, we read the return value embedded within the message. In our case, we know that the Web service is only returning a String object, so we immediately type the return object to a String. In some cases, the logic to parse out the return value would be more robust to handle a complicated object or multiple values:
else
{   Parameter result = resp.getReturnValue();
    ls_result = (String) result.getValue();
}
The rest of the page is just an HTML document to display the results.
The code to access the Web service is relatively simple. Half the battle is getting the information of the Web service to call. The other half Apache SOAP takes care of for us in the sending and receiving of the message. All we are doing is creating a message object, receiving a message object, and querying the results.
Roaming the Internet
This section concentrates on using a remote Web service; we will build an example to access a publicly available Web service from the XMethods Web site. For a Web service, this example will use the TemperatureService service shown in Listing 3.3.
Choosing a Web Service
This example needs the parameters to call the service and that means it's time to go back to Listing 3.3 and the WSDL file. From this file, it is possible to get the information we need to access the Web service. The pieces of data we need are
The Target Service URI: This is read from the namespace attribute of the soap:operation element. For this example, it works out to be urn:xmethods-Temperature.
The Web service method being invoked by our program: This information is based on the operation element. For this service, we are using getTemp.
Any parameters needed for the Web service to run successfully: These were stored in the message element. From this we find <part name="zipcode" type="xsd:string" />.
The URL of the SOAP server in question: For us it's stored in the soap:address element, within the location attribute, which gets us a value of.
This translates into one input parameter called zipcode of type String.
Using these values, we can now write a quick JSP page to access this Web service. The code is shown in Listing 3.7. (Save this file as webapps\xmlbook\chapter3\ AccessService.jsp.)
Listing 3.7 Accessing the XMethods TemperatureService Web Service
<%@page <input type="text" name="zip" id="zip" value="<%= ls_zipcode %>" /> <input type="submit" value="Enter New Zip Code" /> </form> </body> </html>
When this JSP page is accessed, it will produce a page that looks like Figure 3.5.
Figure 3.5 Running AccessService.jsp.
Let's review the example. Most of the code in this example is identical to the RunFirstService.jsp example. The only difference is that this remote Web service is using parameters and our first example didn't need any parameters (because the service didn't use any). This shows that from a programming viewpoint the location of the Web service (local versus Internet) doesn't make much of a difference. From a practical viewpoint, running a remote service might incur additional overhead depending on the location of the Web service. However, those are design issues we will examine in a later chapter.
Let's look at the differences in the code.
While the JSP page is a little more complicated in that we use an HTML form on it (to submit a zip code back to ourselves), we only want to focus on the Web service differences. The only difference from a Web service point of view is the addition of parameters.
Usually, to use parameters we need to use a Vector object to place the parameter arguments:
Vector params = new Vector ();
Once we have the Vector, we place the arguments into our local copy of the Vector object:
params.addElement (new Parameter("zipcode", String.class,ls_zipcode, null));
Once we've finished adding parameters to the Vector, we append the Vector to the Call object.
call.setParams (params);
Everything else is as before: we invoke our Call object and receive a response.
Calling a service isn't especially hard; all the pain is in setting the service up and getting the tools together.
Actually inspired by the description of the problem itself XDDD.
""For example, [1,7,4,9,2,5] is a wiggle sequence because the differences (6,-3,5,-7,3) are alternately positive and negative.""
Greedy solution, use deque so we have O(1) popleft.
class Solution(object):
    def wiggleMaxLength(self, nums):
        from collections import deque
        if len(nums) <= 1:
            return len(nums)
        diff = deque([nums[i] - nums[i-1] for i in xrange(1, len(nums))])
        total = 1
        current = diff.popleft()
        while diff:
            val = diff.popleft()
            if val * current < 0:
                total += 1
                current = val
        return total + 1
You could use a generator instead of deque:
def wiggleMaxLength(self, nums):
    if len(nums) <= 1:
        return len(nums)
    diff = (nums[i] - nums[i-1] for i in xrange(1, len(nums)))
    total = 1
    current = next(diff)
    for val in diff:
        if val * current < 0:
            total += 1
            current = val
    return total + 1
@StefanPochmann Thanks for pointing this out and fixing the test case.
The way I handle it is to check if
if sum(nums) == nums[0]*len(nums): return 1
Not sure if it is the general way to fix this. :)
@guangying094 said in Easy to understand python O(n) short solution:
if sum(nums) == nums[0]*len(nums): return 1
That rather makes things worse, as it would for example return 1 for input [1,2,0] instead of the correct 3.
@StefanPochmann Oops that's right lol. Thanks.
So I need to check if every elements are equal.
if all(x == nums[0] for x in nums) : return 1
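Putting the pieces of this thread together, here is a minimal Python 3 port (range instead of xrange; the standalone function name is illustrative) of the generator solution with the all-equal guard added:

```python
# Combined sketch of the solutions discussed above, ported to Python 3.
# The all-equal guard handles inputs whose differences are all zero,
# which the greedy loop alone misses.
def wiggle_max_length(nums):
    if len(nums) <= 1:
        return len(nums)
    if all(x == nums[0] for x in nums):
        return 1
    diff = (nums[i] - nums[i - 1] for i in range(1, len(nums)))
    total = 1
    current = next(diff)
    for val in diff:
        if val * current < 0:   # sign flip -> one more wiggle
            total += 1
            current = val
    return total + 1

print(wiggle_max_length([1, 7, 4, 9, 2, 5]))  # 6
print(wiggle_max_length([1, 2, 0]))           # 3
print(wiggle_max_length([3, 3, 3]))           # 1
```

This covers the cases raised in the thread; it is a sketch of the greedy idea above, not a fully hardened solution.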
Use PowerShell to Explore Windows Defender Preferences
Dr Scripto
Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell 4.0 in Windows 8.1 to explore Windows Defender preferences.
Microsoft Scripting Guy, Ed Wilson, is here. Well things are certainly shaping up to be exciting. Last weekend, I upgraded my Surface Pro to Windows 8.1 via the store. It took about 30 minutes, and it was absolutely painless. The long part was downloading the 3 GB file. Then it started the installation, and I had to agree with the license statement. Finally, it wanted to know how to personalize the device. Besides that, it was gravy.
The Scripting Wife and I are getting ready to go to Atlanta this weekend for the PowerShell Saturday 005 event. I am making two presentations, and there are several other awesome speakers who will be there. There are still some tickets available for this event, so it is not too late to sign up. I know there are some people who are driving to the event from as far away as Texas, so it will be a great time to see some of your favorite Windows PowerShell people. Check it out, you will be glad you did.
Because Windows 8.1 is now in general availability, I thought I would take some time to write about one of the cool new modules. I am running Windows PowerShell 4.0 on Windows 8.1.
Note This is the second post in a three-part series about the Windows Defender module in Windows 8.1. For basic information about the Windows Defender module, please see Exploring the Windows Defender Catalog.
One of the cool things about Windows PowerShell is that it always (at least nearly always) works the same. This means that I can use the Get-Help cmdlet to find out how to use a cmdlet or CIM function. I can use the Help function to see Help information one page at a time. It does not matter what the module or the cmdlet is.
But with most of the Get* type of cmdlets and functions, I do not even need to use Help. I can simply type the cmdlet (or function) name, and voila, it spews forth data—at least that is the way that well designed cmdlets generally behave. I should not have to look at Help to find out how to get information.
Note The Windows Defender commands are technically functions. They are CIM wrapped, based on a new WMI namespace that is added to Windows 8.1. I will refer to them as functions, or occasionally as a command. But I will not call them cmdlets (unless I slip up and make a mistake) because they are not technically cmdlets. Using Get-Member or Get-Command easily reveals this information.
I can use the Get-MpPreference cmdlet to obtain information about my Windows Defender preference settings. The command and the output associated with the command are shown here.
The bad thing is that some of the output does not make sense. For example, the value of the ScanScheduleDay is 0. What does that mean? Is it Sunday, or Monday, or whatever? I know that “computer numbers” often begin with 0 instead of 1, so I guess that maybe it means scan on the first day of the week. So I use the Get-Culture cmdlet and I look at the DateTimeFormat property to see what the first day of the week is. The command and output are shown here.
I can see that the value of the FirstDayOfWeek property from the DateTimeFormat object is Sunday. So, I guess that my ScanScheduleDay value of 0 is Sunday. But that is just a guess. I would like to make sure. So I check the value of Get-Help to see if there is any Help here.
I use the command Get-Help Get-MpPreference -Full, and I obtain the following output:
I can tell you that in this case, the Help is no help. Then it dawns on me. Wait! In reality, this is WMI. Hey, it is a CIM function, which means that under the covers, there is bound to be a WMI class. Groovy. On MSDN, most WMI classes are well documented.
However, searching for “Windows PowerShell Help” in this case does not help. This is because, as I found, all it does is document the way Windows PowerShell works—and well, duh, I know HOW Windows PowerShell works. I need to know what the output means.
So I need to look up WMI. I type a Bing query for “PowerShell Defender ScanScheduleDay” and I get back nothing worthwhile. I do the same search on MSDN. Again, I get no hits. Hmmm…time to go “old school” on this issue.
So I pipe the results from the Get-MpPreference function to Get-Member, and I look at the object that returns. Ahhhhh…now I can see some sense. The command and output are shown in the image that follows.
So I now search for "MSFT_MpPreference" directly on MSDN, and I discover that the Windows Defender WMIv2 APIs are documented. The page on MSDN lists all of the WMI classes. Sweet!
As it turns out, it was a good thing I looked up the answer because 0 is not Sunday. Sunday, as it turns out, is 1. The MSDN portion is shown here.
So, that is it. I am able to discover the information I need to bring clarity to the output.
That is all there is to using the Windows Defender module. Join me tomorrow when I will talk about using the Windows Defender functions to initiate scans and to update the signature definitions.

Comment: Is there a way to get Defender properties on remote computers using PowerShell? This is to streamline validating new systems are configured properly.
Would like to hear a little more from others, or any recommendations around
this. We've got other systems like SQL connections, SSH with user:pass,
API tokens.
Is there any recommended way to hide these from the output logs?
On Mon, Feb 26, 2018 at 10:41 PM Hbw <brian@heisenbergwoodworking.com>
wrote:
> Aws profiles on the workers - the creds are on the machines, but not
> exposed. Boto/cli takes these profile names instead of access key/secret
> for just this kind of use case.
>
> Sent from a device with less than stellar autocorrect
>
> > On Feb 26, 2018, at 1:22 PM, jeeyoung kim <jeeyoungk@gmail.com> wrote:
> >
> > Hi everyone,
> >
> > I’m wondering how people work around accidentally writing credentials on
> > bash operator template page / logs.
> >
> > For example, I may have PostgreSQL operator to copy data into Redshift.
> >
> > COPY TABLE_NAME from 's3://.../something.manifest.json'
> > access_key_id '{{ params.AWS_ACCESS_KEY }}'
> > secret_access_key '{{ params.AWS_SECRET_KEY }}'
> >
> > Or a command that exports from mongo
> >
> > mongoexport \
> > --assertExists \
> > -h {{ connection.host }} \
> > {% if connection.login %} -u {{ connection.login }} {% endif %}\
> > {% if connection.get_password() %} -p {{ connection.get_password()
> > }} {% endif %}\
> > -d {{ connection.schema }}
> > ...
> >
> > However, when this operator is executed (or when the template is rendered
> > on the UI), the credentials are written to the log files / clearly
> visible
> > on the UI, which is problematic.
> >
> > There are many other cases where this can happen, and I’m wondering what
> is
> > a solution for it.
> >
> > What would be ideal is:
> >
> > - Prevent credentials from accidentally being shown in “show rendered
> > template” screen.
> > - Prevent credentials from being written to the logs.
> >
> > Thanks.
> >
> > -Jeeyoung Kim
> >
> | http://mail-archives.apache.org/mod_mbox/airflow-dev/201803.mbox/%3CCAAhq+cy8FRDQ7Bf5NhKZ-QZZZGc=broZrQOMEM1g0K0pRv3=rg@mail.gmail.com%3E | CC-MAIN-2019-43 | refinedweb | 258 | 52.15 |
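For the narrower question of keeping secrets out of log output, one generic approach (an illustrative sketch only, not an Airflow feature described in this thread) is a logging filter that scrubs known secret values before any handler emits them:

```python
import logging

# Illustrative sketch -- not an Airflow API. A logging.Filter that
# replaces known secret values with "***" before a handler emits them.
class SecretMaskingFilter(logging.Filter):
    def __init__(self, secrets):
        super().__init__()
        self.secrets = [s for s in secrets if s]

    def filter(self, record):
        msg = record.getMessage()        # fully interpolated message
        for secret in self.secrets:
            msg = msg.replace(secret, "***")
        record.msg = msg
        record.args = ()                 # already interpolated above
        return True

logger = logging.getLogger("masked")
handler = logging.StreamHandler()
handler.addFilter(SecretMaskingFilter(["s3cr3t-key"]))
logger.addHandler(handler)
logger.warning("connecting with secret_access_key %s", "s3cr3t-key")
# logs: connecting with secret_access_key ***
```

A filter like this only protects handlers it is attached to; rendered-template pages would need separate masking.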
Before we can start the discussion of why this exception occurs, it is necessary to understand a little bit about how Windows works with regard to interacting with devices. When a device requires the attention of the processor for the system, it generates an interrupt that causes the processor to give the device attention and handle the device's request. The Windows hardware abstraction layer (HAL) maps the hardware interrupt numbers to software interrupt request levels (IRQLs). IRQLs provide a mechanism that allows the system to prioritize interrupts, where the higher numbered interrupts are processed first (and preempt processing at all lower IRQLs). After the interrupt is handled, the processor returns to the previous (lower) IRQL.
The IRQLs are defined in the wdm.h file in the Windows Driver Development Kit (for me this is \WinDDK\7600.16385.1\inc\ddk\wdm.h).
#if defined(_X86_)

//
// Interrupt Request Level definitions
//

#define PASSIVE_LEVEL 0            // Passive release level
#define LOW_LEVEL 0                // Lowest interrupt level
#define APC_LEVEL 1                // APC interrupt level
#define DISPATCH_LEVEL 2           // Dispatcher level
#define CMCI_LEVEL 5               // CMCI handler level

#define PROFILE_LEVEL 27           // timer used for profiling.
#define CLOCK1_LEVEL 28            // Interval clock 1 level - Not used on x86
#define CLOCK2_LEVEL 28            // Interval clock 2 level
#define IPI_LEVEL 29               // Interprocessor interrupt level
#define POWER_LEVEL 30             // Power failure level
#define HIGH_LEVEL 31              // Highest interrupt level

#define CLOCK_LEVEL (CLOCK2_LEVEL)

#endif

#if defined(_AMD64_)

//
// Interrupt Request Level definitions
//

#define PASSIVE_LEVEL 0            // Passive release level
#define LOW_LEVEL 0                // Lowest interrupt level
#define APC_LEVEL 1                // APC interrupt level
#define DISPATCH_LEVEL 2           // Dispatcher level
#define CMCI_LEVEL 5               // CMCI handler level

#define CLOCK_LEVEL 13             // Interval clock level
#define IPI_LEVEL 14               // Interprocessor interrupt level
#define DRS_LEVEL 14               // Deferred Recovery Service level
#define POWER_LEVEL 14             // Power failure level
#define PROFILE_LEVEL 15           // timer used for profiling.
#define HIGH_LEVEL 15              // Highest interrupt level

#endif

There are 3 sets of IRQLs defined (x86, x64, and ia64). I focus on x86 and x64 because these platforms comprise the vast majority of systems. Maintaining an IRQL of 0 (PASSIVE_LEVEL) is one of the main goals of the device drivers and the system because all user mode code is executed at the passive level. The thread scheduler for the system operates at IRQL 2 (DISPATCH_LEVEL) and generates interrupts to change the currently executing thread. Device interrupts occur at level 3 and above (and thus prevent the scheduler from switching threads).
A direct implication of this interrupt behavior is that device drivers operating at or above DISPATCH_LEVEL cannot access paged memory (due to the context switch required for the file system driver to pull the memory page from disk) and can only use memory from the non-paged pool.
Bugcheck code 0x0000000A (10 in decimal) occurs when a driver attempts to perform a task that can only be performed at a lower IRQL, such as reading paged memory or performing a task using a call that the thread scheduler can preempt. Since the system as at or above Dispatch (DPC) level, the thread scheduler cannot force required context switch and crashes the system through a call to KeBugCheckEx (this causes an interrupt at HIGH_LEVEL, 31 on x86 and 15 on x64 and prevents any other device interrupts while crash information is saved to the hard drive and the system is brought down safely). The call to KeBugCheckEx results in a blue screen of death (BSOD).
IRQL_NOT_LESS_OR_EQUAL can often be resolved by updating (or downgrading in some cases) the driver that caused the crash (note that this error is also very similar to 0xD1 DRIVER_IRQL_NOT_LESS_OR_EQUAL). In some cases the BIOS may also need to be updated. The following is an example process for debugging this issue.
Note that this is only an example, the driver causing your error will likely be different.
First, open the crash dump with WinDbg. Click here for instructions on opening a crash dump.
Next, execute the !analyze -v debugger command. Some output including the bug code, stack trace, and suspected driver are output. Details on each part of the analyze output are discussed below.
The !analyze -v output starts out with a description of the parameters passed to KeBugCheckEx. In this case, this crash was caused by the driver attempting to read invalid (or paged) memory at DISPATCH_LEVEL:

Arg1: 00000004, memory referenced
Arg2: 00000002, IRQL
Arg3: 00000000, bitfield :
    bit 0 : value 0 = read operation, 1 = write operation
    bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
Arg4: 83227b06, address which referenced memory

Debugging Details:
------------------

READ_ADDRESS: GetPointerFromAddress: unable to read from 82f7b718
Unable to read MiSystemVaType memory at 82f5b160
 00000004

CURRENT_IRQL:  2

FAULTING_IP:
hal!HalPutScatterGatherList+a
83227b06 8b4104          mov     eax,dword ptr [ecx+4]

CUSTOMER_CRASH_COUNT:  1

DEFAULT_BUCKET_ID:  VISTA_DRIVER_FAULT

BUGCHECK_STR:  0xA

PROCESS_NAME:  System

Next we see the address of the trap frame, registers, and stack trace at the time of the crash. For this error, this is less relevant because the driver is well identified. In some cases it may be necessary to dig in further using the driver verifier or by following the trap frames in a full or kernel memory dump to fully rebuild the call stack.
TRAP_FRAME:  b8732af0 -- (.trap 0xffffffffb8732af0)
ErrCode = 00000000
eax=88b81740 ebx=88caf280 ecx=00000000 edx=00000000 esi=89cbc420 edi=88a4b5f8
eip=83227b06 esp=b8732b64 ebp=b8732b6c iopl=0         nv up ei pl zr na pe nc
cs=0008  ss=0010  ds=0023  es=0023  fs=0030  gs=0000             efl=00010246
hal!HalPutScatterGatherList+0xa:
83227b06 8b4104          mov     eax,dword ptr [ecx+4] ds:0023:00000004=????????
Resetting default scope

LAST_CONTROL_TRANSFER:  from 83227b06 to 82e5982b

STACK_TEXT:
b8732af0 83227b06 badb0d00 00000000 8a3fc504 nt!KiTrap0E+0x2cf
b8732b6c 8c80e653 88b81740 00000000 00000000 hal!HalPutScatterGatherList+0xa
b8732b88 92530159 88a4b5f8 00000000 89cbc420 ndis!NdisMFreeNetBufferSGList+0x27
WARNING: Stack unwind information not available. Following frames may be wrong.
b8732be8 9252ca0e 88ca6000 88caf280 0000000a e1k6232+0x16159
b8732c58 9252b093 88ca6000 00000000 b8732ca0 e1k6232+0x12a0e
b8732c74 8c860309 88ca6000 00000000 b8732ca0 e1k6232+0x11093
b8732cb0 8c8416b2 88a4b67c 00a4b668 00000000 ndis!ndisMiniportDpc+0xe2
b8732d10 8c828976 88a4b7d4 00000000 8b43f0e8 ndis!ndisQueuedMiniportDpcWorkItem+0xd0
b8732d50 830216d3 00000002 9f4c557c 00000000 ndis!ndisReceiveWorkerThread+0xeb
b8732d90 82ed30f9 8c82888b 00000002 00000000 nt!PspSystemThreadStartup+0x9e
00000000 00000000 00000000 00000000 00000000 nt!KiThreadStartup+0x19

STACK_COMMAND:  kb

Finally, we get some information on the symbols and what the debugger suspects the faulting module is. In this case, it is related to the Intel Wireless card in this laptop (using the driver e1k6232.sys).
FOLLOWUP_IP:
e1k6232+16159
92530159 ??              ???

SYMBOL_STACK_INDEX:  3

SYMBOL_NAME:  e1k6232+16159

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: e1k6232

IMAGE_NAME:  e1k6232.sys

DEBUG_FLR_IMAGE_TIMESTAMP:  4bbae470

FAILURE_BUCKET_ID:  0xA_e1k6232+16159

BUCKET_ID:  0xA_e1k6232+16159

Followup: MachineOwner
---------

In some cases, it may be desirable to check the BIOS version. Use the !sysinfo machineid debugger command to get this information about the BIOS and the make/model of the machine that generated the dump. This was caused by a Dell Latitude e6410 and it is running a BIOS from 2010. In this case, the BIOS is out of date and an update may help resolve the issue that caused this crash.
2: kd> !sysinfo machineid
Machine ID Information [From Smbios 2.6, DMIVersion 38, Size=3634]
BiosMajorRelease = 4
BiosMinorRelease = 6
BiosVendor = Dell Inc.
BiosVersion = A05
BiosReleaseDate = 08/10/2010
SystemManufacturer = Dell Inc.
SystemProductName = Latitude E6410
SystemVersion = 0001
SystemSKU =
BaseBoardManufacturer = Dell Inc.
BaseBoardProduct = 0667CC
BaseBoardVersion = A01

The dates of all of the drivers loaded at the time of the crash can be determined using the lm n t debugger command. More information about a specific driver can be gained using the lm vm drivername command. This can be helpful to identify whether an old antivirus or an older driver might be contributing to the crash.
2: kd> lm n t
start    end        module name
80bac000 80bb4000   kdcom    kdcom.dll    Mon Jul 13 19:08:58 2009 (4A5BDAAA)
82e13000 83223000   nt       ntkrpamp.exe Fri Jun 18 21:55:24 2010 (4C1C3FAC)
83223000 8325a000   hal      halmacpi.dll Mon Jul 13 17:11:03 2009 (4A5BBF07)
...
924e1000 9251a000   dxgmms1  dxgmms1.sys  Mon Nov 01 20:37:04 2010 (4CCF7950)
9251a000 92553000   e1k6232  e1k6232.sys  Tue Apr 06 01:36:16 2010 (4BBAE470)
92553000 9259e000   USBPORT  USBPORT.SYS  Mon Jul 13 17:51:13 2009 (4A5BC871)
...

2: kd> lmvm usbport
start    end        module name
92553000 9259e000   USBPORT    (deferred)
    Mapped memory image file: c:\symbols\USBPORT.SYS\4A5BC8714b000\USBPORT.SYS
    Image path: \SystemRoot\system32\DRIVERS\USBPORT.SYS
    Image name: USBPORT.SYS
    Timestamp:        Mon Jul 13 17:51:13 2009 (4A5BC871)
    CheckSum:         0004BC3B
    ImageSize:        0004B000
    File version:     6.1.7600.16385
    Product version:  6.1.7600.16385
    FileVersion:      6.1.7600.16385 (win7_rtm.090713-1255)
    FileDescription:  USB 1.1 & 2.0 Port Driver
    LegalCopyright:   © Microsoft Corporation. All rights reserved.

Getting further help
If the debugger output references the NT kernel (ntoskrnl.exe, ntkrnlpa.exe, ntkrnlmp.exe, and ntkrnlpamp.exe), the driver verifier may be necessary to further pinpoint the problem.
After analyzing the dump, if you have not been able to solve your issue, then you can seek help from the hardware vendor, the forums, or directly from Microsoft. The hardware vendor is the preferred option of the three. If the vendor determines that there is a bug in the driver, then they may ask for a kernel/full memory dump to help them analyze the problem.
If you seek help in the forums, then be sure to upload the dumps for your system in an accessible location and post a link to the thread that you create. See this post for more details. Users in the forums can rarely tell you more information than is in this post.
Microsoft may not be helpful unless the crash is related to a Microsoft device driver or a kernel bug; in most cases they will simply tell you that it is not a Microsoft bug. Microsoft support is also relatively expensive.
Best of luck!
Have an idea for something that you'd like to see explored? Leave a comment or send an e-mail to razorbackx_at_gmail<dot>com
References:
Mark Russinovich, David Solomon, and Alex Ionescu. Windows Internals: Covering Windows Server 2008 and Windows Vista. 5th edition. Microsoft Press
Bug Check 0xA: IRQL_NOT_LESS_OR_EQUAL
Microsoft Windows Driver Development Kit
Can you please make also a version for ppl with less IQ xD I dont really get it, please just write it like:
1. Download bla bla bla
2. Run it
3. Open this file
Etc. with pics please
Ive got the bluescreen problem to, but mine closes when i try to update the game called
"aion".
Hope for more help please :)
I agree with Ranger.
I just got this error yesterday, and now it won't stop. Every time my computer starts, in safe or normal mode, I get this screen. I tried several different ways to try to fix it, and then finally turned to the internet when none of those worked.
I'm sure this page would be helpful if I knew more about computers, but I'm not getting much out of it. It seems to be over my head.
Received a BSOD and found your website. It's awesome! I agree with the other posters that you go deep into detail. Thank you for that! If we really want to solve a BSOD on our own you make it much more feasible. | http://mikemstech.blogspot.com/2011/11/how-to-troubleshoot-blue-screen-0xa.html | CC-MAIN-2017-22 | refinedweb | 1,849 | 55.13 |
How can a robot tell it is stuck?
When making a self-driving robot there are a lot of things to consider, such as:
- Figuring out where you are
- Deciding where to go next
- Working out what speed to move at
but how do you tell when things have gone a bit wrong?
With racing the most common problem is getting stuck, which can happen for any number of reasons:
- Running into the wall
- Crashing into another robot
- Running into an obstacle
- Leaving the track and getting lost
In Formula Pi hitting walls and other robots will happen so we need to be able to get the YetiBorg going again.
The easy part is knowing what to do, most of the time simply reversing slightly is enough to recover.
The challenge is to figure out we are stuck from what the Raspberry Pi camera can see.
What we can do is see if the camera image is changing.
If we are moving along the image should be different between the last frame and the next frame.
So if we take two frames:
we can then produce a difference image using Open CV:
import cv2 frame1 = cv2.imread('frame1.jpg') frame2 = cv2.imread('frame2.jpg') frameDiff = cv2.absdiff(frame1, frame2) cv2.imwrite('diff.jpg', frameDiff)
with the result being:
So how does this difference image help us?
The basic idea is that two frames with movement will have a much larger difference then two frames from the same place.
This difference image is from two frames in the same place:
We can simplify the image into a single number to check by taking the average value from the differences:
change = frameDiff.mean() print change
In the cases above we got about 4.22 for the image with movement and 1.97 for the image without movement.
A simple threshold can then be used to see if we are moving or not:
if change > 2.5: moving = True else: moving = False
To avoid false positives we can simply check we get many images without enough change, then we know we are stuck.
In order to make things even more accurate we crop the image first so that we are only looking where the track should be.
Now that we know we are stuck we can reverse a little bit and get on with racing again :)
Add new comment | https://www.formulapi.com/blog/stuck | CC-MAIN-2020-05 | refinedweb | 396 | 78.48 |
This dataset contains field boundaries and crop type information for fields in Kenya. PlantVillage app is used to collect multiple points around each field and collectors have access to basemap imagery in the app during data collection. They use the basemap as a guide in collecting and verifying the points.
Post ground data collection, Radiant Earth Foundation conducted a quality control of the polygons using Sentinel-2 imagery of the growing season as well as Google basemap imagery. Two actions were taken on the data 1)several polygons that had overlapping areas with different crop labels were removed, 2) invalid polygons where multiple points were collected in corners of the field (within a distance of less than 0.5m) and the overall shape was not convex, were corrected. Finally, ground reference polygons were matched with corresponding time series data from Sentinel-2 satellites (listed in the source imagery property of each label item).
PlantVillage (2019) "PlantVillage Kenya Ground Reference Crop Type Dataset", Version 1.0, Radiant MLHub. [Date Accessed]
from radiant_mlhub import Dataset ds = Dataset.fetch('ref_african_crops_kenya_01') for c in ds.collections: print(c.id)
Python Client quick-start guide
RADIANT EARTH | https://mlhub.earth/data/ref_african_crops_kenya_01 | CC-MAIN-2022-40 | refinedweb | 192 | 56.15 |
This page is not entirely complete yet, but is hopefully error-free.
If you find anything wrong, misleading or missing, please let me know)
access modifier
An access-modifier refers to the public, private and protected keywords found before instance variables and methods. Briefly:
accessor
An accessor is a method which is used to get the value of something. For example, the getHeight() method is an accessor. There is nothing
special about accessors in Java terms, it's just a word used to describe what they do (access something).
alias
An alias is, literally, a different name for the same thing. Aliases can only be created between object types (see data-type). Consider this code:
Thing a, b;
a = new Thing();
b = a;
This declares two `Thing's, called `a' and `b', initially undefined. The first assignment creates a new Thing
and assigns it to `a'. The second assignment assigns `a' to `b'. This does not create a new Thing, however. It makes
`b' point at the same Thing that `a' points at. After the assignments, `a' and `b' are aliases for the same Thing. A change
to the contents of `a' will be visible in `b' -- because they are the same object.
Though the above fragment does not demonstrate it, aliases can be useful for efficiency (e.g. allowing separate parts of a program access to some shared object). However,
they can also destroy program logic.
Looking a bit deeper at Java, it becomes clear that at least some aliasing is endemic. The declaration "Thing a;" does not declare a "Thing" as such, but rather
a reference to a "Thing" object. The "Thing" is created from "new Thing();" (using the new operator), which returns a reference to some
new "Thing" object. Most often (and in the above fragment) the reference returned by "new" is assigned to some variable (not assigning it and just
writing "new Thing();" is allowed, but in many cases is not useful).
assignment
Assignment is essentially the single-equals operator `='. The left-hand side of the assignment is always a variable. The right-hand side can
be any valid expression.
attribute
An `attribute' refers to a variable declared inside a class. These are also known as ``fields'' and ``class variables''.
boolean expression
A boolean-expression is an expression which has a boolean (true of false) result. Common examples include the relational
operators: ==, !=, <, <=, >, >=. Equality operators (`==' (equals) and `!=' (not equals)) work
on objects, but test whether the two objects given are the same object (i.e. aliases for each other).
class
A class can be viewed in a couple ways:
For example, one might refer to "the setHeight method in the Triangle class".
code block
A code-block is the code found between the left and right curly braces `{' and `}', in other words, a block of code. Code-blocks will generally
be nested through the use of if-statements and for-loops.
constructor
A constructor is a special ``method'' within a class which is called when the class is instantiated with the new-operator. For example, a simple class
might look like:
class Foo {
private int x;
private int y;
// constructor
public Foo ()
{
// initialise private variables
x = 0;
y = 0;
}
// another constructor
public Foo (int x, int y)
{
// initialise private variables from parameters
this.x = x;
this.y = y;
}
// so we can print() it
public String toString ()
{
return ("(" + x + "," + y + ")");
}
}
This class can be constructed (and used) in two ways:
class Test1 {
public static void main (String args[])
{
Foo bar1, bar2;
bar1 = new Foo ();
bar2 = new Foo (42, 99);
System.out.println ("bar1 = " + bar1);
System.out.println ("bar2 = " + bar2);
}
}
As many constructors as required can be added to a class, although creating a vast amount of them in a single class is not recommended.
Note/update: Constructors are not methods in the sense of being methods, but it's probably the most accurate description for what it is
(a chunk of code wrapped up with a name and optional formal parameters). Constructors look like methods, except they have no return type (not even
`void'). If you add a return type, it becomes a regular method, and is no longer a constructor (this could be a potential source of errors..).
data type
Data-types, generally, provide a type-system for programming langauges. Types are a somewhat abstract concept -- in most cases, the actual hardware has
very little knowledge of types (typically 8, 16, 32, 64 and 128-bit signed and unsigned integers; 32, 64 and 128-bit floating-point numbers; and addresses).
One way to think about types is as ``shaped holes'', with variables being the pieces that fit in those holes. When you declare a variable,
you are effectively creating a new `piece'. The `holes' are created wherever a variable might be used. This includes type-casts, formal-parameters
to methods (also shown in the method-signature), and operators.
The picture on the right tries to visualise this: imagine the three shapes represent variables of type `char', `int' and `double' in Java,
that could be the code:
char x;
int y;
double z;
The three `holes' could corresponding represent the formal parameters to a particular method, for example:
public void foo (char a, int b, double c)
{
// body of method
}
The process of putting these two together (putting the `shapes' in the `holes') would be the use of those three variables in a method-call to `foo',
for example:
foo (x, y, z);
In this case, the shapes clearly fit neatly. If the ordering of parameters is changed, however (typical programming error), the shapes will no longer fit
in the holes and a compiler-error is generated.
The above shows an example of primitive-types. Object types are slightly more complicated, especially with inheritance, but fit this
shape-analogy nicely. The picture on the right shows an example of three object `shapes' and three object `holes', for an `Object', `Integer' and
`String':
Object obj;
Integer val;
String str;
As above, these `shapes' can be plugged into their corresponding `holes' without a problem. But unlike that above, some of the `shapes' fit through different
`holes'. The `String' shape will fit in the `Object' hole, but the `Object' shape will not fit in the `String' hole.
This means, for example, that a `String' can be used as an actual-parameter to a method whose formal-parameter is an `Object'. This is allowed
because `String' is inherited from `Object'. Similar rules apply to all the various object-types (classes) and their
respective inheritance trees. In Java, any object is automatically inherited from `Object' -- so any object `shape' will fit an `Object' hole.
There is a certain element of `magic' in the Java type-system. This includes automatic type promotion for some primitive types and some fairly special
handling for `String's. Automatic type promotion happens when, for example, the `String' shape is placed in an `Object' hole -- it
automatically fits. The string-handling magic is a bit more subtle: if the compiler expects a string, and is given an object, it will use the `toString'
method of that object, if present. primitive-types don't have methods -- the conversion to a `String' for these is just magic..
There is also one shape that cannot exist -- `void'. This is generally only used to explicitly indicate `nothing'.
declaration
A declaration is used to declare a variable of a certain type. To save typing, more than one variable can be declared in a single declaration,
by seperating the variables with commas. For example:
int x, y, z;
String some_string;
Board area;
Triangle tri1, tri2;
byte some_byte;
Declarations can appear at any point in code, making the declared variable(s) available from the declaraction to the end of the code-block (closing brace).
encapsulation
Encapsulation is essentially collecting together variables and methods inside a class. Classes are generally built in such a way that
they model one particular aspect of a problem. For example, the "Triangle" class is used to represent a triangle, providing methods which allow the
programmer to interact with a Triangle object. The implementation of a class (code and instance-variables inside it) should be transparent to the programmer
(unless said programmer wrote the class). Only the thing a programmer needs to know about a class are its public methods and constructors.
exception
Exceptions are a way of indicating failure inside Java programs. Exceptions are thrown by a method or code-block, and caught
`higher-up' in the program. In order to catch an exception, a `try/catch' block must be used. This is a special construct, that attempts to
execute some code, and if that code generates an exception, execution continues at the place where the exception is caught. This implies that if an exception
is thrown, but is not caught, the program/thread will terminate -- and that is exactly what happens.
The actual `execption' thrown is an instance of a class that either extends `Exception' or implements the
`Throwable' interface. This is more useful for creating your own exceptions, rather than just using them.
Any method that throws an exception must declare it as part of its method-signature. There is a slight exception to this in that any code
can throw the special `RuntimeException', or anything decended (by inheritance) from it. `RuntimeException' can be caught, however.
For example:
public void foo () throws Exception
{
Exception e;
e = new Exception ("from foo()");
throw e;
}
A corresponding `try/catch' block for this could be:
try {
System.out.println ("here 1");
foo ();
System.out.println ("here 2");
} catch (Exception x) {
System.out.println ("abort! " + x.getMessage());
}
When run, this code will not print out the second (``here 2'') message -- execution skips from the `throw' inside `foo' to the
corresponding `catch' block. Multiple `catch' blocks are also permitted, enabling some selection over what exceptions are caught. As with the
usual rules of inheritance and type compatability (see data-types), a `catch' block for a particular exception will catch that exception
and any of its sub-classes. The ordering of `catch' blocks has an effect here, for example:
try {
foo ();
} catch (Other o) {
System.out.println ("Other: " + o.getMessage ());
} catch (Exception e) {
System.out.println ("Exception: " + e.getMessage ());
}
If `foo' throws an exception that is really an `Other', the first `catch' block will be used. For other exceptions, the second
`catch' block would be used. This example, however, will not catch exceptions that do not extend the `Exception' class -- i.e.
those that implement `Throwable'; those exceptions must be caught explicitly.
expression
An expression is something which has a value. For example, "4 + 5" is an expression, as is "(((x + 1) / 10) - thing.getHeight())".
Anything which is calculated is generally an expression. Expressions only make sense when dealing with primitive-types (boolean,
int, float, double). One rare exception to this is String addition (which really means string concatenation).
flow of control
The flow-of-control in a program is the path traced during execution (running the program). Different statements affect the flow-of-control in different ways.
For instance, when an if-statement is reached, the flow-of-control will either run through it (if the condition is true), or will skip it (if the condition is false).
for loop
A for-loop has the syntax "for (foo; bar; blip) { code }". foo is replaced with a statement which is executed before the looping starts.
bar is replaced with a boolean-expression which determines whether code should be executed. blip is a statement which is executed
at the end of the loop (after code). For-loops are sufficient for just about all loop-programming requirements. Here is a simple example of a loop:
for (int x = 0; x < 5; x++) {
System.out.println ("Hello world! [x = " + x + "]");
}
This would print "Hello world! [x = Y]" five times, with `Y' being replaced the value of `x'.
Any or all of foo, bar, blip and code can be omitted, if they are not needed. For example, the loop:
for (;;) {
System.out.println ("Hello world!");
}
will print "Hello world!" forever (well, until the program is terminated). There is also a while-loop, which is a simplified for-loop.
identifier
An identifier is the name of something. This includes the names of variables, methods, classes, etc.
if statement
An `if' statement is used to conditionally execute code based on a boolean-expression. An `if' statement may optionally have an `else'
part, that is executed if the boolean expression is false. For example:
if (x == 42) {
System.out.println ("x is 42");
} else {
System.out.println ("x is not 42");
}
In this case, the braces (`{' and `}') are not strictly required, however, it is good practice to always use them. The slight exception to this rule
is when writing out cascading `if's, described below.
`if' statements can also be nested, to create a cascading effect. This is useful when testing for one `true' condition of many. For example:
if (x == 42) {
System.out.println ("x is 42");
} else if (y == 42) {
System.out.println ("x is not 42, but y is");
} else if (z == 42) {
System.out.println ("neither x nor y are 42, but z is");
} else {
System.out.println ("nothing was 42");
}
Strictly speaking, there could be more braces in this -- around each of the first two `else' conditions. However, since the `else' code is always
another `if' (in the first two cases), these braces can be safely omitted -- it also helps to keep the indentation sane. Contrast with:
if (x == 42) {
System.out.println ("x is 42");
} else {
if (y == 42) {
System.out.println ("x is not 42, but y is");
} else {
if (z == 42) {
System.out.println ("neither x nor y are 42, but z is");
} else {
System.out.println ("nothing was 42");
}
}
}
That is significantly less pleaseant and a lot harder to read.
implementation
An `implementation', in Java-speak, refers to a class that provides an implementation of an interface.
inheritance
Inheritance has two main uses in Java. Firstly, it allows for code re-use. When one class extends another,
it inherits the public and protected methods and attributes from the class it is extending.
The new class can override inherited methods, and/or add its own new methods and attributes. The second use of inheritance
relates to the type-system. When one class extends another, the new class is type-compatible with the ancestor (and all ancestors
back to `Object'). This type-compatability is similar to interfaces, but the `interface' is a real class.
Here is a fairly simple example demonstrating inheritance:
public class Base
{
public void sayHello ()
{
System.out.println ("Hello base world!");
}
}
public class Other extends Base
{
public void sayHello ()
{
System.out.println ("Hello other world!");
}
}
When the `Other' class extends `Base', it inherits the `sayHello()' method. This is then overridden to
provide an `Other'-specific `sayHello()'.
To be completed
instance
The term `instance' refers to a class that has been instantiated at run-time, also known as an object. Instances
of classes are typically created using the `new' operator. For example:
Foo x;
x = new Foo ();
After this code has been executed, `x' is an instance of `Foo'. Unlike objects, it would be unusual (and incorrect)
to refer to null-objects as instances. Furthermore, the type of an instance is not necessarily the same as used in its variable declaration.
For example:
Object x;
x = new Foo ();
The `x' here is an `Object', that is an instance of `Foo'. Java provides a built-in `instanceof' operator that tests whether
some object is an instance of some class. For example:
public void display (Shape s)
{
if (s instanceof Triangle) {
Triangle t = (Triangle)s; // type-cast
...
}
}
If the `Shape' given as a parameter is actually an instance of a `Triangle' then the code inside the if will be executed (performing
the type-cast successfully).
int
integer
See int. Additionally, Java provides an Integer class, which encapsulates the int primitive-type.
interface
An `interface' is a collection of method-signatures, that can be implemented by a class. Interfaces are defined in
a slightly different way to classes, for example:
public interface Enumerable
{
public int getValue ();
// other method signatures
}
The general idea is that we can then define classes that implement this interface -- and those classes must define a
method called `getValue' with the same method-signature. For example:
public class Foo implements Enumerable
{
public int getValue ()
{
return 42;
}
// other methods/attributes belonging to "Foo"
}
Interfaces may also be used as data-types, but you cannot create a new interface. For example:
public void thing (Enumerable e)
{
System.out.println ("e\'s value is " + e.getValue ());
}
public void test ()
{
Foo f = new Foo ();
thing (f); // allowed because "Foo" implements "Enumerable"
}
loop counter
A `loop-counter' is the term given to the typically incrementing variable inside a for-loop. For example:
for (int i = 0; i < 5; i++) {
System.out.println ("Hello world! [i = " + i + "]");
}
In this code, `i' is the loop-counter.
method
A method is a piece of code within class which performs some action upon that class. For example, the "Circle" class has a "setDiameter" method,
which is used to set the diameter of the circle. Methods usually come in three flavours, accessors, mutators and `do-something' methods
(a `goBoing' method for example).
method signature
A `method-signature' is a formal `description' of a method. This includes at least the method's name and return-type, and possibly formal-parameters,
access-modifiers and whether the method is static. Method signatures are found on actual method declarations themselves, and inside interfaces.
For example:
public class Foo {
private static int foo (int x)
{
// body of foo
}
}
public interface Bar {
public double bar ();
}
mutator
A mutator is a method which is used to modify the value of something (usually a private variable inside the class). For example, the setWidth()
method is a mutator. There is nothing special about mutators in Java terms, it's just a word used to describe what they do (modify something).
naming-scheme
Briefly, names can be classified in the following ways, using the following styles:
Much of the naming is convention, rather than enforced. It makes good sense, however, to pick one style and stick to it.
new operator
The new operator is used to create a new instance of a class. new takes the name of a class and arguments to any constructor as
an argument, and returns a new instance of that class. Every class can be created with no arguments to its constructor, eg:
x = new Foo ();
tri = new Triangle ();
area = new Board ();
These three assign the new object to a variable of the corresponding type. Often the declaration of an object
will be accompanied by the creation of it, as in "Foo x = new Foo ();". Assignment need not necessarily be involved either, for example:
Board area = new Board ();
area.add (new Circle (100, Color.blue, 0, 0));
I wouldn't exactly encourage using new in this way, as it makes the code harder to read/understand.
null
`null' is a special `nothing' value for objects. It is used (mainly) to distinguish between instantiated and non-instantiated object variables.
`null' is a valid value for any object, but not primitive-types.
object
`object' refers (literally) to an instance of a class. ie, something generated by the new-operator.
operator
Operators are used to combine operands (the operator's arguments) and produce a result. For primitive-types, a wide-range of operators are
allowed, for example: `+' (add), `-' (subtract, unary-minus), `*' (multiply), `/' (divide), `%' (modulo), `<='
(less-than or equal), etc.
For object types, only the `==' (equality) and `!=' (not-equal) operators are allowed -- and these test for the same-object, not whether
the internal structure of those objects is the same.
The one slightly strange operator is the conditional-operator. This is a sort of mini `if-then-else' construct, but unlike an `if', it has a value.
For example:
y = ((x == 42) ? 0 : (x - 1));
The first part of the operator (before the `?') must be a boolean-expression. The remaining two expressions are the results. If the
condition is `true', then the first expression (between `?' and `:') is the result. Otherwise the second expression (after the `:')
is the result. This operator has many uses, but can result in slightly hard-to-read code.
overriding
to be completed..
primitive data type
Primative data-types are the building blocks of information storage. Java has a few of these:
reference
repetition
Repetition is what a loop performs, ie, repeat a certain code-block until a condition is met. This is handled by for and while
loops in Java.
return type
The `return-type' of a method refers to what type of value it returns. For methods that don't return anything (i.e. return nothing), the
return-type is `void'. Methods may return values of primitive-types, or objects.
run-time
The term `run-time' refers to things that occur when the program is executing (e.g. a user has `run' it).
selection
Selection refers to the "switch" statement in Java. This allows the programmer to take a different actions based upon the value of an expression.
For example:
switch (x) {
case 0:
case 1:
System.out.println ("foo " + x);
break;
case 2:
case 3:
System.out.println ("bar " + x);
break;
default:
System.out.println ("something else " + x);
break;
}
The expression inside the switch statement (in this case `x') must evaluate to a primitive-type. The values against
the "case" statments must be constants of that type.
statement
A statement is a piece of code which performs some action. Statements in programs are mostly assignments, method calls,
conditionals, loops, and other control statements (return, throw, etc.). At the end of most
statements, a semi-colon `;'is found. Statements are different from declarations, as a declaration does not perform any action.
static
The `static' keyword is a modifier that can be used on attribute and method declarations. Essentially it means
``there is only one of these, this one''. As there is only one, `static' attributes and methods are not associated
with instances of classes, but the class itself. For example:
class Foo {
public static int x;
public int y;
public static void bar ()
{
// body of bar
}
}
The ordinary attribute `int y' belongs to instances of `Foo' -- i.e. it does not exist without an
instance of `Foo'. The static attribute `static int x', however, belongs to the class, so it always exists,
as long as `Foo' is defined.
`static' attributes and methods are referenced using the class name -- `Foo' in this case. For example:
int z = Foo.x;
Foo.bar ();
Foo.x = 42;
Inside `Foo's methods, the `Foo.' prefix is often un-necssary. The `static' attributes and methods of a class are
visible to its instances as well, that means they can be accessed in conjunction with an instance, as well as `stand-alone'.
For example, the following method could be added to the `Foo' class:
public int update ()
{
y = x;
x = 0;
}
Because this is a non-static method, we require an instance of `Foo' to call it. For example:
Foo a, b;
a = new Foo ();
b = new Foo ();
a.x = 99;
b.update ();
System.out.println ("b.y = " + b.y + ", Foo.x = " + Foo.x);
When run, this code fragment will output ``b.y = 99, Foo.x = 0'', because the `x' is effectively shared between the two instances of `Foo'.
string
A string is a sequence of characters. In Java, strings are encapsulated by the String class. String constants
are defined with ` "this is a string" ' (some text inside double-quotes). Java has some special handling for strings, which
although being inconsistent makes the programmer's job much easier. Notably there is `+' operator on strings which performs concatenation,
for example, "System.out.println ("The value of x is " + x + ". see ?");". Another oddity is that here the integer `x' is
automatically converted to a string. Similar behavior from objects can be obtained by providing a class with a toString()
method.
termination
Termination of a Java program happens when:
type-cast
A type-cast is used to convert between data-types in Java. For primitive-types, type-casts extend or
truncate precision, or change the representation of, a number. Type-casts are specified as they are in C -- with the `target' type
in parenthesis before the `source' (thing being cast). For example:
int x;
double d;
d = 42.5;
x = (int)d;
One thing to be wary of: what is the value of `x' after the type-cast ? It's 42, rather than 43. This behaviour (of integer to
floating-point conversion) is typically known as ``round-to-zero''. Some languages allow you to specify how such conversions should be
handled, other languages (such as occam) require the conversion to be specified exactly (rounded or truncated).
You can fake ``round-to-nearest'' behaviour by adding 0.5 before the type-cast (conversion).
Type-casts of object types are somewhat different, but are performed using the same syntax. An object can only be cast to something that it
already is, i.e. a class that is present in the object's inheritance heirarchy.
Returning to the shape/hole analogy given in data-types, an object type-cast is a sort of `adapter', as shown in the picture to the
right. In this case, the adapter is a type-cast from `Object' to `Integer'. Whatever we plug into the `Object' hole
must really be an `Integer'. If it is not, a `ClassCastException' exception is thrown. For example:
public void foo (Object obj)
{
Integer i;
i = (Integer)obj; // type-cast
}
public void bar ()
{
Integer i = new Integer (42);
foo (i);
}
public void blip ()
{
String s = "hello, world!";
foo (s);
}
If the `bar' method is called, a new `Integer' is created and passed as an `Object' to `foo'. `foo' then casts
this back into an `Integer', that will succeed, since the `Object' it sees really is an `Integer'. On the other hand, if
`blip' is called, is passes a new `String' (as an `Object') to `foo'. When `foo' attempts to cast this into
an `Integer', it will fail, throwing `ClassCastException'.
variable
A variable in Java is basically something in which information can be stored (appear on the left-hand side of an assignment statement).
Variables are referred to by name in the code, for example "int x;" declares an integer variable called `x'.
while loop
A while-loop has the syntax "while (condition) { code }". As long as the boolean-expression `condition' evaluates to true,
the statements comprising `code' will be executed. This is equivalent to the for-loop "for (; condition;) { code }".
Java also has a do-while loop, which is similar, but the condition is checked after the code has been
executed: "do { code } while (condition);". This tends to be less common however, as `code' is executed at least once, before
`condition' is checked. | http://frmb.org/javaglossary.html | crawl-001 | refinedweb | 4,462 | 65.12 |
Investors in SeaWorld Entertainment Inc. (Symbol: SEAS) saw new options become available today, for the December 20th expiration. One of the key inputs SEAS options chain for the new December 20 SEAS, that could represent an attractive alternative to paying $26.05/share today.
Because the 9.79% return on the cash commitment, or 15.01% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for SeaWorld Entertainment Inc., and highlighting in green where the $24.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $27.00 strike price has a current bid of $3.00. If an investor was to purchase shares of SEAS stock at the current price level of $26.05/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $27.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 15.16% if the stock gets called away at the December 20th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if SEAS shares really soar, which is why looking at the trailing twelve month trading history for SeaWorld Entertainment Inc., as well as studying the business fundamentals becomes important. Below is a chart showing SEAS's trailing twelve month trading history, with the $27.00 strike highlighted in red:
Considering the fact that the $27.52% boost of extra return to the investor, or 17.66% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 51%, while the implied volatility in the call contract example is 45%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $26.05) to be. | https://www.nasdaq.com/articles/seas-december-20th-options-begin-trading-2019-04-26 | CC-MAIN-2021-31 | refinedweb | 334 | 66.03 |
#include <ldap.h>
The Lightweight Directory Access Protocol (LDAP) (RFC 3377) provides access to X.500 directory services. These services may be stand-alone or part of a distributed directory service. This client API supports LDAP over TCP (RFC2251), LDAP over TLS/SSL, and LDAP over IPC (UNIX domain sockets). This API supports SASL (RFC2829) and Start TLS (RFC2830). A Start TLS operation is performed by calling ldap_start_tls_s(3). A LDAP bind operation is performed by calling ldap_sasl_bind(3) or one of its friends. Subsequently, other operations are performed by calling one of the synchronous or asynchronous routines (e.g., ldap_search_ext_s(3) or ldap_search 2253. The ldap_explode_dn(3) routines can be used to work with DNs.
Search filters to be passed to the search routines are to be constructed by hand and should conform to RFC 2254.
LDAP URL are to be passed to routines are expected to conform to RFC 2255. The ldap_url(3) routines can be used to work with LDAP URLs.
These API manual pages are loosely based upon descriptions provided in the IETF/LDAPEXT C LDAP API Internet Draft, a (orphaned) work in progress. | http://man.linuxmanpages.com/man3/ldap.3.php | crawl-003 | refinedweb | 188 | 75.61 |
Let’s talk about Recursion.
Recursion. What is it? It is where a solution to a problem depends on solutions to smaller instances of the same problem. Let’s use the Fibonacci Sequence for this article.
What is the Fibonacci Sequence?
In a Fibonacci sequence a number is the sum of its two preceding numbers. For example, 1 1 2 3 5 8. 8 is made up of 5 and 3, 5 is made up of 2 and 3, 3 is made up of 2 and 1, etc.
We can say, a number is equal to the sum of the number minus one and the number minus 2 in the sequence.
n = (n-1) + (n-2)
If we think this as a function we could do:
fib(n) = fib(n-1) + fib(n-2)
In ruby we could write:
def fib(n)
return fib(n-1) + fib(n-2)
end
The issue we have at this point is we created an infinite loop. We need a base case.
def fib(n)
#base case
if n <= 1
return n
end
return fib(n-1) + fib(n-2)
end
What are we looking at?
Let’s say we want to know what the 4th number in the Fibonacci is.
def fib(n)
#base case
if n <= 1
return n
end
return fib(n-1) + fib(n-2)
endprint fib(4)
The 4 doesn’t meet the base case, so it will skip the if statement. We hit the return fib(n-1) which in this case is fib(4–1) which is fib(3). Since 3 doesn’t meet the base case, it will break down fib(3) to fib(n-1) + fib(n-2). It will first do fib(n-1) which is fib(2). It will keep calling itself till it meets the base case.
Let’s work out the left side of the tree first. We can see it will go all the way down to fib(2) = fib(1) + fib(0). Since the base case is n ≤= 1, fib(1) is 1 and fib(0) is 0. Therefore, fib(2) = 1 + 0. The second number in the Fibonacci Sequence is 1.
So now it will figure out what fib(1) is. As you can already see, an issue with Recursion is that it will keep solving the same problem even if its was solved previously. (Look for a future blog on dynamic programming and memoization.) But let’s keep going and see where it goes.
We know that fib(1) is = 1 so fib(3) = 1 + 1. The third number in Fibonacci is 2 so we’re looking good. Fibonacci = 1, 1, 2 , 3, 5.
Now we need to go up the right side of the tree.
So fib(4) = 3. The 4th number in the Fibonacci Sequence is 3.
Issues with Recursion
Recursion is slow. It also has greater space requirements. Why use it? It’s good to know and some data structure problems can only be solved through recursion. | https://alexduterte.medium.com/lets-talk-about-recursion-dae3857fc88d?source=post_page-----dae3857fc88d-------------------------------- | CC-MAIN-2021-43 | refinedweb | 503 | 84.57 |
On Fri, Feb 22, 2008 at 02:50:25PM -0500, Alexander Strange wrote: > > On Feb 22, 2008, at 2:26 PM, Michael Niedermayer wrote: > >> On Fri, Feb 22, 2008 at 01:56:07PM -0500, Alexander Strange wrote: >>> r12164 added sse2 PNG encoding under CONFIG_ENCODERS, but didn't check it >>> later on. >> >> the code seems to be used in the decoder >> >> [...] > > So it does. > > Index: libavcodec/i386/dsputil_mmx.c > =================================================================== > --- libavcodec/i386/dsputil_mmx.c (revision 12178) > +++ libavcodec/i386/dsputil_mmx.c (working copy) > @@ -1585,6 +1585,8 @@ > *left = src2[w-1]; > } > > +#endif // CONFIG_ENCODERS > + Why is there an #ifdef CONFIG_ENCODERS there at all? Its gccs job to omit unused static: <> | http://ffmpeg.org/pipermail/ffmpeg-devel/2008-February/045995.html | CC-MAIN-2016-30 | refinedweb | 106 | 73.68 |
table of contents
NAME¶nutscan_scan_nut - Scan network for available NUT services.
SYNOPSIS¶
#include <nut-scan.h>
nutscan_device_t * nutscan_scan_nut(const char * startIP, const char * stopIP, const char * port, long usec_timeout);
DESCRIPTION¶The nutscan_scan_nut() function try to detect available NUT services and their associated devices. It issues a NUT request on every IP ranging from startIP to stopIP. startIP is mandatory, stopIP is optional. Those IP may be either IPv4 or IPv6 addresses or host names.
You MUST call nutscan_init(3) before using this function.
A specific port number may be passed, or NULL to use the default NUT port.
This function waits up to usec_timeout microseconds before considering an IP address does not respond to NUT queries. | https://manpages.debian.org/testing/libnutscan-dev/nutscan_scan_nut.3.en.html | CC-MAIN-2021-25 | refinedweb | 116 | 59.09 |
In the BizForm module, when you select a specific BizForm and click on the Data tab, the UI is provided via an iFrame by the BizForm_Edit_Data.aspx page.
All the BizForm_Edit_Data.aspx web form has is a Web User Control, BizFormEditData.ascx.
When I look in the code-behind file, I see that in the Page_Load method there are three event handlers that are defined to "initialize" the unigrid. one of the moethods is OnExternalDataBound.
Nowhere in the code do I see a datasource explicitly defined for the UniGrid, and I'm not sure what the OnExternalDataBound method is triggered by -- it seems like something outside of the context of the code is defining the data for the UniGrid and binding it.
Where/how is this done?
There is some complex (compiled code I might add) that gets the data for that biz form dynamically and dynamically creates the columns and such. So unless you had the full source code version you might not know the full process.
In most of the controls you don't define a datasource, you define a CMS Class (namespace.class) and all the compiled code in the background handles the rest. There are instances when you can provide the datasource but not many.
Please, sign in to be able to submit a new answer. | https://devnet.kentico.com/questions/how-is-the-unigrid-in-bizformeditdata-ascx-binding-data | CC-MAIN-2018-30 | refinedweb | 220 | 71.85 |
The original drowsiness detection tutorial was inspired by a conversation I had with my Uncle John, a long haul truck driver who has witnessed more than a few accidents due to fatigued drivers.
The post was really popular and a lot of readers got value out of it…
…but the method was not optimized for the Raspberry Pi!
Since then readers have been requesting me to write a followup blog post that covers the necessary optimizations to run the drowsiness detector on the Raspberry Pi.
I caught up with my Uncle John a few weeks ago and asked him what he would think of a small computer that could be mounted inside his truck cab to help determine if he was getting tired at the wheel.
He wasn’t crazy about the idea of being monitored by a camera his entire work day (and I don’t necessarily blame him either — I wouldn’t want to be monitored all the time either). But he did eventually concede that a device like this, and ideally less invasive, would certainly help avoid accidents due to fatigued drivers.
To learn more about these facial landmark optimizations and how to run our drowsiness detector on the Raspberry Pi, just keep reading!
Raspberry Pi: Facial landmarks + drowsiness detection with OpenCV and dlib
Today’s tutorial is broken into four parts:
- Discussing the tradeoffs between Haar cascades and HOG + Linear SVM detectors.
- Examining the TrafficHAT used to create the alarm that will sound if a driver/user gets tired.
- Implementing dlib facial landmark optimizations so we can deploy our drowsiness detector to the Raspberry Pi.
- Viewing the results of our optimized driver drowsiness detection algorithm on the Raspberry Pi.
Before we get started I would highly encourage you to read through my previous tutorial on Drowsiness detection with OpenCV.
While I’ll be reviewing the code in its entirety here, you should still read the previous post as I discuss the actual Eye Aspect Ratio (EAR) algorithm in more detail.
The EAR algorithm is responsible for detecting driver drowsiness.
Haar cascades: less accurate, but faster than HOG
The major optimization we need to run our driver drowsiness detection algorithm on the Raspberry Pi is to swap out the default dlib HOG + Linear SVM face detector and replace it with OpenCV’s Haar cascade face detector.
While HOG + Linear SVM detectors tend to be significantly more accurate than Haar cascades, the cascade method is also much faster than HOG + Linear SVM detection algorithms.
A complete review of both HOG + Linear SVM and Haar cascades work is outside the scope of this blog post, but I would encourage you to:
- Read this post on Histogram of Oriented Gradients and Object Detection where I discuss the pros and cons of HOG + Linear SVM and Haar cascades.
- Work through the PyImageSearch Gurus course where I demonstrate how to implement your own custom HOG + Linear SVM object detectors from scratch.
The Raspberry Pi TrafficHAT
In our previous tutorial on drowsiness detection I used my laptop to execute driver drowsiness detection code — this enabled me to:
- Ensure the drowsiness detection algorithm would run in real-time due to the faster hardware.
- Use the laptop speaker to sound an alarm by playing a .WAV file.
The Raspberry Pi does not have a speaker so we cannot play any loud alarms to wake up the driver…
…but the Raspberry Pi is a highly versatile piece of hardware that includes a large array of hardware add-ons.
One of my favorites is the TrafficHAT:
The TrafficHAT includes:
- Three LED lights
- A button
- A loud buzzer (which we’ll be using as our alarm)
This kit is an excellent starting point if you're just getting some exposure to GPIO on the Raspberry Pi.
You don’t have to use the TrafficHAT of course; any other piece of hardware that emits a loud noise will do.
Another approach I like is to plug a 3.5mm audio cable into the audio jack and then set up text-to-speech using espeak (a package available via apt-get). Using this method you could have your Pi say "WAKEUP WAKEUP!" when you're drowsy. I'll leave this as an exercise for you to implement if you so choose.
However, for the sake of this tutorial I will be using the TrafficHAT. You can buy your own TrafficHAT here.
From there you can install the required Python packages for the TrafficHAT via pip. First, ensure you're in the appropriate virtual environment on your Pi (I have a thorough explanation of virtual environments in this previous post).
From a terminal or SSH connection, the installation amounts to pip-installing the GPIO packages (i.e., pip install RPi.GPIO gpiozero).
From there, if you want to check that everything is installed properly in your virtual environment, you can run the Python interpreter directly and try importing each package.
Note: I’ve made the assumption that the virtual environment you are using already has the above packages installed in it. My cv virtual environment has NumPy, dlib, OpenCV, and imutils already installed, so by using pip to install RPi.GPIO and gpiozero, I’m able to access all six libraries from within the same environment. You may pip install each of the packages (except for OpenCV). To install an optimized OpenCV on your Raspberry Pi, just follow this previous post. If you are having trouble getting dlib installed, please follow this guide.
The driver drowsiness detection algorithm is identical to the one we implemented in our previous tutorial.
To start, we will apply OpenCV’s Haar cascades to detect the face in an image, which boils down to finding the bounding box (x, y)-coordinates of the face in the frame.
Given the bounding box the face we can apply dlib’s facial landmark predictor to obtain 68 salient points used to localize the eyes, eyebrows, nose, mouth, and jawline:
As I discuss in this tutorial, dlib’s 68 facial landmarks are indexable which enables us to extract the various facial structures using simple Python array slices.
Given the facial landmarks associated with an eye, we can apply the Eye Aspect Ratio (EAR) algorithm, which was introduced by Soukupová and Čech in their 2016 paper, Real-Time Eye Blink Detection Using Facial Landmarks:
Figure 3: Top-left: A visualization of eye landmarks when then the eye is open. Top-right: Eye landmarks when the eye is closed. Bottom: Plotting the eye aspect ratio over time. The dip in the eye aspect ratio indicates a blink (Image credit: Figure 1 of Soukupová and Čech).
On the top-left we have an eye that is fully open and the eye facial landmarks plotted. Then on the top-right we have an eye that is closed. The bottom then plots the eye aspect ratio over time. As we can see, the eye aspect ratio is constant (indicating that the eye is open), then rapidly drops to close to zero, then increases again, indicating a blink has taken place.
You can read more about the blink detection algorithm and the eye aspect ratio in this post dedicated to blink detection.
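In equation form (p1 through p6 denote the six eye landmarks, with p1 and p4 the horizontal eye corners):

```latex
\mathrm{EAR} = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert}
```

The numerator sums the two vertical landmark distances while the denominator weights the single horizontal distance, so the ratio falls toward zero as the eyelids close.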
In our drowsiness detector case, we’ll be monitoring the eye aspect ratio to see if the value falls but does not increase again, thus implying that the driver/user has closed their eyes.
Once implemented, our algorithm will start by localizing the facial landmarks on extracting the eye regions:
We can then monitor the eye aspect ratio to determine if the eyes are closed:
And then finally raising an alarm if the eye aspect ratio is below a pre-defined threshold for a sufficiently long amount of time (indicating that the driver/user is tired):
In the next section, we’ll implement the optimized drowsiness detection algorithm detailed above on the Raspberry Pi using OpenCV, dlib, and Python.
A real-time drowsiness detector on the Raspberry Pi with OpenCV and dlib
Open up a new file in your favorite editor or IDE and name it pi_drowsiness_detection.py . From there, let’s get started coding:
Lines 1-9 handle our imports — make sure you have each of these installed in your virtual environment.
From there let’s define a distance function:
On Lines 11-14 we define a convenience function for calculating the Euclidean distance using NumPy. Euclidean distance is arguably the best known and most widely used distance metric; it is normally described as the distance between two points "as the crow flies".
Now let’s define our Eye Aspect Ratio (EAR) function which is used to compute the ratio of distances between the vertical eye landmarks and the distances between the horizontal eye landmarks:
The return value will be approximately constant when the eye is open and will decrease towards zero during a blink. If the eye is closed, the eye aspect ratio will remain constant at a much smaller value.
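To make the two helpers concrete, here is a self-contained sketch using NumPy (the toy landmark coordinates below are made up purely for illustration; they are not real detector output):

```python
import numpy as np

def euclidean_dist(pt_a, pt_b):
    # straight-line ("as the crow flies") distance between two (x, y) points
    return np.linalg.norm(np.asarray(pt_a, dtype=float) - np.asarray(pt_b, dtype=float))

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmark points, ordered as in dlib's 68-point model
    a = euclidean_dist(eye[1], eye[5])  # first vertical distance
    b = euclidean_dist(eye[2], eye[4])  # second vertical distance
    c = euclidean_dist(eye[0], eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

# toy landmarks: an "open" eye and a nearly closed one
open_eye   = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.4), (4, 2.4), (6, 2), (4, 1.6), (2, 1.6)]
print(eye_aspect_ratio(open_eye))    # larger value: eye open
print(eye_aspect_ratio(closed_eye))  # value near zero: eye closed
```

Running this on the toy points shows the open eye scoring well above the closed one, which is exactly the signal the drowsiness check relies on.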
From there, we need to parse our command line arguments:
We have defined two required arguments and one optional one on Lines 33-40:
- --cascade: The path to the Haar cascade XML file used for face detection.
- --shape-predictor: The path to the dlib facial landmark predictor file.
- --alarm: A boolean to indicate if the TrafficHat buzzer should be used when drowsiness is detected.
Both the --cascade and --shape-predictor files are available in the “Downloads” section at the end of the post.
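A minimal sketch of that parser follows; the short flags are assumptions consistent with the description above, and the file names passed in are just example values standing in for a real command line:

```python
import argparse

# mirrors the three arguments described above
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", required=True,
    help="path to the face detection Haar cascade XML file")
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to dlib's facial landmark predictor file")
ap.add_argument("-a", "--alarm", type=int, default=0,
    help="boolean used to indicate if the TrafficHat buzzer should be used")

# parse an example invocation instead of sys.argv
args = vars(ap.parse_args([
    "--cascade", "haarcascade_frontalface_default.xml",
    "--shape-predictor", "shape_predictor_68_face_landmarks.dat",
    "--alarm", "1"]))
print(args["alarm"])  # 1
```

Note that argparse converts the hyphenated --shape-predictor flag into the dictionary key shape_predictor.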
If the --alarm flag is set, we’ll set up the TrafficHat:
As shown on Lines 43-46, if the argument supplied is greater than 0, we'll import the TrafficHat function to handle our buzzer alarm.
Let’s also define a set of important configuration variables:
The two constants on Lines 52 and 53 define the EAR threshold and the number of consecutive frames the eyes must be closed for the driver to be considered drowsy, respectively.
Then we initialize the frame counter and a boolean for the alarm (Lines 57 and 58).
From there we’ll load our Haar cascade and facial landmark predictor files:
Line 64 differs from the face detector initialization from our previous post on drowsiness detection — here we use a faster detection algorithm (Haar cascades) while sacrificing accuracy. Haar cascades are faster than dlib’s face detector (which is HOG + Linear SVM-based) making it a great choice for the Raspberry Pi.
There are no changes to Line 65 where we load up dlib’s shape_predictor while providing the path to the file.
Next, we’ll initialize the indexes of the facial landmarks for each eye:
Here we supply array slice indexes in order to extract the eye regions from the set of facial landmarks.
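As a sketch, the slicing itself can be demonstrated with a stand-in landmark array (the index ranges below are the standard 68-point model ones; I'm assuming the same constants the imutils face_utils helper exposes):

```python
import numpy as np

# In dlib's 68-point model the left eye spans indexes 42-47 and the
# right eye spans 36-41 (Python slice ends are exclusive).
(L_START, L_END) = (42, 48)
(R_START, R_END) = (36, 42)

# stand-in for the (68, 2) array produced from the landmark predictor
shape = np.arange(68 * 2).reshape(68, 2)

left_eye = shape[L_START:L_END]
right_eye = shape[R_START:R_END]
print(left_eye.shape, right_eye.shape)  # (6, 2) (6, 2)
```

Each slice yields the six (x, y) points a single eye contributes, which is precisely what the EAR function consumes.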
We’re now ready to start our video stream thread:
If you are using the PiCamera module, be sure to comment out Line 74 and uncomment Line 75 to switch the video stream to the Raspberry Pi camera. Otherwise if you are using a USB camera, you can leave this unchanged.
We then sleep for one second so the camera sensor can warm up.
From there let’s loop over the frames from the video stream:
The beginning of this loop should look familiar if you’ve read the previous post. We read a frame, resize it (for efficiency), and convert it to grayscale (Lines 83-85).
Then we detect faces in the grayscale image with our detector on Lines 88-90.
Now let’s loop over the detections:
Line 93 begins a lengthy for-loop which is broken down into several code blocks here. First we extract the (x, y)-coordinates and the width + height of each detection in rects. Then, on Lines 96 and 97, we construct a dlib rectangle object using the information extracted from the Haar cascade bounding box.
From there, we determine the facial landmarks for the face region (Line 102) and convert the facial landmark (x, y)-coordinates to a NumPy array.
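Stripped of the dlib call itself, the cascade-to-dlib handoff is just a coordinate conversion; the helper below (bbox_to_rect_coords is my name for illustration, not one from the post) shows the mapping a dlib.rectangle(left, top, right, bottom) constructor expects:

```python
def bbox_to_rect_coords(x, y, w, h):
    # Haar cascades return (x, y, width, height); dlib.rectangle wants
    # (left, top, right, bottom), so the conversion is:
    return (int(x), int(y), int(x + w), int(y + h))

print(bbox_to_rect_coords(10, 20, 100, 120))  # (10, 20, 110, 140)
```

In the real script the returned tuple would be unpacked straight into dlib.rectangle before calling the landmark predictor.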
Given our NumPy array, shape, we can extract each eye's coordinates and compute the EAR:
Utilizing the indexes of the eye landmarks, we can slice the shape array to obtain the (x, y)-coordinates of each eye (Lines 107 and 108).
We then calculate the EAR for each eye on Lines 109 and 110.
Soukupová and Čech recommend averaging both eye aspect ratios together to obtain a better estimate (Line 113).
This next block is strictly for visualization purposes:
We can visualize each of the eye regions on our frame by using cv2.drawContours and supplying the cv2.convexHull calculation of each eye (Lines 117-120). These few lines are great for debugging our script but aren’t necessary if you are making an embedded product with no screen.
From there, we will check our Eye Aspect Ratio ( ear ) and frame counter ( COUNTER ) to see if the eyes are closed, while sounding the alarm to alert the drowsy driver if needed:
On Line 124 we check the ear against the EYE_AR_THRESH — if it is less than the threshold (eyes are closed), we increment our COUNTER (Line 125) and subsequently check it to see if the eyes have been closed for enough consecutive frames to sound the alarm (Line 129).
If the alarm isn’t on, we turn it on for a few seconds to wake up the drowsy driver. This is accomplished on Lines 136-138.
Optionally (if you’re implementing this code with a screen), you can draw the alarm on the frame as I have done on Lines 141 and 142.
That brings us to the case where the ear wasn’t less than the EYE_AR_THRESH — in this case we reset our COUNTER to 0 and make sure our alarm is turned off (Lines 146-148).
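The counter/alarm logic above can be replayed without a camera at all. The sketch below uses illustrative threshold values (the downloadable code may use different constants) and feeds in synthetic per-frame EAR values:

```python
EYE_AR_THRESH = 0.3        # below this, the eye is treated as closed
EYE_AR_CONSEC_FRAMES = 16  # frames eyes must stay closed before alarming

def run_detector(ear_values):
    # Replays a sequence of per-frame EAR values through the same
    # counter logic described above, returning the frame indexes at
    # which the alarm turns on.
    counter, alarm_on, alarm_frames = 0, False, []
    for i, ear in enumerate(ear_values):
        if ear < EYE_AR_THRESH:
            counter += 1
            if counter >= EYE_AR_CONSEC_FRAMES and not alarm_on:
                alarm_on = True
                alarm_frames.append(i)
        else:
            counter = 0
            alarm_on = False
    return alarm_frames

blink  = [0.35] * 5 + [0.1] * 3 + [0.35] * 5   # a quick blink
drowsy = [0.35] * 5 + [0.1] * 20               # eyes stay closed
print(run_detector(blink))   # [] -> no alarm for a normal blink
print(run_detector(drowsy))  # alarm fires once the counter reaches 16
```

A short blink never trips the alarm because the counter resets as soon as the eyes reopen, while a sustained run of low EAR values does; in practice you would tune the two constants for your camera's frame rate.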
We're almost done: in our last code block we'll draw the EAR on the frame, display the frame, and do some cleanup:
If you're integrating with a screen or debugging, you may wish to display the computed eye aspect ratio on the frame as I have done on Lines 153 and 154. The frame is displayed to the screen on Lines 157 and 158.
The program stops when the 'q' key is pressed on a keyboard.
You might be thinking, "I won't have a keyboard hooked up in my car!" Well, if you're debugging using your webcam and your computer at your desk, you certainly do. If you want to use the button on the TrafficHAT to turn the drowsiness detection algorithm on and off, that is perfectly fine. The first reader to post a solution in the comments that uses the button to toggle the drowsiness detector on the Pi deserves an ice cold craft beer or a hot artisan coffee.
Finally, we clean up by closing any open windows and stopping the video stream (Lines 165 and 166).
Drowsiness detection results
To run this program on your own Raspberry Pi, be sure to use the “Downloads” section at the bottom of this post to grab the source code, face detection Haar cascade, and dlib facial landmark detector.
I didn’t have enough time to wire everything up in my car and record the screen while as I did previously. It would have been quite challenging to record the Raspberry Pi screen while driving as well.
Instead, I’ll demonstrate at my desk — you can then take this implementation and use it inside your own car for drowsiness detection as you see fit.
You can see an image of my setup below:
To run the program, execute the script while supplying the cascade, shape predictor, and alarm arguments, for example: python pi_drowsiness_detection.py --cascade haarcascade_frontalface_default.xml --shape-predictor shape_predictor_68_face_landmarks.dat --alarm 1
I have included a video of myself demoing the real-time drowsiness detector on the Raspberry Pi below:
Our Raspberry Pi 3 is able to accurately determine if I’m getting “drowsy”. We were able to accomplish this using our optimized code.
Disclaimer: I do not advise that you rely upon the hobbyist Raspberry Pi and this code to keep you awake at the wheel if you are in fact drowsy while driving. The best thing to do is to pull over and rest; walk around; or have a coffee/soda. Have fun with this project and show it off to your friends, but do not risk your life or that of others.
How do I run this program automatically when the Pi boots up?
This is a common question I receive. I have a blog post covering the answer here: Running a Python + OpenCV script on reboot.
Summary
In today’s blog post, we learned how to optimize facial landmarks on the Raspberry Pi by swapping out a HOG + Linear SVM-based face detector for a Haar cascade.
Haar cascades, while less accurate, are significantly faster than HOG + Linear SVM detectors.
Given the detections from the Haar cascade we were able to construct a dlib.rectangle object corresponding to the bounding box (x, y)-coordinates in the image. This object was fed into dlib’s facial landmark predictor which in turn gives us the set of localized facial landmarks on the face. From there, we applied the same algorithm we used in our previous post to detect drowsiness in a video stream.
I hope you enjoyed this tutorial!
Great article! Do you plan an article (or series) on face/eye blink detection in low-light environments? I followed your guide recently, but I'm really excited to learn how to raise the detection quality in low-light environments and on low-quality video streams.
It’s always easier to write code for (reliable) computer vision algorithms for higher quality video streams than try to write code that compensates for a poor environment. If you’re running into situations where you are considering writing code for a poor environment I would encourage you to first examine the environment and see if you can update to make it higher quality.
Due to business requirements I can't force our clients to shoot themselves only in good-to-process conditions. They could be using our software anywhere they want, so… I'd like to read about any approaches available to solve this problem. Just an idea for future publications.
Other readers have suggested infrared cameras and infrared lights. I would expect that solution to solve the problem when it is dark outside. There are other “poor conditions” such as reflection and glare which you would need to overcome too. This blog post will get you started but it isn’t intended to be a solution that you can sell.
I would suggest taking a look at iPhone X, Intel Realsense F200/R200, Logitech C922 and the Structure sensor (structure.io) to name a few. Also take a look at how Google Tango approaches depth for AR. I personally think everything (apps and sensors) are moving to 3D now…
Hi, I am Chinese and I like your essay.
Do you know the TrafficHAT buy link is invalid?
Can you use raspberry pi to write an article about face recognition using tensorflow, opencv, Dlib?
Sir, I'm getting a problem with the --cascade path… please resolve the issue ASAP.
It sounds like you’re struggling with command line arguments. Make sure you read this post first.
Raspberry Pi has “night vision” camera boards. They have IR LED spotlights and some of the cameras come without IR filter. Your eyes are not able to see the infrared light, but the camera is. Add light to low light and create higher quality video stream…
There is also IR webcams available and it is possible to use infrared light with some of the standard non IR webcam. Most of the webcams have IR blocking filter, but some of them doesn’t filter properly. (And it is possible to remove the filters in some cases. Use google for this.)
Maybe this could help you?
(The articles are excellent! Thank you Adrian!)
WoooooooooooooooooooooooooooooooW
That is great
now this is what i need
very very thank you Adrian
Thanks Fariborz, I’m glad you enjoyed the tutorial 🙂
Hi Dr. Rosebrock, great article as usual! Thank you for the good consistent content. I’m learning a lot 🙂
Thank you 🙂
Hi Adrian,
Thanks for posting this.
In this post from May ’17 about running dlib on a raspberry pi, you mention that a Raspberry Pi3 is not fast enough to do dlib’s face landmark detection in realtime.
Since the drowsiness detection also uses dlib’s face landmarks, does it have similar performance issues as you mention in your older post? Or have you figured out some optimizations for RPi3 to improve performance?
Thanks,
Rohit
Hi Rohit — please see the section entitled “Haar cascades: less accurate, but faster than HOG”. This is where our big speedup comes from.
Logitech webcam is better than Pi camera?
It depends on how you define “better”. What is your use case? How do you intend on deploying it? Both cameras can be good for different reasons. The Raspberry Pi camera module is cheaper but the Logitech C920 is technically “better” for many uses. It is nice being able to connect the camera directly to the Pi though.
Oh come on man, I just wrote this idea two weeks ago in C++.
Obviously ideas can travel across the Pacific and through continents.
But the good news for me is I optimized it with an awesome idea, and now I can process drowsiness at almost 30 frames per second from a 1 megapixel image stream on a Raspberry Pi.
WOOOOW…..
I beg you Dr. Rosebrock, do not publish such ideas; image processing fans and researchers will get it with just a hint.
Hey Arash — I actually wrote the original drowsiness detection tutorial way back in May. Secondly, I tend to write blog posts 2-3 weeks ahead of time before they are actually published. I’m not sure what your point is — you would prefer I note publish tutorials?
Where can i see your blogs?
Hi Arash,
Can you share the source code? Is this C++ using dlib on a Raspberry Pi at 30 fps? Is it around 1280×960 resolution?
I would love to discuss it with you if you have a contact address.
Best..
How to download updated imutils?
I would suggest using “pip”:
$ pip install --upgrade imutils
If you are using a Python virtual environment please make sure you activate it before installing/upgrading.
Hey Adrian,
Thanks for sharing!
As always a great job !!
I tested with webcom and verified a great performance in the identification of drowsiness, with a processing load of 70%. Perhaps there is something that can be improved to reduce PLOAD, perhaps by altering Haarcascade, perhaps by using the one haarcascade_eye.xml or similar, targeting only the eye area. I wanted you to share your opinion with us. Can you comment on the subject?
Thanks for all help, Adrian
In order to apply drowsiness detection we need to detect the entire face — this enables us to localize the eyes. We could use a Haar cascade to detect eyes but the problem is that we need to train a facial landmark detector for just the eyes. That wouldn’t do much to improve processing speed.
Hi Adrian,
I’m impressed with the tutorial!
Please let me know what Operating System used in the Raspberry Pi 3.
Hi Raghu — Raspbian is the official operating system and the one used. You can download it here.
Hey Adrian,
First thank you for sharing this great edition !!
Doing some tests I found the following error in the code, when I used the “PICamera”, I got the following TypeError:
from:
vs = VideoStream(usePicamera=True).start()
to:
vs = VideoStream(usePiCamera=True).start()
This corrects the following failure:
vs = VideoStream(usePicamera=True).start()
TYpeError: __init__() got an unexpected keyword argument ‘usePicamera’
Thanks,
Marvin
Thank you Marvin — you are correct. I’ve updated the post, and I’ll update the download soon. Thanks for bringing this to my attention.
I have now updated the code download as well. Thanks again!
Hey!
So there’s something which bothers me here:
Your original article used something like HOG+SVM and a sliding window for detection.
I got to say, that face detector that you have provided does work most of the time (~75%).
However, doesn't RCNN (or Faster R-CNN, etc., you get the point) just work better than pre-deep-learning techniques? I mean, that's what Justin from Stanford claims.
Is that really the case? If so, when should I NOT prefer RCNN & Why?
RCNN-based methods will be more accurate than both Haar cascades and/or HOG + Linear SVM (provided the network is properly trained and deployed). The problem can be speed — we need to achieve that balance on the Raspberry Pi.
Cool, thx for the answer.
Just another thing: Does RCNN require more training data as well?
I mean, it requires a bounding box for each object for each picture.
HOG+SVM requires negative and positive examples, and for the false positives, we need to manually tell the learning algo that those are false positives.
So,in your experience, which learning algo requires more training data to work decently?
It will vary on a dataset to dataset basis, but in general, you can assume that your CNN will need more example images.
Hello Adrian,
Just started Image processing and sounds like fun but really tired of installing libraries, I have been “setting” up my pi for about a week now.
Stuck on pip install scipy.
running setup.py bdist_wheel for scipy … takes forever.
Any tips?
Thanks
Hi Muhammad — yes setting up the Pi can be quite frustrating. For some of the PIP installs you must be patient and let the Pi finish. If you’re interested in a pre-configured and pre-installed Raspbian image, it comes with the Quickstart and Hardcopy Bundles of my book, Practical Python and OpenCV + Case Studies.
Hi Adrian,
I think it is now time to use cnn based algorithms for face detection part. Is it slow?! not anymore. You can make an awesome binarization model method tutorial in your website which face detector part would be more accurate and faster. Let me know if you need help.
Best and Greeting from Venice ICCV17,
Hi Majid — thanks for the suggestion. Enjoy your conference!
Hi Majid,
I am a university student (not in a computer field) and I am interested in face detection with many methods, but I have less information about CNN-based methods. Would you mind sharing the name of the paper on CNN-based face detection from ICCV17 (or maybe not from that conference), or a related paper on this topic?
Hi Adrian
Recently I came to know about thin clients. Can you please tell me the difference between a thin client and a Pi with an SD card? Is there any additional memory support? Is it possible to connect a thin client to a portable display (7”)? Please reply.
Hi Suganya, see this information about thin clients. Basically thin clients rely on a server for storage and applications. You don’t store or process much locally on a thin client. A Raspberry Pi is not a thin client, but I suppose you could make it into one. Raspberry Pis (at least the Raspbian OS), allow for processing and storage on the device — it’s a fully functional small computer. Yes, you can attach a display to a thin client.
Hello Adrian,
I'm a high school student and I would like to reproduce your project for my science class and try some variables of my own. I wonder what camera and other equipment you used for this experiment. Would it be possible to specify?
Thank you in advance.
If you want, I can share with you the results of my experiment at the end of my project.
best regards
Fred
Hi Fred, thanks for the comment. It’s great to hear you are interested in computer vision! I was in high school as well when I first got into image processing.
The camera for this tutorial doesn’t matter a whole lot. I like the Raspberry Pi camera module but it might be easier for you to use the Logitech C920 which is plug-and-play compatible with the Raspberry Pi.
For this specific blog post I used the Logitech C920.
Thank you so much,
I’ll give it a try and let you know how far I can get.
Cheers
Fred
Hi Adrian,
In which folder should I extract the zip file?
Thank you for your help.
I think I got everything else ready now for my testing.
Thank you so much
Fred
It doesn’t matter where you download and extract the .zip file. Extract it, change directory into it, and execute the script.
Hi Adrian, I ran the code, but it runs very slowly. What is the problem?
Hi Hien — what type of system are you executing the code on? What are the specs of the machine?
Hi Adrian. If I use a night vision cam, do I need to change the code?
You might have to. I would verify that faces can still be detected and the facial landmarks localized when switching over to the night vision cam.
Hello Adrian. I am a student and want to make this my project. The TrafficHat is not available here, so I'm planning on using the 3.5mm audio jack to play the alarm. I'm really a newbie at image processing. The part of the code to change when replacing the alarm really confuses me. Can you help me out with replacing the code so it works without the TrafficHat? Thank you.
Hi Liz — congrats on working on your project, that’s fantastic. I haven’t used the audio jack or associated audio libraries on a Raspberry Pi so unfortunately I can’t give any direct advice. But in general you’ll need to remove all TrafficHat imports and then play your audio file on Lines 136-138.
If you’re new to computer vision and OpenCV I would suggest you work through Practical Python and OpenCV. I created this book to help beginners and it would certainly help you get quickly up to speed and complete your project.
Hi, Dr. Rosebrock. Your work is very good and thank you for sharing. I just started with the Raspberry Pi. I installed dlib and OpenCV and ran the code on the Raspberry Pi. How can I run this project automatically when the Raspberry Pi boots? And how can I add the .xml and .dat file paths to the code? Thanks in advance.
Hello, thanks for the comment. Can you be a bit more specific when you say “run the project when Raspberry is opened”? Are you referring to running the project on reboot? Secondly, I’m not sure what you mean by “add .xml and .dat files to code”? You are trying to hardcode the paths to the files in the code?
Thank you, Dr., your reply made me very happy.
Yes, I want to run the project on reboot. I would also like to hardcode the paths of the .xml and .dat files. Finally, I am using a buzzer instead of the TrafficHat and I did not get any sound output. My goal is just to learn something… thanks…
If you are using a buzzer you should read up on GPIO and the Raspberry Pi. You should also consult the manual/documentation for your particular buzzer. You can hardcode the paths to the XML file if you so wish. Just create a variable that points to the paths. Or you can execute the script at boot and include the full paths to the XML files as command line arguments. Either method will work. For more information on running a script on reboot, take a look at this blog post.
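To illustrate the hardcoding option, a minimal sketch is below. The dictionary keys mirror the script's argument names, but treat the paths as placeholders for wherever your files actually live:

```python
# Replace the argparse block with fixed paths (adjust to your own filesystem).
args = {
    "cascade": "haarcascade_frontalface_default.xml",
    "shape_predictor": "shape_predictor_68_face_landmarks.dat",
    "alarm": 0,
}
```

The rest of the script can then keep reading `args["cascade"]` and friends unchanged.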
Dr. Rosebrock, thank you so much. I ran the project and your article was very useful.
(For the buzzer: buzzer + pin = Raspberry Pi pin 29, buzzer – pin = Raspberry Pi pin 25, GND.) I can send a video of it working.
Yorulmaz can you please send me the code with this buzzer implementation.
Swathi can you please send me the buzzer implemented code.
Would testing for a yawn follow a similar approach??? Thanks.
Yes, monitoring the aspect ratio of the mouth would be a reasonable method to detect a yawn.
The only problem is occlusion (when the hand moves in front of the mouth) or if the user is singing a song. I think one might need to use a deep learning training and classification approach. Thoughts?
Deep learning might be helpful but it could also be overkill. If a hand, coffee cup, or breakfast sandwich moves in front of the mouth, I'm not sure that matters provided it's only an occlusion for a short period of time. I doubt many people yawn once and then immediately fall asleep unless they have a specific condition. A more robust drowsiness detector should involve sensor fusion, such as body temperature, heart rate, oxygen levels, etc.
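For reference, a mouth aspect ratio analogous to the post's eye aspect ratio could be sketched as follows. This exact formulation and the landmark indices are assumptions of mine, not code from the post:

```python
import numpy as np

def mouth_aspect_ratio(mouth):
    """mouth: 20 (x, y) points for landmarks 48-67 of the 68-point model."""
    a = np.linalg.norm(mouth[2] - mouth[10])  # vertical: landmark 50 to 58
    b = np.linalg.norm(mouth[4] - mouth[8])   # vertical: landmark 52 to 56
    c = np.linalg.norm(mouth[0] - mouth[6])   # horizontal: landmark 48 to 54
    return (a + b) / (2.0 * c)
```

A yawn would then be flagged when the ratio stays above a chosen threshold for several consecutive frames, mirroring the consecutive-frame logic used for the eyes.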
This error is shown when I run it, please help… I did not change anything in the code.
File “pi_detect_drowsiness.py”, line 145
cv2.putText(frame, “DROWSINESS ALERT!”, (10, 30),
^
IndentationError: expected an indented block
Make sure you use the “Downloads” section of this blog post to download the source code. It looks like you formatted the code incorrectly when copying and pasting.
Hi Adrian,
I’m impressed with your Drowsiness Detection algorithm for Raspberry Pi.
Why don't you develop the algorithm for iOS and Android phones, so that it would remove the cost of buying a Raspberry Pi?
What is the dlib facial landmark detection speed on the Raspberry Pi when the number of people is large (about 10)?
The facial landmark detector is extremely fast, it’s the face detection that tends to be slow. It really depends on what your goal is. Are you trying to apply drowsiness detection to all ten people in the input frame?
Hi Dr Rosebrock.
I have a question. I ran your code on my Raspberry Pi and get more or less 5 frames per second, but in your video the frames appear much faster. Is there some way to capture more frames per second?
Just to clarify — did you use my code exactly (downloaded via the “Downloads” form of this blog post)? Did you make any modifications? It would also be helpful to know which model of the Raspberry Pi you are using.
I tried to use the same code as above, but I have a problem installing dlib on my Windows machine. Can you please tell me how to install it on Windows? I downloaded the dlib package directly from the net but it's not working.
Hi Adrian, Thank you very much.
I ran the code downloaded from this blog on a Raspberry Pi 3 Model B (Raspbian Stretch), but it runs very slowly. What is the problem?
I followed blogs to install opencv3 and dlib on my raspberry pi3 ( optimizing opencv on raspberry pi and install dlib ( the easy, complete guide ) ).
Can you elaborate on what you mean by “slowly”? Are you using a Raspberry Pi camera module or a USB camera? Additionally, how large are the input frames that you are processing? Make sure you are using the Haar cascades for face detection rather than the HOG + Linear SVM face detector provided by dlib. This will give you additional speed as we do in this blog post.
Hi Adrian,
Thanks for sharing. I have the same problem as zjfsharp. I did exactly as the post says (with the optimized OpenCV installed) and successfully ran the downloaded, unchanged code on my Raspberry Pi 3 Model B. But the FPS is around 4. The operating system is Raspbian Stretch Lite with a GUI. While running the code, the CPU runs at 600MHz (half of 1.2GHz). The memory usage is about 40 percent. The result is far less smooth than your video shown above.
Hey Charlie — just to clarify, how are you accessing your Raspberry Pi? Via SSH or VNC? Or via a standard keyboard + HDMI monitor setup? Additionally, are you using a USB webcam or a Raspberry Pi camera module?
I'm using a USB cam (Logitech em2500) via a standard keyboard + HDMI setup. The low FPS seems to have nothing to do with CPU frequency (boosted to 1.2GHz), CPU and memory usage, or the power supply (5V, 2A).
Thanks for sharing the hardware setup, Charlie. Are you using Python 2.7 or Python 3?
Python 3. I’m still stuck here. Do you have any idea? Thanks for your reply.
I know Python 3 handles threading and queuing slightly different than Python 2. Would you be able to try Python 2 and see if you have the same results?
hello did you solve your problem charlie ?
Hello Adrian,
The OS is Raspbian Stretch and the hardware is a Raspberry Pi 2.
OpenCV and all other imports are OK
but the result is “AttributeError: ‘NoneType’ object has no attribute ‘shape'” 🙂
any comments?
thanks in advance
Emre
$ python pi_detect_drowsiness.py --cascade haarcascade_frontalface_default.xml --shape-predictor shape_predictor_68_face_landmarks.dat --alarm 1
[INFO] using TrafficHat alarm…
[INFO] loading facial landmark predictor…
[INFO] starting video stream thread…
Traceback (most recent call last):
File “pi_detect_drowsiness.py”, line 88, in
frame = imutils.resize(frame, width=450)
…
(h, w) = image.shape[:2]
AttributeError: ‘NoneType’ object has no attribute ‘shape’
If you are getting a “NoneType” error than OpenCV cannot read the frame from your Raspberry Pi camera module or USB webcam. Double-check that OpenCV can access your Raspberry Pi camera by following this post. Additionally, you should read up on NoneType errors and how to debug them here.
installing cam driver solved my problem 🙂
sudo modprobe bcm2835-v4l2
thanks a lot
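More generally, a defensive check around the frame read gives a clearer message than the NoneType traceback. This is a sketch of the pattern, not code from the post; the error message wording is my own:

```python
def read_checked(stream):
    """Read a frame, failing loudly if the camera returned nothing."""
    frame = stream.read()
    if frame is None:
        raise RuntimeError(
            "Camera returned no frame; check the connection and, for the Pi "
            "camera module, that the V4L2 driver is loaded "
            "(sudo modprobe bcm2835-v4l2)")
    return frame
```

You would call `read_checked(vs)` in place of `vs.read()` in the main loop.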
Hi Adrian, I have confirmed that the camera captures video and photos, as I followed your previous camera setup tutorial.
However, I'm having exactly the same error as Emre. I'm using the Raspberry Pi camera for my project. What should I do? Any other ideas?
Emre's solution also worked for me. Now working, alhamdulillah.
Additionally, I'm using the Raspberry Pi camera module, not a USB one.
This is awesome. Can I execute this program on boot though? I tried using your tutorial on crontab and instead of using “python pi_reboot_alarm.py” I replaced it with this
python pi_detect_drowsiness.py --cascade haarcascade_frontalface_default.xml \
--shape-predictor shape_predictor_68_face_landmarks.dat --alarm 1
But it did not work at all. Can you help me out?
See this tutorial on running a script on reboot.
You’ll either need to access your Python virtual environment and then execute the script (best accomplished via a shell script) or supply the full path to the Python binary (which I think is a bit easier).
Hello Adrian,
I have a doubt: I am using a Raspberry Pi and I have written the code in Python.
Does the laptop need to stay attached to the module,
or can the coded program be uploaded to the controller and the laptop detached?
Can you explain this to me?
I am a very beginner.
If you’re a beginner I would suggest coding directly on the Raspberry Pi, that way you won’t be confused on which system the code is executing on.
Can I put the code inside the Python shell in the virtual environment? I am having big trouble with how to start scripting, what IDE I should use, and how to run it. I am very sorry if I look dumb, but I am really new to this kind of tech. Can someone help me?
Instead of trying to use the Python shell or an IDE to run the code, simply open up a terminal, access your Python virtual environment via the “workon” command, and execute the script from the terminal. There is no need to launch a shell or IDE.
Sir,
I am facing this error. Please help.
[INFO] loading facial landmark predictor…
[INFO] starting video stream thread…
…
from picamera.array import PiRGBArray
ImportError: No module named ‘picamera’
You need to install the picamera library with NumPy array support:
$ pip install "picamera[array]"
I'm facing an issue with the buzzer. Can you help me fix the buzzer and also share code which runs with a buzzer? It would be a great help. Thank you.
What is the exact error/issue you are having?
Hi Adrian,
I'm unable to add the buzzer to my code. Can you please share code with a buzzer? eSpeak is not working properly and I'm unable to hear any sound from the buzzer; if you could provide working code it would be really helpful. Thank you.
Also, I need to know which GPIO pin I should connect the buzzer to.
I have used this code with a Logitech C270, but processing is slow on the Raspberry Pi 3B model.
Can you help to increase the speed?
The delay is 1.5-2 seconds.
The same code works very smoothly on a laptop.
I have used it with Python 2.
How are you accessing your Raspberry Pi? Via HDMI monitor and keyboard? Over SSH? Over VNC? It sounds like you may be using SSH or VNC.
Hi Adrian, thank you for this tutorial, it's really helpful!
I want to extract x and y for a specific point on the face, for example eye[2]. How can I do that, please? Thanks in advance 🙂
Take a look at this post as it demonstrates how to extract the various facial features. Once you have them you can extract individual (x, y)-coordinates as well.
Hello sir, I have installed all the dependencies and OpenCV properly, but I cannot run the file from the command prompt. Here are the paths that I have included:
ap.add_argument(“-c”, “-/home/pi/Downloads/pi-drowsiness-detection/haarcascade_frontalface_default.xml”, required=True,
help = “/home/pi/Downloads/pi-drowsiness-detection”)
ap.add_argument(“-p”, “-/home/pi/Downloads/pi-drowsiness-detection/shape_predictor_68_face_landmarks.dat”, required=True,
help=”/home/pi/Downloads/pi-drowsiness-detection”)
Is it ok?
When I run the file from the command shell it gives me a “no such file or directory” error.
The code itself does not need to be modified. You can resolve your error by reading up on command line arguments.
Could you write a post about gaze tracking?
Sure, I will consider this for a future tutorial.
I am getting a black frame screen when executing the program.
The Pi camera LED is on, but the frame screen is blacked out.
I am using the picamera module; it's a high-resolution Pi camera module.
This is likely a firmware and/or picamera version issue. I discuss how to resolve the problem in this blog post.
Thanks
Hi Adrian, I am wondering: can we use dlib's 5-point face landmark detector here instead of the 68-point one? If so, what are the necessary changes I have to make in this code to run it on the Pi? Can you please specify or advise on this? Also, how much do you think it would help in improving performance in terms of speed, accuracy, and memory size?
No, you cannot use the 5-point model for drowsiness detection (at least in terms of this code). I discuss why you cannot use the 5-point model inside the 5-point facial landmark post. Be sure to give it a read.
Good morning, I have tried to implement this algorithm in Python 3.6.4.
Lines 50 and 66 are giving errors and there are some libraries that do not work.
What can I do to solve these problems?
Thanks
What are the exact errors that you are getting? Keep in mind that if I, or other PyImageSearch readers, do not know what problems or errors you are having we will be unable to help.
Hi Adrian
When I run your code via python 3 on terminal, it gives me an attribute error on line 69.
predictor = dlib.shape_predictor(args[“shape-predictor”])
AttributeError: ‘module’ object has no attribute ‘shape_predictor’
Could you please help me out ?
Hey Mario — what version of dlib are you using?
Hi Adrian
I installed OpenCV and all the requirements from scratch by following your tutorial again, and now everything works like a charm. Thanks anyway!
Awesome, I’m glad to hear it Mario! Congrats on getting OpenCV installed!
Hi Adrian, I have the same error.
Is there a reason you chose to use VideoStream over VideoCapture to get frames?
I’m trying to add code to record the video to a file using a VideoWriter object. I am having trouble getting the video to record at a regular speed video though, it seems to be in slow mode. I think it has to do with the VideoStream object getting frames. I tried recording with the VideoCapture object (in a test script) and it runs much faster. Any advice would be great! Thanks so much!
Hi Eric.
VideoStreamis part of my own threaded implementation in
imutils. I also have
fileVideoStreamwhich uses
VideoCapture. Check out the source here on GitHub.
Hi Adrian,
When I'm running this code on my Raspberry Pi 3, the results I'm getting have a delay of 8 to 10 seconds. Can you please suggest something to make this faster?
Thanks
Dlib now has a 5-point facial landmark detector that will be significantly faster than the 68-point one. Please see this blog post introducing the 5-point detector.
But as you have mentioned in the blog, with the 5-point facial landmark detector we get only two points per eye and cannot calculate the EAR…
I have to detect drowsiness on an RPi 3. Please help me find a solution.
Thanks!!
You cannot use the 5-point facial landmark detector to compute EAR. The 5-point facial landmark detector cannot be used for drowsiness detection.
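For context, the eye aspect ratio needs six landmarks per eye (indices 36-41 and 42-47 in the 68-point model), while the 5-point model only provides two per eye. A sketch of the computation used in the original drowsiness post:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) points in the 68-point model's ordering."""
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)
```

With only two points per eye there is no vertical distance to measure, which is why the 5-point model cannot drive this ratio.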
Hey Adrian, I've followed your guide, and everything is finished except the sound. As I don't use the TrafficHat, I think I should change the code in lines 40-44:
# check to see if we are using GPIO/TrafficHat as an alarm
if args["alarm"] > 0:
    from gpiozero import TrafficHat
    th = TrafficHat()
    print("[INFO] using TrafficHat alarm...")
I want to make noise from 3.5mm speaker output. Should I change or delete the log?
I really appreciate your help.
Best Regards, Sa-rang.
If you have no intention of using TrafficHat then delete all TrafficHat code from the file.
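One hedged way to structure the swap is to isolate the alarm behind two callables, so the TrafficHat lines can be replaced without touching the detection loop. The pin number and the use of gpiozero's Buzzer are assumptions on my part; check your own wiring and your buzzer's documentation:

```python
def make_alarm(pin=17):
    """Return (on, off) callables for the alarm; falls back to printing
    when no GPIO hardware is available (e.g. when testing on a laptop)."""
    try:
        from gpiozero import Buzzer  # only usable on a real Raspberry Pi
        buzzer = Buzzer(pin)
        return buzzer.on, buzzer.off
    except Exception:
        return (lambda: print("ALARM ON"), lambda: print("ALARM OFF"))

alarm_on, alarm_off = make_alarm()
```

Call `alarm_on()` where the TrafficHat buzzer was switched on and `alarm_off()` where it was stopped.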
Hello Adrian, I have the same problem as Charlie. I have a Raspberry Pi 3 with a Logitech C920 webcam, and I run it with Python 3 but it gives me 5 FPS.
What about Python 2.7? Does it give you the same FPS as well?
I need the source code for the Raspberry Pi.
You can use the “Downloads” section of this blog post to download the source code.
May I ask what the limitations of this device are?
And what Python version are you using here?
I’m not sure what you mean by “limitations”, you’ll need to be more specific as I don’t know if you’re referring to computational limitations, deployment, etc. To address your second question, I used Python 3 but this code will also work with Python 2.7.
I mean, at night when the light is very minimal, is it able to detect the eyes and perform its function? And does the design of the device block the view of the driver? Thanks, sir.
Provided you can detect the face and facial landmarks this method will work. If you cannot detect the face, such as if the face is obscured, it will not work.
Hi,
Is it possible for the drowsiness alert to be sent to a mobile device or web page?
Technically yes, but you would need to modify the code to upload the alert to a web server first. Again, 100% possible but you would need to decide which web service you are using and then read the corresponding documentation.
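A minimal sketch of such an upload using only the standard library; the URL and JSON field names are placeholders for whatever service you pick, not part of the post:

```python
import json
from urllib import request

def build_alert(ear_value):
    """Serialize a drowsiness event as JSON."""
    return json.dumps({"event": "drowsiness", "ear": round(ear_value, 3)})

def send_alert(url, ear_value):
    """POST the alert to a (hypothetical) web endpoint."""
    req = request.Request(url, data=build_alert(ear_value).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req, timeout=5)
```

In the main loop, `send_alert` would be called at the same point the alarm currently fires, ideally from a background thread so the video loop does not block on the network.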
Is it okay to use the latest Python version here (Python 3.7)?
Yes, it should be okay.
Sir, if I use the Raspberry Pi 3B+ here, are there any changes in terms of performance?
The Pi 3B+ is slightly faster so you will see a small increase in speed but not a massive amount.
Sir, I need to implement this in the dark for my project. Can you help me out with the code changes, as I am not able to get the changes done?
It’s unfortunately not as simple as changing a few lines of code here and there. What have you already done to get this project working in low light or no light conditions? What camera are you using?
Thanks Adrian. I really enjoy this tutorial.
1. Can I upload this system's information to a database using the Raspberry Pi? If yes, which database do you recommend (Firebase, SQL, or…)?
2. If I intend to use a buzzer instead of the TrafficHat, would there be a significant difference in the code you provided?
1. Exactly which database you use is really dependent on your project specifications. You should do your own research there. But typically a good first start is a SQL-based database and then go from there.
2. No, there would not be a significant change. Just swap out the TrafficHat code for the GPIO code specific to your buzzer.
Hi,
I am new to image processing and Python as well. Can you give detailed information regarding the installations needed to carry out this project?
Thanks in advance.
You should refer to one of my OpenCV install guides to help get you started.
Good Day Doctor Adrian!
Your project is amazing! I would like to ask if you have already tried using an IR camera in low light conditions for detecting drowsiness. Your reply will be highly appreciated. How about using the Pi NoIR camera, sir?
Thanks and Regards
Hey there Rendhel, I have not tried the code directly with a Pi Noir camera.
Hi Adrian, I'm new to IoT and I need your clarification. As for the TrafficHat and the Raspberry Pi 3, are they two different things or the same (a Pi combined with the TrafficHat)? You made them sound like two, but according to your image it's one, so which is which?
The TrafficHat is a component that connects to the Raspberry Pi itself. They are two different pieces of hardware that connect together.
hi adrian,
Thanks for taking your time to reply to us.
If I plan to use a night vision camera for this program, does any of the code need to be modified, or can I use the same code to detect faces in low light/darkness?
Potentially, but I would start by trying with the night vision camera first before you plan on making any changes.
hi adrian.
Is it possible to control the volume of the TrafficHat buzzer? As in, start from low and go to high according to the frequency of eye closure; in other words, the deeper the sleep, the louder the sound.
Please reply, sir.
Thanks
As far as I know it’s not but you should reach out to the creators of the TrafficHat to verify.
Hi Adrian!
Great work as you do!
But I have the same problem with Charlie.
I'm using a USB cam (Logitech C920) via a standard keyboard + HDMI setup. Unfortunately, when I ran this script using the Haar cascade and 68 facial landmarks with Python 2.7, I got bad performance: the FPS is low and the video stream is slow. How can I get better performance?
Thanks in advance.
Hey Tim — I haven’t been able to replicate the problem that both you and Charlie have had, unfortunately. Try to debug the issue by writing a separate Python script that only pulls frames from your camera and display them to your screen. Is the lag as bad? If so, probably an OpenCV or hardware issue. If not, then you can further debug which part of the code is really slowing you down.
I want to add a condition to this code: if no eyes are detected, it should also start the alarm, since the driver may be sleeping or may have fallen out of the camera's view. I want to apply an if condition; where should I apply it, and on which value? Please help.
I would instead modify the code that if “no face is detected for N frames”, where N is a value you define, then you sound the alarm.
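A sketch of that idea; the class name and the default value of N are my own choices, not from the post:

```python
class NoFaceMonitor:
    """Flags an alarm after N consecutive frames with no detected face."""

    def __init__(self, max_missing=48):
        self.max_missing = max_missing
        self.missing = 0

    def update(self, num_faces):
        """Call once per frame; returns True when the alarm should sound."""
        if num_faces == 0:
            self.missing += 1
        else:
            self.missing = 0
        return self.missing >= self.max_missing
```

In the main loop you would call `monitor.update(len(rects))` right after face detection (or however the detection list is named in your copy of the script) and sound the alarm when it returns True.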
Using the same system, is it possible to emit a vibration when drowsiness is detected? What extra hardware do I need to attach?
You would need a hat for your Pi that has a vibration functionality.
Sir Adrian,
Does the camera you use cut infrared? I am planning to use the NoIR camera so it can be applied in low light conditions. Thanks for the reply.
Regards
No, I used a standard USB camera for this project but you could use a NoIR camera if you wished.
Will there be any change to your setup, sir Adrian?
I would suggest you try and see. It’s nearly impossible for me to predict without seeing your actual environment and where you intend on deploying it. The best way to learn is to learn by doing — it is now your turn 🙂
Hi Adrian!
Great work! I got the project running and it's working fine, but I don't have the TrafficHat. Instead of using the TrafficHat, I want to use GPIO directly to operate a relay module. Please suggest which section of the code I should change, and what code I should put there.
thanks
Anywhere you see TrafficHat code you’ll want to swap that out for your GPIO code. Exactly what that code looks like is 100% dependent on which relay module you’re using, which GPIO pins you’re using, etc.
Hi. I just read this article and I would like to know if it would be possible to create a program that would let the RPi automatically select between the RPi cam or a webcam if the two of them are simultaneously connected to the Raspberry Pi, depending on which of the two cameras detects a face.
I'm trying to create a drowsiness system like yours, only with multiple cameras attached (probably 2 or 3). The cameras would only start capturing depending on which of them detects a face.
It's absolutely possible. Start by reading this tutorial on multiple cameras with the Pi. Each frame will need to be read and face detection applied. If a face is found, hand the face off to a separate process to perform recognition, then show the stream of that camera on your desktop.
Hi. I tried using three cameras. I used if-else statements to interchange between which camera is detecting a face and then lock onto that face as long as that specific camera is still detecting it. I ran dlib facial detection on each camera's frames. However, I noticed a decrease in the speed of the frames captured. The alarm triggers a little later than usual after detecting closed eyes. Is it because of my code, or because of the facial detection function running on three different camera frames? I'd like to know your thoughts about it.
You’re running face detection + recognition on 3 separate cameras? If so, that’s the issue. The Raspberry Pi just isn’t powerful enough for that.
When you said in your previous comment that what I want to do is absolutely possible, did that account for the slowing down of the frame capture? Or does what you had in mind not really follow what I'm doing with the code?
I would just like to know if there is a way to optimize my code to be able to use three separate cameras to capture the driver's face without really slowing the RPi down very much.
Hey Pacquier — I would suggest keeping an eye on the PyImageSearch blog for my upcoming Computer Vision + Raspberry Pi book. I’ll be covering how to optimize the Pi for computer vision applications covering both the hardware and software side of things. It’s too much for me to address in a single comment on this post so I hope you’ll take a look at the book. I’ll be sharing more details soon.
Hi Adrian,
I am facing two errors in the drowsiness detection program using facial landmarks:
1) Import error for imutils:
No module named imutils
2) Import error for dlib:
No module named dlib
I have installed both dlib and imutils via pip install in a virtual environment.
You may be forgetting to access your Python virtual environment before executing the script:
$ workon your_env_name
Hi Adrian, I have a confusing error, can you help me please?
…
ImportError: No module named imutils.video
You need to install the “imutils” library:
$ pip install imutils
Hi Adrian, I know that the Raspberry Pi can't do real-time very well. Can you recommend a development board with better real-time performance?
It actually depends on what you’re trying to do. For some applications of computer vision the Raspberry Pi can run in real-time. And in other cases you should just use the Movidius NCS to speed it up. Otherwise I recommend the Jetson TX2 if you’re interested in embedded deep learning.
Hi sir, can it detect the eyes at night, or when a person is wearing prescription glasses?
No, this method is intended for use when you can clearly detect the eye regions of the user.
If we use a night vision camera, is it possible to do it at night?
Sir Adrian,
Thank you for your amazing introduction and idea. I've set up my Pi and run the code you provided. However, the IDE reports an error:
pi_detect_drowsiness.py: error: the following arguments are required: -c/--cascade, -p/--shape-predictor
It may be an easy problem to deal with, but I just have no idea how to fix it.
Would you plz help me?
Once again, Thanks a lot !
Sir Adrian,
I found that the code should be run in the shell, and it says
ImportError: No module named imutils.video
But I have installed imutils; how did this happen?
According to your error it sounds like imutils it not actually installed. You can install it via:
$ pip install imutils
If you’re new to command line arguments and argparse, that’s okay, but you need to read this tutorial first. Once you read the guide you will understand how to supply the proper command line arguments to the script.
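To illustrate, the flags the script expects can be sketched like this. The paths are the two files from the post's download; here they are passed explicitly as a list so the sketch runs on its own, whereas the real script parses `sys.argv`:

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", required=True,
                help="path to the Haar cascade XML file")
ap.add_argument("-p", "--shape-predictor", required=True,
                help="path to the dlib facial landmark predictor")
ap.add_argument("-a", "--alarm", type=int, default=0,
                help="whether the TrafficHat buzzer should be used")
args = vars(ap.parse_args(["-c", "haarcascade_frontalface_default.xml",
                           "-p", "shape_predictor_68_face_landmarks.dat"]))
```

Note that argparse converts `--shape-predictor` to the dictionary key `shape_predictor`, which is why the error above names both flags as required.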
Hi Adrian, I'm from the Philippines and I love your work on this. But I need help on how to sound the alarm using speakers via the audio jack, since the TrafficHat isn't available here in our country. Thanks!
I demonstrate how to use a speaker with the Raspberry Pi in this guide.
Hi Adrian, Amazing article!
If I want to use the Pi camera, do I just comment out Line 74 and uncomment Line 75 to switch the video stream to the Raspberry Pi camera?
Also, if I use the infrared camera, can I detect drowsiness at night? Thanks:)
You are correct in both counts. An infrared camera will help with the drowsiness detection at night.
Hi Adrian, the alarm does not sound accurately upon eye closure. Sometimes I close my eyes and the alarm is not sounded (not detected); the accuracy is very low. What should I do to make it work reliably in real time?
Hi Sir Adrian,
Thank you for your project. May I ask how you power up your Raspberry Pi in your car? If by power bank, what brand of power bank did you use? A reply is very much appreciated.
Thanks!
Hello Sir,
I need to know if this project can be implemented on the Raspberry Pi Zero (1 GHz processor & 512 MB RAM).
The Raspberry Pi Zero will unfortunately be too slow for this project. I would highly recommend you use a Pi 3.
How do I connect the Raspberry Pi to a laptop, sir? Please reply soon.
Unless I’m misunderstanding your question, typically we just SSH into our Raspberry Pi via a laptop/desktop:
$ ssh pi@your_ip_address
I am getting an error when we run your code, sir.
It is “No module named cv2”. But we have installed OpenCV using your tutorial. What should we do to clear this error, sir?
Unfortunately it sounds like you do not have OpenCV properly installed. You should refer to my OpenCV install guides. Which one did you follow? Make sure you refer to the “FAQ” section at the bottom of each post which explains common errors such as yours.
It is not working when I put on glasses, so kindly tell me what I can do.
This method will not work reliably with glasses.
Hello Sir Adrian, can I get the full code for this project?
You can use the “Downloads” section of this post to download the source code.
Hi. Thanks to this post, I am a student who completed the eye detection. Thank you first.
Can I ask you a few questions?
I am currently using a Raspberry Pi 3B+ and the Pi camera.
First, the provided source code uses Haar cascades. Is there source code using HOG and Linear SVM? I'm having a problem with accuracy.
Secondly, can I use the infrared Pi camera with the same source code?
I'll wait for your answers. Thank you.
HOG + Linear SVM will be very slow on the Pi. If you want to try you can follow this tutorial.
Hello This is a wonderful post!
I have a few questions.
First, I want to use an infrared pi camera. Is there anything to modify in the source code section?
Second, is there any code for HOG + Linear SVM?
1. I haven’t tested this code with an infrared camera. You would need to test it and see.
2. You mean HOG + Linear SVM for face detection? Or arbitrary object detection?
Pls does anyone have an idea on how i can change the video source to be streamed in a python GUI???
. I have already created the GUI but i cannot stream the video to the GUI interface. Im using GUI because i added some features that need to be in the gui
Have you tried this tutorial?
thanks. exactly what im looking for
Hello!
I implemented the project according to the posting, but the fps is about 5-7. So I’m going to add Movidius NCS.
Movidius NCS is based on caffe and tensorflow, and can it help projects in this posting?
I have purchased Movidius NCS and I do not know how to do the initial setup and installation.
I would appreciate your help.
hi Guys,
i tried adding some functions in the for loop (main loop of the program). i
even tried using threading (declaring functions out of the loop and using threading
to call them) in the for loop so as to avoid the video streaming from
slowing down. however, with all these precautions, the streaming is quite slow. pls
what do u suggest i do in order to add some functions in the for loop and at the
same maintain a normal streaming speed?
I would rather say, the for loop is quite fragile. Ur suggestions will really help.
Thanks
Hey, Can I use Infra Red web camera instead of camera used here?
I haven’t tried this code with an infrared camera. Give it a try and see! I would love to know.
if i dont use traffic hat what should i do to make alarm to driver and what are the updation in code please can you give any sugestion
Have you tried using a speaker instead?
I am using a simple 5v piezo buzzer instead of traffic hat please help me out in this case which lines I need to change in the code
Sorry, I am not familiar with that buzzer. You should refer to the documentation associated with your buzzer.
hey, i’m getting an error as the following arguments are required: -c/–cascade, -p/–shape-predictor. I’ve read your tutorial on Python, argparse, and command line arguments. But Both the files are nn the same directory as others but getting these error. Please help me out.
Read this tutorial on command line arguments and how to use them.
Fantastic Article !
Thanks a lot Adrian.
I’m glad you enjoyed it 🙂
Dear Dr. Adrian,
Your projects are very very amazing and important. Thak you to share your knowledge.
Right now, my drowsiness detector is working on my Raspberry Pi in real time. It was difficult to implement some libraries but at the end, it’s working very good.
Could you illuminate to me, how can I detect the eyes through sunglasses? Help me please!!!
Greetings and hugs from Ecuador | https://www.pyimagesearch.com/2017/10/23/raspberry-pi-facial-landmarks-drowsiness-detection-with-opencv-and-dlib/ | CC-MAIN-2019-35 | refinedweb | 10,867 | 72.97 |
So, let's start from the beginning, what are functional components? Well, those are components that are more lightweight because they don't have any data, or computed, nor lifecycle events. They can be treated as just functions that are re-executed once the parameters passed down to it changes.
For more information you can read the official docs for it, or this cool blog post by Nora Brown, or both. They also have a modified API, for reasons that I don't yet know, but now that I'm mentioning it, I got curious, so I might try checking it out afterwards.
But is it really that better? Honestly, I don't really know; I'm just trusting other people on this one. Since it doesn't have to manage reactivity it should be better, because its running less code to get the same results. But how much better? I don't know. I couldn't find an answer and I'm hoping someone will answer this question on the comments.
You know what? I'll tweet this post to the core team (aka Sarah Drasner), and we'll all hope together that we will get our answers, ok? 😂 😅
The ugly parts of this
Ok, so functional components in vue are cool and all, but there are some problems with it, right? I mean, you could very well use the
render() function to do it all and it be happy with it, because with the render function you can better organize your code.
But the wall of learning this syntax when you are so used to the
<template>is so high, let alone the fact that you'll be using this render function when everyone else on your project is used to the template, now we have two syntax in the project and its all your fault, you should be ashamed of yourself, you smarty pants.
— read this very fast, for the aesthetics 😎
You could also try the React way and add to the project the JSX syntax of using html inside js, configuring webpack to understand this syntax, BUUUUT
...the wall of learning this syntax when you are so used to the
<template>is not so high compared to the other, but still, let alone the fact that you'll be using this render function with JSX when everyone else on your project is used to the template, now we have two syntax in the project and its all your fault, you should be ashamed of yourself, you smarty pants.
— read this very fast, for the aesthetics 😎
I know because I tried doing this (cuz I'm a smarty pants (is this slang still used? I learned this in the school 😂 (now I feel like I'm programming in lisp))) but my render function syntax didn't survived the code review.
So, we all hopefully agree that Vue is nice for the simplicity and we should stick with the template syntax because it's
s i m p l e r. Now, if you have a team of smarty pants and you all like to work with template and render functions on the same project, then go ahead and be free, don't listen to me,
also, send me your recuiter's email.
That out of the way, I had some problems with functional components in Vue.js that I wanted to vent out here, and hopefully help anyone with the same problems:
- how on earth do you call a
methodfrom the template? Is it even possible?
- where are my props? And my
$listenersand
$attrs?
- why vue can't find my custom component inside the functional component despite it being registered with the
componentsoption?
- why the custom classes I put on the component from the outside don't get applied?
Executing functions from the template
"I received some data through props, but I want to format it nicely using a custom function. How do I do that? I'm getting an
undefined is not a functionerror, I'll go crazy!"
— Me before figuring it out
Consider the following
<script> part of a component:
<script> export default { name: 'DisplayDate', props: { date: { type: String, required: true, }, }, methods: { format(date) { return new Date(date).toLocaleString() }, }, } </script>
For some reason, functional components don't have access to the vue instance, I suppose it's because there is no Vue instance to begin with, but I could be wrong. So, to access the methods we can't just:
<template functional> <span>{{ format(date) }}</span> </template>
We have to take another path, just
format won't do, we have to do an
$options.methods.format(date). There, this works. It's ugly, but it works. Anyone has a suggestion to make this better?
<template functional> <span>{{ $options.methods.format(date) }}</span> </template>
Anyway if you execute this, you will notice I just lied to you when I said it works...
Accessing props, listeners and attrs?
The reason its not working is because, again, there is no Vue instance, so when the Vue Loader transforms your template into pure JavaScript, it just can't find the
date you just typed. It needs a context, so you have to declare a path for Vue to find it, like we did with the method.
<template functional> <span>{{ $options.methods.format(props.date) }}</span> </template>
"And the
$attrsand
$listeners, I really want to make a full extensible open-closed component, so I kinda need this"
— also me
Those are available too, only in different places. The
$attrs is now at
data.attrs and the
$listeners is at the
listeners (which is an alias to
data.on, but as a suggestion, I'd stick with the new
listeners).
$attrs
For those who didn't even knew this was a thing, let me clarify. In non-functional components,
$attrs is used to represent every attribute passed down to your component declared in props or not. That means, if we have the
DisplayDate components called like the following:
<div> <DisplayDate : </div>
And we have the declaration like we already defined up there (
<span>{{ $options.methods.format(props.date) }}</span>), The
aria-label prop will be ignored. But if we declare the
DisplayDate like the following, the extra attributes passed to the
DisplayDate will be applied to the span, as we indicated.
<template functional> <span v-{{ $options.methods.format(props.date) }}</span> </template>
But as of course we are in functional land; nothing is easy, and the API is different 🤷♂️. When we are talking about functional components, now the
data.attrs only contains the attributes passed down to the component but only the one not declared on the props, in the non-functional the
$attrs have the value of
{ date: '...', ariaLabel: '...' }, on the functional, the
data.attrs have the value of
{ ariaLabel: '...' } and the
props have
{ date: '...' }.
$listeners
Same thing with the
$listeners, but for events. That means, when you try to apply
@click event to a component, but you haven't declared this explicitly, it will not work, unless you use the
$listeners to proxy the listeners handling to a different element or component.
<!-- this is explicitly declaration --> <button @Click me</button> <!-- this is the 'proxing' declaration --> <button v-Click me</button> <!-- this is the 'proxing' declaration for functional components --> <button v-Click me</button>
There is, once more, a different between the functional and non-functional components API for this. The non-functional components deal with
.native events automagically, while the functional component is not sure if there's even a root element to apply the
.native events, so Vue exposes the
data.nativeOn property for you to handle the
.native events the you want.
Outside declared css classes on the component
<MyTitle title="Let's go to the mall, today!" class="super-bold-text" />
Another problem you may face, is about classes. Normally in Vue (as of today), when you pass a class to a custom component of yours, without explicitly configuring anything, it will be applied to the root element of your component, differently from react that it's explicit where the class is going.
Take the example above — assuming the css class does what it says it does and the title had no
text-weight defined in the css and it is a non-functional component — the title would display as a bold text.
Now if we edit the
MyTitle component like the following, transforming it to a functional component, the rendered text wouldn't be bold anymore, and that may feel very frustrating, I know because I felt it that way 😅.
-<template> +<template functional> <span> - {{ title }} + {{ props.title }} </span> </template> <script> export default props: ['title'] // disclaimer: I don't recommend the array syntax for this } </script>
And thats because... thats just because we are using functional components, and they are the way they are... 🤷♂️. Now, serious, to make this work you will have to add a little more code, it's nothing, really:
@@ -0,5 +0,5 @@ <template functional> - <span> + <span : {{ props.title }} </span> </template>
The
data.staticClass represents all the classes passed down to your component (I assume only the not dynamic ones, will check it later, hopefully I will remember to edit the post). So what you can do is use this variable to merge with other classes you may be declaring:
<span : {{ props.title }} </span>
Custom component inside the functional component
So here we have a problem. One that I don't know how to solve gracefully. Custom components can't be declared inside functional components, at least not in the way you'd expect. The
components property on the vue export:
<template functional> <MyCustomComponents1> I'd better be sailing </MyCustomComponents1> </template> <script> export default { components: { // <- this here MyCustomComponents1, } } </script>
Just doesn't work. It would display the bare text "I'd better be sailing", because it cannot render an unknown component.
Despite it being declared down there, Vue just doesn't look to that property, and even worse, it doesn't even say anything, like a warning or an error: "Warning, components are not registrable on functional components" or something. The
components property is useless.
Now, there are people who already raised this issue and that come up with a workaround to that problem, but I don't really like how it looks 😅, I mean, take a look at it:
<template> <component : I'd better be sailing </component> </template> <script> import MyCustomComponents1 from '...' export default { inject: { components: { default: { MyCustomComponents1, } } } } </script>
There is also the option to register all the components you'll need in the global scope or to register the components you need on the parent that will host your functional component.
The latter is not a sane option because it makes the two components — the parent and the functional component — very tightly coupled, which is generally a bad idea.
import Vue from 'vue' import MyCustomComponents1 from '...' // And so on... Vue.component('MyCustomComponents1', MyCustomComponents1) Vue.component('AndSoOn', AndSoOn) //... new Vue({ el: '#app', // ... });
This problem leads me to think functional components weren't thought out to be used with the template syntax, because the only reasonable approach to use custom components inside functional ones is to use the render function, look at that, it's elegant:
import MyCustomComponents1 from '...' //... render(h) { return h(MyCustomComponents1, {}, ['I\'d better be sailing']) }
What is wrong with all this?
What you have to imagine when you are doing functional template, is like you are writing a function that returns a JSX syntax, and the Vue Loader is calling your template more or less like this:
render(h, { data, listeners, $options, /* the rest of the exposed variables...*/ }) { return ( <template functional> <component : {{ $options.methods.format(props.date) }} </component> </template> ) },
So we have access to those parameters, and nothing else. The problem with this is, when you are using a functional component with the render function syntax or with JSX, you have access to the body of the function to do destructuring, contextualization, separate things, process data, like the following.
import MyCustomComponents1 from '...' import { format } from '...' render(h, { data, listeners }) { const { date } = data.props // this is not proper JSX, but I hope you get the point return ( <template functional> <MyCustomComponents1 v- {{ format(date) }} </MyCustomComponents1> </template> ) },
This is a very small example, but I hope I can get the idea through. And the component markup syntax went back to being simple and easy to read, but when you are using the template syntax with vue functional component, you don't have access to this part of the function.
Future?
I really just hope that that controversial Request for Comments (EDIT: this was updated and now we are talking about this one) will live to see light and we get this better syntax that have all the benefits of performance and readability we all want.
Anyway, I hope I could help you with any problems you may be facing, I had a hard time searching for some information in there, I hope with this post you'll have less of a hard time. Thanks for reading till here, I hope you are having an awesome day, see you next time.
Posted on by:
Read Next
How to Reduce Your Vue.JS Bundle Size With Webpack
Jennifer Bland -
Vue's Darkest Day
Daniel Elkington -
Adding real-time updates to your Laravel and Vue apps with laravel-websockets
Andrew Schmelyun -
Discussion
It has been a while but maybe someone will benefit from it:
You don't have to declare your methods in a
methodsproperty.
and then in a template
Yeah, you are right, at the time I hadn't realize that yet. Thanks for your comment :D
@vinicius - how does the composition API address the problem? composition-api.vuejs.org/
First of all, sorry for the late reply :(
But about your comment: Vue 3 does not recommend using functional components anymore because of all the optimizations done to stateful components, rendering the performance benefits of functional components "negligible" (as they say). You can see more information about this in the link bellow.
v3.vuejs.org/guide/migration/funct...
That addressed, I will admit that I didn't look into how Vue 3 + Composition API + functional components would work in regards to the issues I pointed out, just because I don't intend on using them anymore :D
If I can help you any further, I will be happy to help. | https://dev.to/vhoyer/functional-components-in-vue-js-20fl | CC-MAIN-2020-40 | refinedweb | 2,387 | 60.45 |
Hi,
We are using Large Data Type (LDT) bins to store strings (user IDs, 36 bytes each) on a daily basis. Each day a new bin is created and users are added to the LList. Example:
20151020 => LList: { user1, user2, user3 .. user n } 20151021 => LList: { user1, user4, user10 .. user n }
Each day can potentially hold millions of users. Our understanding of Large Lists is that they are not bound to their parent record with regard to size limits, so that should not be a problem in this case. The namespace is configured with a 128K block size, and the set that stores those users contains another 5 bins, which are quite small in size.
This particular case requires a variable number of LDT bins, but the plan is to store up to 6 months' worth of user data (~180 bins). However, after about 20 days (i.e. 20 LDT bins) we began seeing errors in the logs:
1427: LDT-TOP Record Update Error
We initially thought that Aerospike was failing while updating records on the LList (e.g. adding users), but that turned out to be incorrect: it was in fact failing while adding a new LDT bin. We then tried to add a normal bin (a String), which also failed, but this time with a different error message (AEROSPIKE_ERR_RECORD_TOO_BIG).
Has anyone come across similar issues with LDTs? We are obviously hitting some sort of limit, but it is not clear to us whether our approach is wrong or whether the product has limitations/bugs relating to a large number of LDT bins.
The version we are running is “Aerospike Community Edition build 3.5.15”. | https://discuss.aerospike.com/t/record-size-limit-with-many-ldt-bins/2029 | CC-MAIN-2018-30 | refinedweb | 272 | 78.89 |
On Mon, 30 Jun 1997, Foteos Macrides wrote:

> The v0.8.0 SSLeay distribution is a major change in API, and
> is too buggy for serious use. For example, its crypto.c has a dangling
> #endif which may or may not signal a missing #ifdef (I'm not sure yet).
> Its ssl.c wants to include a pxy_ssl.c which isn't in the v0.8.0
> distribution. As Tom posted, you should continue to use a v0.6.n
> version (there isn't a v0.7 :) with Lynx. I've been playing with the
> v0.8.0 library, and so there is code for it, but it's #ifdef'ed for
> an OLD_SSL_LIB compilation symbol which you should define in the
> Makefile (Unix) or libmakessl.com (VMS) for compiling and linking
> with an v0.6.n library.

Code releases are like frozen waffles, you should always throw the first one away :).

The two problems you describe come from a problem with scripts used to generate the master files for building monolithic crypto.o and ssl.o objects, or for their method of building a shared library. It also probably applies to your current platform. Removing the trailing #endif and the #include <pxy_ssl.c> line lets things compile properly.

The only reason you might want to use 0.8.0 is to test SSLv3 support (and v2 backdown). As I hinted, I now have proxies that work under Linux (both DEC alpha and x86!) using 0.8.0 and I will email them under the same terms (to US or Canadian citizens, don't violate RSA's patents, etc.).

;
; To UNSUBSCRIBE: Send a mail message to address@hidden
; with "unsubscribe lynx-dev" (without the
; quotation marks) on a line by itself.
;
This tutorial depends on step-2.
This is the first example where we actually use finite elements to compute something. We will solve a simple version of Poisson's equation with zero boundary values, but a nonzero right hand side:
\begin{align*} -\Delta u &= f \qquad\qquad & \text{in}\ \Omega, \\ u &= 0 \qquad\qquad & \text{on}\ \partial\Omega. \end{align*}
We will solve this equation on the square, \(\Omega=[-1,1]^2\), for which you've already learned how to generate a mesh in step-1 and step-2. In this program, we will also only consider the particular case \(f(\mathbf x)=1\) and come back to how to implement the more general case in the next tutorial program, step-4.
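As a quick reminder of step-1, a mesh for this square domain can be created in just a few lines. (This is a sketch; the helper-function name and the number of global refinement steps are arbitrary choices for illustration, not something prescribed by this program.)

```cpp
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>

using namespace dealii;

void make_square_grid(Triangulation<2> &triangulation)
{
  // The square [-1,1]^2, refined uniformly five times into 32x32 cells.
  GridGenerator::hyper_cube(triangulation, -1, 1);
  triangulation.refine_global(5);
}
```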
If you've learned about the basics of the finite element method, you will remember the steps we need to take to approximate the solution \(u\) by a finite dimensional approximation. Specifically, we first need to derive the weak form of the equation above, which we obtain by multiplying the equation by a test function \(\varphi\) from the left (we will come back to the reason for multiplying from the left and not from the right below) and integrating over the domain \(\Omega\):
\begin{align*} -\int_\Omega \varphi \Delta u = \int_\Omega \varphi f. \end{align*}
This can be integrated by parts:
\begin{align*} \int_\Omega \nabla\varphi \cdot \nabla u - \int_{\partial\Omega} \varphi \mathbf{n}\cdot \nabla u = \int_\Omega \varphi f. \end{align*}
The test function \(\varphi\) has to satisfy the same kind of boundary conditions (in mathematical terms: it needs to come from the tangent space of the set in which we seek the solution), so on the boundary \(\varphi=0\) and consequently the weak form we are looking for reads
\begin{align*} (\nabla\varphi, \nabla u) = (\varphi, f), \end{align*}
where we have used the common notation \((a,b)=\int_\Omega a\; b\). The problem then asks for a function \(u\) for which this statement is true for all test functions \(\varphi\) from the appropriate space (which here is the space \(H^1\)).
Of course we can't find such a function on a computer in the general case, and instead we seek an approximation \(u_h(\mathbf x)=\sum_j U_j \varphi_j(\mathbf x)\), where the \(U_j\) are unknown expansion coefficients we need to determine (the "degrees of freedom" of this problem), and \(\varphi_i(\mathbf x)\) are the finite element shape functions we will use. To define these shape functions, we need the following:
- A mesh on which to define the shape functions — the Triangulation we generated in step-1.
- A finite element that describes what the shape functions look like on a single reference cell.
- An enumeration of all the degrees of freedom on the mesh — the job of the DoFHandler we saw in step-2, which couples the mesh and the finite element.
Through these steps, we now have a set of functions \(\varphi_i\), and we can define the weak form of the discrete problem: Find a function \(u_h\), i.e., find the expansion coefficients \(U_j\) mentioned above, so that
\begin{align*} (\nabla\varphi_i, \nabla u_h) = (\varphi_i, f), \qquad\qquad i=0\ldots N-1. \end{align*}
Note that we here follow the convention that everything is counted starting at zero, as common in C and C++. This equation can be rewritten as a linear system if you insert the representation \(u_h(\mathbf x)=\sum_j U_j \varphi_j(\mathbf x)\) and then observe that
\begin{align*} (\nabla\varphi_i, \nabla u_h) &= \left(\nabla\varphi_i, \nabla \Bigl[\sum_j U_j \varphi_j\Bigr]\right) \\ &= \sum_j \left(\nabla\varphi_i, \nabla \left[U_j \varphi_j\right]\right) \\ &= \sum_j \left(\nabla\varphi_i, \nabla \varphi_j \right) U_j. \end{align*}
With this, the problem reads: Find a vector \(U\) so that
\begin{align*} A U = F, \end{align*}
where the matrix \(A\) and the right hand side \(F\) are defined as
\begin{align*} A_{ij} &= (\nabla\varphi_i, \nabla \varphi_j), \\ F_i &= (\varphi_i, f). \end{align*}
Before we move on with describing how these quantities can be computed, note that if we had multiplied the original equation from the right by a test function rather than from the left, then we would have obtained a linear system of the form
\begin{align*} U^T A = F^T \end{align*}
with a row vector \(F^T\). By transposing this system, this is of course equivalent to solving
\begin{align*} A^T U = F \end{align*}
which here is the same as above since \(A=A^T\). But in general it is not, and in order to avoid any sort of confusion, experience has shown that simply getting into the habit of multiplying the equation from the left rather than from the right (as is often done in the mathematical literature) avoids a common class of errors as the matrix is automatically correct and does not need to be transposed when comparing theory and implementation. See step-9 for the first example in this tutorial where we have a non-symmetric bilinear form for which it makes a difference whether we multiply from the right or from the left.
Now we know what we need (namely: objects that hold the matrix and vectors, as well as ways to compute \(A_{ij},F_i\)), and we can look at what it takes to make that happen:
To compute the entries of \(A\) and \(F\), we first split the integrals into contributions from each cell \(K\) of the mesh \({\mathbb T}\):

\begin{align*} A_{ij} &= (\nabla\varphi_i, \nabla \varphi_j) = \sum_{K \in {\mathbb T}} \int_K \nabla\varphi_i \cdot \nabla \varphi_j, \\ F_i &= (\varphi_i, f) = \sum_{K \in {\mathbb T}} \int_K \varphi_i f, \end{align*}

and then approximate each cell's contribution by quadrature:
\begin{align*} A^K_{ij} &= \int_K \nabla\varphi_i \cdot \nabla \varphi_j \approx \sum_q \nabla\varphi_i(\mathbf x^K_q) \cdot \nabla \varphi_j(\mathbf x^K_q) w_q^K, \\ F^K_i &= \int_K \varphi_i f \approx \sum_q \varphi_i(\mathbf x^K_q) f(\mathbf x^K_q) w^K_q, \end{align*}

where \(\mathbf x^K_q\) is the \(q\)th quadrature point on cell \(K\), and \(w^K_q\) the \(q\)th quadrature weight. There are different parts to what is needed in doing this, and we will discuss them in turn next.
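Before turning to those pieces, the cell-wise formulas above can be made concrete with a self-contained one-dimensional analogue (not deal.II code; all names are illustrative): piecewise-linear elements on a uniform mesh of \([0,1]\) for \(-u''=1\), \(u(0)=u(1)=0\). Each cell contributes a 2×2 matrix and a 2-vector, evaluated with one-point quadrature — which is exact here, since the shape-function gradients are constant on each cell. In 1D this Galerkin solution happens to be nodally exact, so \(u_h(0.5)=x(1-x)/2\big|_{x=1/2}=1/8\).

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Assemble and solve -u'' = 1 on (0,1), u(0) = u(1) = 0, with n_cells
// linear elements, and return the approximate value u_h(0.5).
// (n_cells is assumed even so that x = 0.5 is a mesh node.)
double solve_poisson_1d(int n_cells)
{
  const double h      = 1.0 / n_cells;
  const int    n_dofs = n_cells + 1;

  // Global symmetric tridiagonal matrix stored as two diagonals, plus rhs.
  std::vector<double> diag(n_dofs, 0.0), off(n_dofs - 1, 0.0), rhs(n_dofs, 0.0);

  // Loop over cells; each cell K contributes A^K = (1/h)[1 -1; -1 1] and,
  // since f = 1, the load vector F^K = (h/2)[1; 1]; scatter both globally.
  for (int K = 0; K < n_cells; ++K)
    {
      diag[K]     += 1.0 / h;
      diag[K + 1] += 1.0 / h;
      off[K]      += -1.0 / h;
      rhs[K]      += h / 2.0;
      rhs[K + 1]  += h / 2.0;
    }

  // Impose u(0) = u(1) = 0 by replacing the boundary rows with the identity.
  diag[0] = diag[n_dofs - 1] = 1.0;
  off[0] = off[n_dofs - 2] = 0.0;
  rhs[0] = rhs[n_dofs - 1] = 0.0;

  // Thomas algorithm for the tridiagonal system A U = F.
  std::vector<double> c(n_dofs, 0.0), d(n_dofs, 0.0);
  c[0] = off[0] / diag[0];
  d[0] = rhs[0] / diag[0];
  for (int i = 1; i < n_dofs; ++i)
    {
      const double m = diag[i] - off[i - 1] * c[i - 1];
      if (i < n_dofs - 1)
        c[i] = off[i] / m;
      d[i] = (rhs[i] - off[i - 1] * d[i - 1]) / m;
    }
  std::vector<double> U(n_dofs, 0.0);
  U[n_dofs - 1] = d[n_dofs - 1];
  for (int i = n_dofs - 2; i >= 0; --i)
    U[i] = d[i] - c[i] * U[i + 1];

  return U[n_dofs / 2]; // value at the midpoint x = 0.5
}
```

The cell loop with its scatter into the global matrix is exactly the pattern the deal.II classes below implement in arbitrary dimension.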
FEValues really is the central class in the assembly process. One way you can view it is as follows: The FiniteElement and derived classes describe shape functions, i.e., infinite-dimensional objects: functions have values at every point. We need this for theoretical reasons because we want to perform our analysis with integrals over functions. However, for a computer, this is a very difficult concept, since they can in general only deal with a finite amount of information, and so we replace integrals by sums over quadrature points that we obtain by mapping (the Mapping object) using points defined on a reference cell (the Quadrature object) onto points on the real cell. In essence, we reduce the problem to one where we only need a finite amount of information, namely shape function values and derivatives, quadrature weights, normal vectors, etc., exclusively at a finite set of points. The FEValues class is the one that brings the three components together and provides this finite set of information on a particular cell \(K\). You will see it in action when we assemble the linear system below.
It is noteworthy that all of this could also be achieved if you simply created these three objects yourself in an application program, and juggled the information yourself. However, this would neither be simpler (the FEValues class provides exactly the kind of information you actually need) nor faster: the FEValues class is highly optimized to only compute on each cell the particular information you need; if anything can be re-used from the previous cell, then it will do so, and there is a lot of code in that class to make sure things are cached wherever this is advantageous.
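Concretely, the assembly loop takes roughly the following shape in deal.II. This is a sketch in the spirit of step-3's assemble_system(): it uses the class members introduced further below, and exact spellings (e.g. fe.dofs_per_cell vs. fe.n_dofs_per_cell()) vary somewhat between library versions.

```cpp
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>

// fe, dof_handler, system_matrix, and system_rhs are class members
// that are set up elsewhere in the program.
void Step3::assemble_system()
{
  QGauss<2>   quadrature_formula(2); // 2x2 Gauss points per cell
  FEValues<2> fe_values(fe,
                        quadrature_formula,
                        update_values | update_gradients | update_JxW_values);

  const unsigned int dofs_per_cell = fe.dofs_per_cell;
  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double>     cell_rhs(dofs_per_cell);
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit(cell); // compute shape data on this particular cell
      cell_matrix = 0;
      cell_rhs    = 0;

      for (unsigned int q = 0; q < quadrature_formula.size(); ++q)
        for (unsigned int i = 0; i < dofs_per_cell; ++i)
          {
            for (unsigned int j = 0; j < dofs_per_cell; ++j)
              cell_matrix(i, j) += fe_values.shape_grad(i, q) * // grad phi_i
                                   fe_values.shape_grad(j, q) * // grad phi_j
                                   fe_values.JxW(q);            // dx
            cell_rhs(i) += fe_values.shape_value(i, q) // phi_i
                           * 1.0                       // f(x) = 1
                           * fe_values.JxW(q);         // dx
          }

      // Scatter the local contributions into the global objects.
      cell->get_dof_indices(local_dof_indices);
      for (unsigned int i = 0; i < dofs_per_cell; ++i)
        {
          for (unsigned int j = 0; j < dofs_per_cell; ++j)
            system_matrix.add(local_dof_indices[i],
                              local_dof_indices[j],
                              cell_matrix(i, j));
          system_rhs(local_dof_indices[i]) += cell_rhs(i);
        }
    }
}
```

Note how the quadrature sums from the formulas above appear verbatim as the loop over q, and how FEValues delivers exactly the finite set of information (shape_grad, shape_value, JxW) discussed in the text.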
The final piece of this introduction is to mention that after a linear system is obtained, it is solved using an iterative solver and then postprocessed: we create an output file using the DataOut class that can then be visualized using one of the common visualization programs.
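These two postprocessing steps might look roughly like the following. This is a hedged sketch rather than verbatim library documentation: it assumes the matrix, vector, and DoFHandler class members discussed in this program, and the iteration count and tolerance are arbitrary illustrative choices.

```cpp
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/numerics/data_out.h>
#include <fstream>

void Step3::solve()
{
  // The matrix A is symmetric positive definite for this problem, so the
  // Conjugate Gradient method applies; no preconditioning for simplicity.
  SolverControl            solver_control(1000, 1e-12);
  SolverCG<Vector<double>> solver(solver_control);
  solver.solve(system_matrix, solution, system_rhs, PreconditionIdentity());
}

void Step3::output_results() const
{
  DataOut<2> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "solution");
  data_out.build_patches();

  std::ofstream output("solution.vtk"); // viewable in ParaView or VisIt
  data_out.write_vtk(output);
}
```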
Although this is the simplest possible equation you can solve using the finite element method, this program shows the basic structure of most finite element programs and also serves as the template that almost all of the following programs will essentially follow. Specifically, the main class of this program looks like this:
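A sketch of that class declaration, reconstructed from the member functions and variables described in the following paragraphs (the actual step-3 source may differ in details such as include files or constructor arguments):

```cpp
#include <deal.II/grid/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/lac/sparsity_pattern.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

class Step3
{
public:
  Step3();
  void run();

private:
  void make_grid();
  void setup_system();
  void assemble_system();
  void solve();
  void output_results() const;

  // The mesh, the finite element, and the enumeration of unknowns:
  Triangulation<2> triangulation;
  FE_Q<2>          fe;
  DoFHandler<2>    dof_handler;

  // The linear algebra objects that survive the whole program:
  SparsityPattern      sparsity_pattern;
  SparseMatrix<double> system_matrix;

  Vector<double> solution;
  Vector<double> system_rhs;
};
```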
This follows the object-oriented programming mantra of data encapsulation, i.e., we do our best to hide almost all internal details of this class in private members that are not accessible to the outside.
Let's start with the member variables: These follow the building blocks we have outlined above in the bullet points, namely we need a Triangulation and a DoFHandler object, and a finite element object that describes the kinds of shape functions we want to use. The second group of objects relate to the linear algebra: the system matrix and right hand side as well as the solution vector, and an object that describes the sparsity pattern of the matrix. This is all this class needs (and the essentials that any solver for a stationary PDE requires) and that needs to survive throughout the entire program. In contrast to this, the FEValues object we need for assembly is only required throughout assembly, and so we create it as a local object in the function that does that and destroy it again at its end.
Secondly, let's look at the member functions. These, as well, already form the common structure that almost all following tutorial programs will use:
make_grid(): This is what one could call a preprocessing function. As its name suggests, it sets up the object that stores the triangulation. In later examples, it could also deal with boundary conditions, geometries, etc.
setup_system(): This then is the function in which all the other data structures are set up that are needed to solve the problem. In particular, it will initialize the DoFHandler object and correctly size the various objects that have to do with the linear algebra. This function is often separated from the preprocessing function above because, in a time dependent program, it may be called at least every few time steps whenever the mesh is adaptively refined (something we will see how to do in step-6). On the other hand, setting up the mesh itself in the preprocessing function above is done only once at the beginning of the program and is, therefore, separated into its own function.
assemble_system(): This, then is where the contents of the matrix and right hand side are computed, as discussed at length in the introduction above. Since doing something with this linear system is conceptually very different from computing its entries, we separate it from the following function.
solve(): This then is the function in which we compute the solution \(U\) of the linear system \(AU=F\). In the current program, this is a simple task since the matrix is so simple, but it will become a significant part of a program's size whenever the problem is not so trivial any more (see, for example, step-20, step-22, or step-31 once you've learned a bit more about the library).
output_results(): Finally, when you have computed a solution, you probably want to do something with it. For example, you may want to output it in a format that can be visualized, or you may want to compute quantities you are interested in: say, heat fluxes in a heat exchanger, air friction coefficients of a wing, maximum bridge loads, or simply the value of the numerical solution at a point. This function is therefore the place for postprocessing your solution.
All of this is held together by the single public function (other than the constructor), namely the run() function. It is the one that is called from the place where an object of this type is created, and it is the one that calls all the other functions in their proper order. Encapsulating this operation into the run() function, rather than calling all the other functions from main(), makes sure that you can change how the separation of concerns within this class is implemented. For example, if one of the functions becomes too big, you can split it up into two, and the only places you have to be concerned about changing as a consequence are within this very same class, and not anywhere else.
As mentioned above, you will see this general structure — sometimes with variants in spelling of the functions' names, but in essentially this order of separation of functionality — again in many of the following tutorial programs.
deal.II defines a number of integral types via alias in namespace types. (In the previous sentence, the word "integral" is used as the adjective that corresponds to the noun "integer". It shouldn't be confused with the noun "integral" that represents the area or volume under a curve or surface. The adjective "integral" is widely used in the C++ world in contexts such as "integral type", "integral constant", etc.) In particular, in this program you will see types::global_dof_index in a couple of places: an integer type that is used to denote the global index of a degree of freedom, i.e., the index of a particular degree of freedom within the DoFHandler object that is defined on top of a triangulation (as opposed to the index of a particular degree of freedom within a particular cell). For the current program (as well as almost all of the tutorial programs), you will have a few thousand to maybe a few million unknowns globally (and, for \(Q_1\) elements, you will have 4 locally on each cell in 2d and 8 in 3d). Consequently, a data type that can store sufficiently large numbers for global DoF indices is unsigned int, given that it can represent numbers between 0 and slightly more than 4 billion (on most systems, where integers are 32-bit). In fact, this is what types::global_dof_index is.
So, why not just use unsigned int right away? deal.II used to do this until version 7.3. However, deal.II supports very large computations (via the framework discussed in step-40) that may have more than 4 billion unknowns when spread across a few thousand processors. Consequently, there are situations where unsigned int is not sufficiently large and we need a 64-bit unsigned integral type. To make this possible, we introduced types::global_dof_index, which by default is defined as simply unsigned int, whereas it is possible to define it as unsigned long long int if necessary, by passing a particular flag during configuration (see the ReadMe file).
This covers the technical aspect. But there is also a documentation purpose: everywhere in the library and codes that are built on it, if you see a place using the data type types::global_dof_index, you immediately know that the quantity that is being referenced is, in fact, a global dof index. No such meaning would be apparent if we had just used unsigned int (which may also be a local index, a boundary indicator, a material id, etc.). Immediately knowing what a variable refers to also helps avoid errors: it's quite clear that there must be a bug if you see an object of type types::global_dof_index being assigned to a variable of type types::subdomain_id, even though they are both represented by unsigned integers and the compiler will, consequently, not complain.
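The idea can be sketched in a few lines of standalone C++. This is a schematic illustration only, not deal.II's actual definition (which lives in its types namespace and is switched by a configuration flag); the macro name USE_64BIT_INDICES below is made up:

```cpp
#include <cstdint>
#include <limits>

// Schematic sketch, not deal.II's actual code: an integer alias that
// documents its purpose and can be widened at configure time.
// (The macro name USE_64BIT_INDICES is hypothetical.)
#ifdef USE_64BIT_INDICES
using global_dof_index = std::uint64_t; // for more than ~4 billion DoFs
#else
using global_dof_index = std::uint32_t; // the default: 0 .. 4294967295
#endif

// A second alias with the same underlying type: the compiler will not
// complain if the two are mixed up, but a human reader will.
using subdomain_id = std::uint32_t;

// A sentinel value, analogous in spirit to deal.II's invalid indices:
constexpr global_dof_index invalid_dof_index =
  std::numeric_limits<global_dof_index>::max();
```

The aliases carry no more information for the compiler than the underlying integers do, but they make the intent of every variable visible at its declaration.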
In more practical terms what the presence of this type means is that during assembly, we create a \(4\times 4\) matrix (in 2d, using a \(Q_1\) element) of the contributions of the cell we are currently sitting on, and then we need to add the elements of this matrix to the appropriate elements of the global (system) matrix. For this, we need to get at the global indices of the degrees of freedom that are local to the current cell, for which we will always use the following piece of the code:
where
local_dof_indices is declared as
The name of this variable might be a bit of a misnomer – it stands for "the global indices of those degrees of freedom locally defined on the current cell" – but variables that hold this information are universally named this way throughout the library.
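Stripped of the deal.II classes, the local-to-global transfer that uses this variable looks roughly as follows. This is a simplified standalone sketch; the real program uses FullMatrix, SparseMatrix, and cell->get_dof_indices() instead of the plain containers shown here:

```cpp
#include <vector>

using global_dof_index = unsigned int; // stand-in for types::global_dof_index

// Add a cell's local contribution into a global matrix (dense here, for
// simplicity), translating local indices 0..dofs_per_cell-1 into global ones.
void distribute_local_to_global(
  const std::vector<std::vector<double>> &cell_matrix,
  const std::vector<global_dof_index>    &local_dof_indices,
  std::vector<std::vector<double>>       &system_matrix)
{
  const unsigned int dofs_per_cell = local_dof_indices.size();
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
    for (unsigned int j = 0; j < dofs_per_cell; ++j)
      system_matrix[local_dof_indices[i]][local_dof_indices[j]] +=
        cell_matrix[i][j];
}
```

The double loop runs over local indices only; the translation to global positions happens solely through the lookup in local_dof_indices.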
These include files are already known to you. They declare the classes which handle triangulations and enumeration of degrees of freedom:
And this is the file in which the functions are declared that create grids:
This file contains the description of the Lagrange interpolation finite element:
And this file is needed for the creation of sparsity patterns of sparse matrices, as shown in previous examples:
The next two files are needed for assembling the matrix using quadrature on each cell. The classes declared in them will be explained below:
The following three include files we need for the treatment of boundary values:
We're now almost to the end. The second to last group of include files is for the linear algebra which we employ to solve the system of equations arising from the finite element discretization of the Laplace equation. We will use vectors and full matrices for assembling the system of equations locally on each cell, and transfer the results into a sparse matrix. We will then use a Conjugate Gradient solver to solve the problem, for which we need a preconditioner (in this program, we use the identity preconditioner which does nothing, but we need to include the file anyway):
Finally, this is for output to a file and to the console:
...and this is to import the deal.II namespace into the global scope:
The Step3 class
Instead of the procedural programming of previous examples, we encapsulate everything into a class for this program. The class consists of functions which each perform certain aspects of a finite element program, a main function which controls what is done first and what is done next, and a list of member variables.
The public part of the class is rather short: it has a constructor and a function run that is called from the outside and acts as something like the main function: it coordinates which operations of this class shall be run in which order. Everything else in the class, i.e. all the functions that actually do anything, are in the private section of the class:
Then there are the member functions that mostly do what their names suggest and that have been discussed in the introduction already. Since they do not need to be called from outside, they are made private to this class.
And finally we have some member variables. There are variables describing the triangulation and the global numbering of the degrees of freedom (we will specify the exact polynomial degree of the finite element in the constructor of this class)...
...variables for the sparsity pattern and values of the system matrix resulting from the discretization of the Laplace equation...
...and variables which will hold the right hand side and solution vectors.
Here comes the constructor. It does little more than specify that we want bi-linear elements (denoted by the parameter to the finite element object, which indicates the polynomial degree), and associate the dof_handler variable with the triangulation we use. (Note that the triangulation isn't set up with a mesh at all at the present time, but the DoFHandler doesn't care: it only wants to know which triangulation it will be associated with, and it only starts to care about an actual mesh once you try to distribute degrees of freedom on the mesh using the distribute_dofs() function.) All the other member variables of the Step3 class have a default constructor which does all we want.
Now, the first thing we've got to do is to generate the triangulation on which we would like to do our computation and number each vertex with a degree of freedom. We have seen these two steps in step-1 and step-2 before, respectively.
This function does the first part, creating the mesh. We create the grid and refine all cells five times. Since the initial grid (which is the square \([-1,1] \times [-1,1]\)) consists of only one cell, the final grid has 32 times 32 cells, for a total of 1024.
Unsure that 1024 is the correct number? We can check that by outputting the number of cells using the n_active_cells() function on the triangulation.
Note that if you called triangulation.n_cells() instead in the code above, you would get a value of 1365. The reason is that this function counts all cells of the triangulation, not only the active ones: the mesh hierarchy keeps the cells of all coarser levels around, and \(1+4+16+64+256+1024=1365\). On the other hand, the number of cells (as opposed to the number of active cells) is not typically of much interest, so there is no good reason to print it.
Next we enumerate all the degrees of freedom and set up matrix and vector objects to hold the system data. Enumerating is done by using DoFHandler::distribute_dofs(), as we have seen in the step-2 example. Since we use the FE_Q class and have set the polynomial degree to 1 in the constructor, i.e. bilinear elements, this associates one degree of freedom with each vertex. While we're at generating output, let us also take a look at how many degrees of freedom are generated:
There should be one DoF for each vertex. Since we have a 32 times 32 grid, the number of DoFs should be 33 times 33, or 1089.
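The cell and DoF counts quoted in the preceding paragraphs follow from simple arithmetic, which we can check independently of deal.II in plain C++:

```cpp
// In 2d, each global refinement step replaces every cell by four
// children, and a Q1 element carries one DoF per vertex.
unsigned int cells_per_side(unsigned int n_refinements)
{
  return 1u << n_refinements; // 2^n
}

unsigned int n_active_cells(unsigned int n_refinements)
{
  const unsigned int s = cells_per_side(n_refinements);
  return s * s;
}

// All cells, i.e. the active ones plus those on all coarser levels:
unsigned int n_cells(unsigned int n_refinements)
{
  unsigned int total = 0;
  for (unsigned int level = 0; level <= n_refinements; ++level)
    total += n_active_cells(level);
  return total;
}

// One DoF per vertex for Q1 elements on a uniformly refined square:
unsigned int n_q1_dofs(unsigned int n_refinements)
{
  const unsigned int vertices_per_side = cells_per_side(n_refinements) + 1;
  return vertices_per_side * vertices_per_side;
}
```

With five refinements this gives 1024 active cells, 1365 cells in total, and 1089 degrees of freedom, matching the numbers the program prints.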
As we have seen in the previous example, we set up a sparsity pattern by first creating a temporary structure, tagging those entries that might be nonzero, and then copying the data over to the SparsityPattern object that can then be used by the system matrix.
Note that the SparsityPattern object does not hold the values of the matrix, it only stores the places where entries are. The entries themselves are stored in objects of type SparseMatrix, of which our variable system_matrix is one.
The distinction between sparsity pattern and matrix was made to allow several matrices to use the same sparsity pattern. This may not seem relevant here, but when you consider the size which matrices can have, and that it may take some time to build the sparsity pattern, this becomes important in large-scale problems if you have to store several matrices in your program.
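A miniature model of this separation could look like the following (hypothetical code, not deal.II's implementation): the pattern stores only where entries may sit, and each matrix stores only its values.

```cpp
#include <cstddef>
#include <vector>

// CSR-style sparsity pattern: for each row, where its entries start in
// col_index, and which columns those entries occupy.
struct Pattern
{
  std::vector<std::size_t> row_start;
  std::vector<std::size_t> col_index;
};

// A matrix refers to a shared pattern and adds one value per stored entry.
struct SparseMatrixSketch
{
  const Pattern      *pattern; // shared between matrices, not owned
  std::vector<double> values;

  explicit SparseMatrixSketch(const Pattern &p)
    : pattern(&p), values(p.col_index.size(), 0.0)
  {}
};
```

Several matrices built from the same Pattern then share a single copy of the (potentially large) index data, which is exactly the saving described above.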
The last thing to do in this function is to set the sizes of the right hand side vector and the solution vector to the right values:
The next step is to compute the entries of the matrix and right hand side that form the linear system from which we compute the solution. This is the central function of each finite element program and we have discussed the primary steps in the introduction already.
The general approach to assemble matrices and vectors is to loop over all cells, and on each cell compute the contribution of that cell to the global matrix and right hand side by quadrature. The point to realize now is that we need the values of the shape functions at the locations of quadrature points on the real cell. However, both the finite element shape functions as well as the quadrature points are only defined on the reference cell. They are therefore of little help to us, and we will in fact hardly ever query information about finite element shape functions or quadrature points from these objects directly.
Rather, what is required is a way to map this data from the reference cell to the real cell. Classes that can do that are derived from the Mapping class, though one again often does not have to deal with them directly: many functions in the library can take a mapping object as argument, but when it is omitted they simply resort to the standard bilinear Q1 mapping. We will go this route, and not bother with it for the moment (we come back to this in step-10, step-11, and step-12).
So what we now have is a collection of three classes to deal with: finite element, quadrature, and mapping objects. That's too much, so there is one type of class that orchestrates information exchange between these three: the FEValues class. If given one instance of each of these three objects (or two, and an implicit linear mapping), it will be able to provide you with information about values and gradients of shape functions at quadrature points on a real cell.
Using all this, we will assemble the linear system for this problem in the following function:
Ok, let's start: we need a quadrature formula for the evaluation of the integrals on each cell. Let's take a Gauss formula with two quadrature points in each direction, i.e. a total of four points since we are in 2D. This quadrature formula integrates polynomials of degrees up to three exactly (in 1D). It is easy to check that this is sufficient for the present problem:
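Indeed, that a two-point Gauss rule integrates cubics exactly is easy to verify numerically in plain, self-contained C++; on the reference interval \([-1,1]\) the nodes are \(\pm 1/\sqrt{3}\) with unit weights:

```cpp
#include <cmath>

// Two-point Gauss quadrature on [-1,1]: nodes +-1/sqrt(3), weights 1.
double gauss2(double (*f)(double))
{
  const double x = 1.0 / std::sqrt(3.0);
  return f(-x) + f(x);
}

// A full cubic; its exact integral over [-1,1] is 2/3 (only the x^2
// term contributes, the odd powers integrate to zero).
double cubic(double x)
{
  return 2 * x * x * x + x * x - 3 * x;
}
```

Applying gauss2 to any polynomial of degree at most three reproduces the exact integral up to rounding error; for a fourth-degree polynomial it would not.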
And we initialize the object which we have briefly talked about above. It needs to be told which finite element we want to use, and the quadrature points and their weights (jointly described by a Quadrature object). As mentioned, we use the implied Q1 mapping, rather than specifying one ourselves explicitly. Finally, we have to tell it what we want it to compute on each cell: we need the values of the shape functions at the quadrature points (for the right hand side \((\varphi_i,f)\)), their gradients (for the matrix entries \((\nabla \varphi_i, \nabla \varphi_j)\)), and also the weights of the quadrature points and the determinants of the Jacobian transformations from the reference cell to the real cells.
This list of what kind of information we actually need is given as a collection of flags as the third argument to the constructor of FEValues. Since these values have to be recomputed, or updated, every time we go to a new cell, all of these flags start with the prefix update_ and then indicate what it actually is that we want updated. The flag to give if we want the values of the shape functions computed is update_values; for the gradients it is update_gradients. The determinants of the Jacobians and the quadrature weights are always used together, so only the products (Jacobians times weights, or short JxW) are computed; since we need them, we have to list update_JxW_values as well:
The advantage of this approach is that we can specify what kind of information we actually need on each cell. It is easily understandable that this approach can significantly speed up finite element computations, compared to approaches where everything, including second derivatives, normal vectors to cells, etc are computed on each cell, regardless of whether they are needed or not.
The syntax update_values | update_gradients | update_JxW_values is not immediately obvious to anyone not used to programming bit operations in C for years already. First, operator| is the bitwise or operator, i.e., it takes two integer arguments that are interpreted as bit patterns and returns an integer in which every bit is set for which the corresponding bit is set in at least one of the two arguments. For example, consider the operation 9|10. In binary, 9=0b1001 (where the prefix 0b indicates that the number is to be interpreted as a binary number) and 10=0b1010. Going through each bit and seeing whether it is set in one of the arguments, we arrive at 0b1001|0b1010=0b1011 or, in decimal notation, 9|10=11. The second piece of information you need to know is that the various update_* flags are all integers that have exactly one bit set. For example, assume that update_values=0b00001=1, update_gradients=0b00010=2, and update_JxW_values=0b10000=16. Then update_values | update_gradients | update_JxW_values = 0b10011 = 19. In other words, we obtain a number that encodes a binary mask representing all of the operations you want to happen, where each operation corresponds to exactly one bit in the integer that, if equal to one, means that a particular piece should be updated on each cell and, if it is zero, means that we need not compute it. In other words, even though operator| is the bitwise OR operation, what it really represents is I want this AND that AND the other. Such binary masks are quite common in C programming, perhaps less so in higher level languages like C++, but they serve the current purpose quite well.
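The arithmetic of this note is easy to reproduce in code. The flag values below are made up for illustration; the actual deal.II UpdateFlags enumerators need not sit on these particular bits:

```cpp
// Hypothetical single-bit flag values, matching the note above:
enum UpdateFlags : unsigned int
{
  update_values     = 0b00001, //  1
  update_gradients  = 0b00010, //  2
  update_JxW_values = 0b10000  // 16
};

// Combining flags with operator| sets the union of their bits:
constexpr unsigned int flags =
  update_values | update_gradients | update_JxW_values; // 0b10011 == 19

// Testing a mask for a single flag uses bitwise AND:
constexpr bool needs_gradients = (flags & update_gradients) != 0;
constexpr bool needs_hessians  = (flags & 0b00100) != 0; // bit not set
```

This is the usual C idiom: OR to build a mask, AND to query it.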
For use further down below, we define a shortcut for a value that will be used very frequently. Namely, an abbreviation for the number of degrees of freedom on each cell (since we are in 2D and degrees of freedom are associated with vertices only, this number is four, but we rather want to write the definition of this variable in a way that does not preclude us from later choosing a different finite element that has a different number of degrees of freedom per cell, or work in a different space dimension).
In general, it is a good idea to use a symbolic name instead of hard-coding these numbers even if you know them, since for example, you may want to change the finite element at some time. Changing the element would have to be done in a different function and it is easy to forget to make a corresponding change in another part of the program. It is better to not rely on your own calculations, but instead ask the right object for the information: Here, we ask the finite element to tell us about the number of degrees of freedom per cell and we will get the correct number regardless of the space dimension or polynomial degree we may have chosen elsewhere in the program.
The shortcut here, defined primarily to discuss the basic concept and not because it saves a lot of typing, will then make the following loops a bit more readable. You will see such shortcuts in many places in larger programs, and dofs_per_cell is one that is more or less the conventional name for this kind of object.
Now, we said that we wanted to assemble the global matrix and vector cell-by-cell. We could write the results directly into the global matrix, but this is not very efficient since access to the elements of a sparse matrix is slow. Rather, we first compute the contribution of each cell in a small matrix with the degrees of freedom on the present cell, and only transfer them to the global matrix when the computations are finished for this cell. We do the same for the right hand side vector. So let's first allocate these objects (these being local objects, all degrees of freedom are coupling with all others, and we should use a full matrix object rather than a sparse one for the local operations; everything will be transferred to a global sparse matrix later on):
When assembling the contributions of each cell, we do this with the local numbering of the degrees of freedom (i.e. the number running from zero through dofs_per_cell-1). However, when we transfer the result into the global matrix, we have to know the global numbers of the degrees of freedom. When we query them, we need a scratch (temporary) array for these numbers (see the discussion at the end of the introduction for the type, types::global_dof_index, used here):
Now for the loop over all cells. We have seen before how this works for a triangulation. A DoFHandler has cell iterators that are exactly analogous to those of a Triangulation, but with extra information about the degrees of freedom for the finite element you're using. Looping over the active cells of a degree-of-freedom handler works the same as for a triangulation.
Note that we declare the type of the cell as const auto & instead of auto this time around. In step-1, we were modifying the cells of the triangulation by flagging them with refinement indicators. Here we're only examining the cells without modifying them, so it's good practice to declare cell as const in order to enforce this invariant.
We are now sitting on one cell, and we would like the values and gradients of the shape functions be computed, as well as the determinants of the Jacobian matrices of the mapping between reference cell and true cell, at the quadrature points. Since all these values depend on the geometry of the cell, we have to have the FEValues object re-compute them on each cell:
Next, reset the local cell's contributions to global matrix and global right hand side to zero, before we fill them:
Now it is time to start integration over the cell, which we do by looping over all quadrature points, numbered by q_index.
First assemble the matrix: For the Laplace problem, the matrix on each cell is the integral over the gradients of shape functions i and j. Since we do not integrate, but rather use quadrature, this is the sum over all quadrature points of the integrands times the determinant of the Jacobian matrix at the quadrature point times the weight of this quadrature point. You can get the gradient of shape function \(i\) at quadrature point with number q_index by using fe_values.shape_grad(i,q_index); this gradient is a 2-dimensional vector (in fact it is of type Tensor<1,dim>, with here dim=2) and the product of two such vectors is the scalar product, i.e. the product of the two shape_grad function calls is the dot product. This is in turn multiplied by the Jacobian determinant and the quadrature point weight (that one gets together by the call to FEValues::JxW()). Finally, this is repeated for all shape functions \(i\) and \(j\):
We then do the same thing for the right hand side. Here, the integral is over the shape function i times the right hand side function, which we choose to be the function with constant value one (more interesting examples will be considered in the following programs).
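The two quadrature sums just described can be written out completely for the simplest nontrivial setting, linear elements on a single 1d cell \([0,h]\). This is a hypothetical standalone reduction; the program itself does the analogous thing in 2d through FEValues:

```cpp
#include <array>
#include <cmath>

// Cell matrix (grad phi_i, grad phi_j) and right hand side (phi_i, 1)
// on the 1d cell [0,h] with linear shape functions and two-point Gauss
// quadrature mapped from the reference interval [-1,1].
struct CellSystem
{
  std::array<std::array<double, 2>, 2> matrix{};
  std::array<double, 2>                rhs{};
};

CellSystem assemble_cell(const double h)
{
  CellSystem cs;
  const double g = 1.0 / std::sqrt(3.0);
  // Quadrature points mapped to [0,h]; "JxW" = Jacobian (h/2) * weight (1):
  const std::array<double, 2> q_points = {h * (1 - g) / 2, h * (1 + g) / 2};
  const double JxW = h / 2.0;

  for (unsigned int q = 0; q < 2; ++q)
    {
      const double x = q_points[q];
      const std::array<double, 2> value = {1 - x / h, x / h}; // phi_i(x)
      const std::array<double, 2> grad  = {-1 / h, 1 / h};    // phi_i'(x)
      for (unsigned int i = 0; i < 2; ++i)
        {
          for (unsigned int j = 0; j < 2; ++j)
            cs.matrix[i][j] += grad[i] * grad[j] * JxW;
          cs.rhs[i] += value[i] * 1.0 * JxW; // right hand side f == 1
        }
    }
  return cs;
}
```

For any h this reproduces the well-known element stiffness matrix \([[1/h, -1/h], [-1/h, 1/h]]\) and load vector \([h/2, h/2]\), so the quadrature loop really does compute the integrals it stands for.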
Now that we have the contribution of this cell, we have to transfer it to the global matrix and right hand side. To this end, we first have to find out which global numbers the degrees of freedom on this cell have. Let's simply ask the cell for that information:
Then again loop over all shape functions i and j and transfer the local elements to the global matrix. The global numbers can be obtained using local_dof_indices[i]:
And again, we do the same thing for the right hand side vector.
Now almost everything is set up for the solution of the discrete system. However, we have not yet taken care of boundary values (in fact, Laplace's equation without Dirichlet boundary values is not even uniquely solvable, since you can add an arbitrary constant to the discrete solution). We therefore have to do something about the situation.
For this, we first obtain a list of the degrees of freedom on the boundary and the value the shape function shall have there. For simplicity, we only interpolate the boundary value function, rather than projecting it onto the boundary. There is a function in the library which does exactly this: VectorTools::interpolate_boundary_values(). Its parameters are (omitting parameters for which default values exist and that we don't care about): the DoFHandler object to get the global numbers of the degrees of freedom on the boundary; the component of the boundary where the boundary values shall be interpolated; the boundary value function itself; and the output object.
The component of the boundary is meant as follows: in many cases, you may want to impose certain boundary values only on parts of the boundary. For example, you may have inflow and outflow boundaries in fluid dynamics, or clamped and free parts of bodies in deformation computations of bodies. Then you will want to denote these different parts of the boundary by indicators, and tell the interpolate_boundary_values function to only compute the boundary values on a certain part of the boundary (e.g. the clamped part, or the inflow boundary). By default, all boundaries have a 0 boundary indicator, unless otherwise specified. If sections of the boundary have different boundary conditions, you have to number those parts with different boundary indicators. The function call below will then only determine boundary values for those parts of the boundary for which the boundary indicator is in fact the zero specified as the second argument.
The function describing the boundary values is an object of type Function or of a derived class. One of the derived classes is Functions::ZeroFunction, which describes (not unexpectedly) a function which is zero everywhere. We create such an object in-place and pass it to the VectorTools::interpolate_boundary_values() function.
Finally, the output object is a list of pairs of global degree of freedom numbers (i.e. the number of the degrees of freedom on the boundary) and their boundary values (which are zero here for all entries). This mapping of DoF numbers to boundary values is done by the std::map class.
Now that we got the list of boundary DoFs and their respective boundary values, let's use them to modify the system of equations accordingly. This is done by the following function call:
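Conceptually, that elimination step can be modeled on a small dense system. The following is a simplified sketch, not deal.II's actual implementation; MatrixTools::apply_boundary_values additionally eliminates the matching columns so that the matrix stays symmetric, which we skip here:

```cpp
#include <map>
#include <vector>

using DenseMatrix = std::vector<std::vector<double>>;

// Enforce u[k] = g for each pair (k, g) in boundary_values by replacing
// row k of the system A u = b with the trivial equation 1 * u[k] = g.
void apply_boundary_values_sketch(
  const std::map<unsigned int, double> &boundary_values,
  DenseMatrix                          &A,
  std::vector<double>                  &b)
{
  for (const auto &pair : boundary_values)
    {
      const unsigned int k = pair.first;
      for (double &entry : A[k])
        entry = 0.0;
      A[k][k] = 1.0;
      b[k]    = pair.second;
    }
}
```

After this call, any solution of the modified system automatically satisfies the prescribed boundary values.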
The following function simply solves the discretized equation. As the system is quite large for direct solvers such as Gaussian elimination or LU decomposition, we use a Conjugate Gradient algorithm. You should remember that the number of variables here (only 1089) is a very small number for finite element computations, where 100,000 is a more usual number. For this number of variables, direct methods are no longer usable and you are forced to use methods like CG.
First, we need to have an object that knows how to tell the CG algorithm when to stop. This is done by using a SolverControl object, and as stopping criterion we say: stop after a maximum of 1000 iterations (which is far more than is needed for 1089 variables; see the results section to find out how many were really used), and stop if the norm of the residual is below \(10^{-12}\). In practice, the latter criterion will be the one which stops the iteration:
Then we need the solver itself. The template parameter to the SolverCG class is the type of the vectors, and leaving the empty angle brackets would indicate that we are taking the default argument (which is Vector<double>). However, we explicitly mention the template argument:
Now solve the system of equations. The CG solver takes a preconditioner as its fourth argument. We don't feel ready to delve into this yet, so we tell it to use the identity operation as preconditioner:
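To see the stopping logic of SolverControl in action, here is a bare-bones conjugate gradient solver with exactly the two criteria above: at most 1000 iterations, or a residual norm below \(10^{-12}\). This is hypothetical standalone code; deal.II's SolverCG is of course far more general:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

double dot(const Vec &a, const Vec &b)
{
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    s += a[i] * b[i];
  return s;
}

Vec multiply(const Mat &A, const Vec &x)
{
  Vec y(x.size(), 0.0);
  for (std::size_t i = 0; i < A.size(); ++i)
    y[i] = dot(A[i], x);
  return y;
}

// Unpreconditioned CG for a symmetric positive definite matrix, with the
// same two stopping criteria as the SolverControl object in the text.
unsigned int solve_cg(const Mat &A, const Vec &b, Vec &x,
                      const unsigned int max_iterations = 1000,
                      const double tolerance = 1e-12)
{
  x.assign(b.size(), 0.0);
  Vec r = b; // residual for the initial guess x = 0
  Vec p = r;
  double rr = dot(r, r);
  unsigned int iteration = 0;
  while (iteration < max_iterations && std::sqrt(rr) > tolerance)
    {
      const Vec    Ap    = multiply(A, p);
      const double alpha = rr / dot(p, Ap);
      for (std::size_t i = 0; i < x.size(); ++i)
        {
          x[i] += alpha * p[i];
          r[i] -= alpha * Ap[i];
        }
      const double rr_new = dot(r, r);
      for (std::size_t i = 0; i < p.size(); ++i)
        p[i] = r[i] + (rr_new / rr) * p[i];
      rr = rr_new;
      ++iteration;
    }
  return iteration;
}
```

Whichever of the two criteria triggers first ends the loop; in practice, for a well-posed problem, it is the tolerance.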
Now that the solver has done its job, the solution variable contains the nodal values of the solution function.
The last part of a typical finite element program is to output the results and maybe do some postprocessing (for example compute the maximal stress values at the boundary, or the average flux across the outflow, etc). We have no such postprocessing here, but we would like to write the solution to a file.
To write the output to a file, we need an object which knows about output formats and the like. This is the DataOut class, and we need an object of that type:
Now we have to tell it where to take the values from which it shall write. We tell it which DoFHandler object to use, and the solution vector (and the name by which the solution variable shall appear in the output file). If we had more than one vector which we would like to look at in the output (for example right hand sides, errors per cell, etc) we would add them as well:
After the DataOut object knows which data it is to work on, we have to tell it to process them into something the back ends can handle. The reason is that we have separated the frontend (which knows about how to treat DoFHandler objects and data vectors) from the back end (which knows many different output formats) and use an intermediate data format to transfer data from the front- to the backend. The data is transformed into this intermediate format by the following function:
Now we have everything in place for the actual output. Just open a file and write the data into it, using VTK format (there are many other functions in the DataOut class we are using here that can write the data in postscript, AVS, GMV, Gnuplot, or some other file formats):
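To get a feel for what ends up in such a file, here is a toy writer for the legacy VTK format, for scalar data on a uniform \(n\times n\) point grid. This is a minimal hypothetical sketch; DataOut emits a richer variant of the format on the actual finite element mesh:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Write scalar point data on an n x n uniform grid over [0,1]^2 in the
// legacy VTK "STRUCTURED_POINTS" format (n >= 2 assumed).
std::string write_vtk(const std::vector<double> &values, const unsigned int n)
{
  std::ostringstream out;
  out << "# vtk DataFile Version 3.0\n"
      << "toy solution output\n"
      << "ASCII\n"
      << "DATASET STRUCTURED_POINTS\n"
      << "DIMENSIONS " << n << ' ' << n << " 1\n"
      << "ORIGIN 0 0 0\n"
      << "SPACING " << 1.0 / (n - 1) << ' ' << 1.0 / (n - 1) << " 1\n"
      << "POINT_DATA " << n * n << '\n'
      << "SCALARS solution double 1\n"
      << "LOOKUP_TABLE default\n";
  for (const double v : values)
    out << v << '\n';
  return out.str();
}
```

A file produced this way can already be opened in ParaView or VisIt, which illustrates why a well-documented plain-text format is so convenient for interoperability.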
Finally, the last function of this class is the main function which calls all the other functions of the Step3 class. The order in which this is done resembles the order in which most finite element programs work. Since the names are mostly self-explanatory, there is not much to comment about:
The main function
This is the main function of the program. Since the concept of a main function is mostly a remnant from the pre-object-oriented era before C++ programming, it often does not do much more than creating an object of the top-level class and calling its principal function.
Finally, the first line of the function is used to enable output of some diagnostics that deal.II can generate. The deallog variable (which stands for deal-log, not de-allog) represents a stream to which some parts of the library write output. For example, iterative solvers will generate diagnostics (starting residual, number of solver steps, final residual) as can be seen when running this tutorial program.
The output of deallog can be written to the console, to a file, or both. Both are disabled by default since over the years we have learned that a program should only generate output when a user explicitly asks for it. But this can be changed, and to explain how this can be done, we need to explain how deallog works: When individual parts of the library want to log output, they open a "context" or "section" into which this output will be placed. At the end of the part that wants to write output, one exits this section again. Since a function may call another one from within the scope where this output section is open, output may in fact be nested hierarchically into these sections. The LogStream class of which deallog is a variable calls each of these sections a "prefix" because all output is printed with this prefix at the left end of the line, with prefixes separated by colons. There is always a default prefix called "DEAL" (a hint at deal.II's history as the successor of a previous library called "DEAL" and from which the LogStream class is one of the few pieces of code that were taken into deal.II).
By default, logstream only outputs lines with zero prefixes – i.e., all output is disabled because the default "DEAL" prefix is always there. But one can set a different maximal number of prefixes for lines that should be output to something larger, and indeed here we set it to two by calling LogStream::depth_console(). This means that for all screen output, a context that has pushed one additional prefix beyond the default "DEAL" is allowed to print its output to the screen ("console"), whereas all further nested sections that would have three or more prefixes active would write to deallog, but deallog does not forward this output to the screen. Thus, running this example (or looking at the "Results" section), you will see the solver statistics prefixed with "DEAL:CG", which is two prefixes. This is sufficient for the context of the current program, but you will see examples later on (e.g., in step-22) where solvers are nested more deeply and where you may get useful information by setting the depth even higher.
The output of the program looks as follows:
The first two lines are what we wrote to cout. The last two lines were generated without our intervention by the CG solver. The first of those states the residual at the start of the iteration, while the last line tells us that the solver needed 47 iterations to bring the norm of the residual to 5.3e-13, i.e. below the threshold 1e-12 which we have set in the solve() function. We will show in the next program how to suppress this output, which is sometimes useful for debugging purposes, but often clutters up the screen display.
Apart from the output shown above, the program generated the file solution.vtk, which is in the VTK format that is widely used by many visualization programs today – including the two heavyweights VisIt and ParaView, which are the most commonly used programs for this purpose.
Using VisIt, it is not very difficult to generate a picture of the solution like this:
It shows both the solution and the mesh, elevated above the \(x\)- \(y\) plane based on the value of the solution at each point. Of course the solution here is not particularly exciting, but that is a result of both what the Laplace equation represents and the right hand side \(f(\mathbf x)=1\) we have chosen for this program: The Laplace equation describes (among many other uses) the vertical deformation of a membrane subject to an external (also vertical) force. In the current example, the membrane's borders are clamped to a square frame with no vertical variation; a constant force density will therefore intuitively lead to a membrane that simply bulges upward – like the one shown above.
VisIt and Paraview both allow playing with various kinds of visualizations of the solution. Several video lectures show how to use these programs. See also video lecture 11, video lecture 32.
If you want to play around a little bit with this program, here are a few suggestions:
Change the geometry and mesh: In the program, we have generated a square domain and mesh by using the
GridGenerator::hyper_cube function. However, the
GridGenerator has a good number of other functions as well. Try an L-shaped domain, a ring, or other domains you find there.
Change the boundary condition: The code uses the Functions::ZeroFunction function to generate zero boundary conditions. However, you may want to try non-zero constant boundary values using
ConstantFunction<2>(1) instead of
ZeroFunction<2>() to have unit Dirichlet boundary values. More exotic functions are described in the documentation of the Functions namespace, and you may pick one to describe your particular boundary values.
Modify the type of boundary condition: Presently, what happens is that we use Dirichlet boundary values all around, since the default is that all boundary parts have boundary indicator zero, and then we tell the VectorTools::interpolate_boundary_values() function to interpolate boundary values to zero on all boundary components with indicator zero.
We can change this behavior if we assign parts of the boundary different indicators. For example, try this immediately after calling GridGenerator::hyper_cube():
What this does is it first asks the triangulation to return an iterator that points to the first active cell. Of course, this being the coarse mesh for the triangulation of a square, the triangulation has only a single cell at this moment, and it is active. Next, we ask the cell to return an iterator to its first face, and then we ask the face to reset the boundary indicator of that face to 1. What then follows is this: When the mesh is refined, faces of child cells inherit the boundary indicator of their parents, i.e. even on the finest mesh, the faces on one side of the square have boundary indicator 1. Later, when we get to interpolating boundary conditions, the VectorTools::interpolate_boundary_values() call will only produce boundary values for those faces that have zero boundary indicator, and leave those faces alone that have a different boundary indicator. What this then does is to impose Dirichlet boundary conditions on the former, and homogeneous Neumann conditions on the latter (i.e. zero normal derivative of the solution, unless one adds additional terms to the right hand side of the variational equality that deal with potentially non-zero Neumann conditions). You will see this if you run the program.
An alternative way to change the boundary indicator is to label the boundaries based on the Cartesian coordinates of the face centers. For example, we can label all of the cells along the top and bottom boundaries with a boundary indicator 1 by checking to see if the cell centers' y-coordinates are within a tolerance (here 1e-12) of -1 and 1. Try this immediately after calling GridGenerator::hyper_cube(), as before:
Although this code is a bit longer than before, it is useful for complex geometries, as it does not require knowledge of face labels.
A slight variation of the last point would be to set different boundary values as above, but then use a different boundary value function for boundary indicator one. In practice, what you have to do is to add a second call to
interpolate_boundary_values for boundary indicator one:
If you have this call immediately after the first one to this function, then it will interpolate boundary values on faces with boundary indicator 1 to the unit value, and merge these interpolated values with those previously computed for boundary indicator 0. The result will be that we will get discontinuous boundary values, zero on three sides of the square, and one on the fourth.
Observe convergence: We will only discuss computing errors in norms in step-7, but it is easy to check that computations converge already here. For example, we could evaluate the value of the solution in a single point and compare the value for different numbers of global refinement (the number of global refinement steps is set in
LaplaceProblem::make_grid above). To evaluate the solution at a point, say at \((\frac 13, \frac 13)\), we could add the following code to the
LaplaceProblem::output_results function:
For 1 through 9 global refinement steps, we then get the following sequence of point values:
By noticing that the difference between each two consecutive values reduces by about a factor of 4, we can conjecture that the "correct" value may be \(u(\frac 13, \frac 13)\approx 0.241384\). In fact, if we assumed this to be the correct value, we could show that the sequence above indeed shows \({\cal O}(h^2)\) convergence — theoretically, the convergence order should be \({\cal O}(h^2 |\log h|)\) but the symmetry of the domain and the mesh may lead to the better convergence order observed.
A slight variant of this would be to repeat the test with quadratic elements. All you need to do is to set the polynomial degree of the finite element to two in the constructor
LaplaceProblem::LaplaceProblem.
LaplaceProblem::output_results:
HDF5 is a commonly used format that can be read by many scripting languages (e.g. R or Python). It is not difficult to get deal.II to produce some HDF5 files that can then be used in external scripts to postprocess some of the data generated by this program. Here are some ideas on what is possible.
To fully make use of the automation we first need to introduce a private variable for the number of global refinement steps
unsigned int n_refinement_steps , which will be used for the output filename. In
make_grid() we then replace
triangulation.refine_global(5); with
The deal.II library has two different HDF5 bindings, one in the HDF5 namespace (for interfacing to general-purpose data files) and another one in DataOut (specifically for writing files for the visualization of solutions). Although the HDF5 deal.II binding supports both serial and MPI, the HDF5 DataOut binding only supports parallel output. For this reason we need to initialize an MPI communicator with only one processor. This is done by adding the following code.
Next we change the
Step3::output_results() output routine as described in the DataOutBase namespace documentation:
The resulting file can then be visualized just like the VTK file that the original version of the tutorial produces; but, since HDF5 is a more general file format, it can also easily be processed in scripting languages for other purposes.
After outputting the solution, the file can be opened again to include more datasets. This allows us to keep all the necessary information of our experiment in a single result file, which can then be read and processed by some postprocessing script. (Have a look at HDF5::Group::write_dataset() for further information on the possible output options.)
To make this happen, we first include the necessary header into our file :
Adding the following lines to the end of our output routine adds the information about the value of the solution at a particular point, as well as the mean value of the solution, to our HDF5 file :
The data put into HDF5 files above can then be used from scripting languages for further postprocessing. In the following, let us show how this can, in particular, be done with the R programming language, a widely used language in statistical data analysis. (Similar things can also be done in Python, for example.) If you are unfamiliar with R and ggplot2 you could check out the data carpentry course on R here. Furthermore, since most search engines struggle with searches of the form "R + topic", we recommend using the specializes service RSeek instead.
The most prominent difference between R and other languages is that the assignment operator (
a = 5) is typically written as
a <- 5. As the latter is considered standard we will use it in our examples as well. To open the
.h5 file in R you have to install the rhdf5 package, which is a part of the Bioconductor package.
First we will include all necessary packages and have a look at how the data is structured in our file.
This gives the following output
The datasets can be accessed by
h5f$name. The function
dim(h5f$cells) gives us the dimensions of the matrix that is used to store our cells. We can see the following three matrices, as well as the two additional data points we added.
cells: a 4x1024 matrix that stores the (C++) vertex indices for each cell
nodes: a 2x1089 matrix storing the position values (x,y) for our cell vertices
solution: a 1x1089 matrix storing the values of our solution at each vertex
Now we can use this data to generate various plots. Plotting with ggplot2 usually splits into two steps. At first the data needs to be manipulated and added to a
data.frame. After that, a
ggplot object is constructed and manipulated by adding plot elements to it.
nodes and
cells contain all the information we need to plot our grid. The following code wraps all the data into one dataframe for plotting our grid:
With the finished dataframe we have everything we need to plot our grid:
The contents of this file then look as follows (not very exciting, but you get the idea):
We can also visualize the solution itself, and this is going to look more interesting. To make a 2D pseudocolor plot of our solution we will use
geom_raster. This function needs a structured grid, i.e. uniform in x and y directions. Luckily our data at this point is structured in the right way. The following code plots a pseudocolor representation of our surface into a new PDF:
This is now going to look as follows:
For plotting the converge curves we need to re-run the C++ code multiple times with different values for
n_refinement_steps starting from 1. Since every file only contains a single data point we need to loop over them and concatenate the results into a single vector.
As we are not interested in the values themselves but rather in the error compared to a "exact" solution we will assume our highest refinement level to be that solution and omit it from the data.
Now we have all the data available to generate our plots. It is often useful to plot errors on a log-log scale, which is accomplished in the following code:
This results in the following plot that shows how the errors in the mean value and the solution value at the chosen point nicely converge to zero: | https://www.dealii.org/developer/doxygen/deal.II/step_3.html | CC-MAIN-2021-25 | refinedweb | 9,381 | 53.24 |
The Master Pages introduced in ASP.NET 2.0 are a great feature, however, they don't provide a good way to perform the most basic search engine optimization. If you want your web pages to be listed properly and ranked well in search engines, then you need to specify good titles and meta tag descriptions on each page. This article explains how to extend the @Page directive on your ASP.NET pages so that you can specify the meta tag description and meta tag keywords on each content page when using master pages.
When optimizing your web pages for search engines, some of the most important elements on the page are the <title> tag and the description meta tag. The <title> and meta tags are usually specified in the <head> section of the HTML on each page as seen in the example below from the Rhinoback online backup site:
<html xmlns="" > <head> <title> Rhinoback Professional Secure Online Backup Services for Small and Medium Business - SMB </title> <meta name="description" content="Professional Online Backup Services. Rhinoback provides robust backup functionality at affordable prices. Premium features, premium services, low prices. Get the most for your money with Rhinoback!" /> <meta name="keywords" content="backup, online backup, secure backup, cheap backup, free backup, offsite backup,internet backup, secure files, offsite data storage, privacy, security, features, low prices, premium service, remote backup" /> </head> <body> <!-- page content --> </body> </html>
The text from the <title> tag is displayed at the top of most browsers. See the title of the Internet Explorer window in the example below:
The meta description text is displayed by the search engine when your page is listed. The example below is from a Google search. The text below the underlined title comes directly from the meta description tag. Without a meta description tag, your page may be listed with a description that was extracted from text found somewhere on the page. It is always better to specify the text for the description of each page rather than leave it up to a search engine robot.
Master Pages were introduced in ASP.NET 2.0 and have proven to be a valuable feature. This article does not attempt to explain the details of master pages or how to implement them, that is well covered in numerous other articles. When using master pages the <head> section is part of the master page and is automatically included on all of the content pages. Fortunately the developers at Microsoft included the Title attribute on the @Page directive that enables the developer to specify the title of the page on the content pages rather than on the master page.
<%@ Page Language="C#" MasterPageFile="~/PageTags.master" AutoEventWireup="true" CodeFile="home.aspx.cs" Inherits="home" Title="My home page title" %>
The @Page directive above is from a content page in an ASP.NET 2.0 website that uses a master page. As discussed above, you may need to specify some meta tags at the content page level rather than at the master page level. You may have discovered that the @Page directive does have a Description attribute, however, it does not create a meta description tag for your page. In fact, anything that you specify on the Description attribute is completely ignored and not used for anything.
In my case, it was completely unacceptable to have the same description on all pages of my site. I also wanted to specify keywords for each page that may vary from page to page. My first cut at a solution to this problem involved using the code-behind to insert the desired meta tags onto each pages' <head> section as shown in the Page_Load method of a content page below:
protected void Page_Load(object sender, EventArgs e) { HtmlMeta tag = new HtmlMeta(); tag.Name = "description"; tag.Content = "My description for this page"; Header.Controls.Add(tag); }
Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Dim tag As HtmlMeta = New HtmlMeta() tag.Name = "description" tag.Content = "My description for this page" Header.Controls.Add(tag) End Sub
The problem with this solution is that the title of the page, the meta description, and the content of a page are all related and I really want the title and description to be together and also in the same ASPX file as the content. The
Page_Load method can easily be placed within <script> tags on the ASPX page, but I wanted to find a solution that was easier to maintain and would allow for quick and easy inspection of the tags on each page.
The following solution meets my objectives by extending the @Page directive to include the meta tags that I want to specify for each page individually.
I created a base page class that inherits from
System.Web.UI.Page and then I modified my content pages to inherit from my
BasePage class. The
BasePage class contains the code that adds the meta tags to the header controls collection on the page. Since all of my content pages are inheriting from
BasePage, this code only needs to exist in one place and not on every page.
/* ********************************************** * Page directive extender - base page class * * by Jim Azar - * * **********************************************/ using System; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Text.RegularExpressions; /// <SUMMARY> /// Base class with properties for meta tags for content pages /// </SUMMARY> public class BasePage : Page { private string _keywords; private string _description; // Constructor // Add an event handler to Init event for the control // so we can execute code when a server control (page) // that inherits from this base class is initialized. public BasePage() { Init += new EventHandler(BasePage_Init); } // Whenever a page that uses this base class is initialized // add meta keywords and descriptions if available void BasePage_Init(object sender, EventArgs e) { if (!String.IsNullOrEmpty(Meta_Keywords)) { HtmlMeta tag = new HtmlMeta(); tag.Name = "keywords"; tag.Content = Meta_Keywords; Header.Controls.Add(tag); } if (!String.IsNullOrEmpty(Meta_Description)) { HtmlMeta tag = new HtmlMeta(); tag.Name = "description"; tag.Content = Meta_Description; Header.Controls.Add(tag); } } /// <SUMMARY> /// Gets or sets the Meta Keywords tag for the page /// </SUMMARY> public string Meta_Keywords { get { return _keywords; } set { // strip out any excessive white-space, newlines and linefeeds _keywords = Regex.Replace(value, "\\s+", " "); } } /// <SUMMARY> /// Gets or sets the Meta Description tag for the page /// </SUMMARY> public string Meta_Description { get { return _description; } set { // strip out any excessive white-space, newlines and linefeeds _description = Regex.Replace(value, "\\s+", " "); } } }
'* ********************************************** '* Page directive extender - base page class * '* by Jim Azar - * '* **********************************************/ Imports System Imports System.Web.UI Imports System.Web.UI.HtmlControls Imports System.Text.RegularExpressions ' Base class with properties for meta tags for content pages Public Class BasePage Inherits Page Dim _keywords As String Dim _description As String ' Constructor ' Add an event handler to Init event for the control ' so we can execute code when a server control (page) ' that inherits from this base class is initialized. Public Sub New() AddHandler Init, New EventHandler(AddressOf BasePage_Init) End Sub ' Whenever a page that uses this base class is initialized ' add meta keywords and descriptions if available Sub BasePage_Init(ByVal sender As Object, ByVal e As EventArgs) If Not String.IsNullOrEmpty(Meta_Keywords) Then Dim tag As HtmlMeta = New HtmlMeta() tag.Name = "keywords" tag.Content = Meta_Keywords Header.Controls.Add(tag) End If If Not String.IsNullOrEmpty(Meta_Description) Then Dim tag As HtmlMeta = New HtmlMeta() tag.Name = "description" tag.Content = Meta_Description Header.Controls.Add(tag) End If End Sub 'Gets or sets the Meta Keywords tag for the page Public Property Meta_Keywords() As String Get Return _keywords End Get set ' strip out any excessive white-space, newlines and linefeeds _keywords = Regex.Replace(value, "\\s+", " ") End Set End Property ' Gets or sets the Meta Description tag for the page Public Property Meta_Description() As String Get Return _description End Get Set(ByVal value As String) ' strip out any excessive white-space, newlines and linefeeds _description = Regex.Replace(value, "\\s+", " ") End Set End Property End Class
The
Meta_Keywords and
Meta_Description properties are public and can be set when the class (or derived class) is instantiated. When a page that inherits from this class is initialized, the
Base_Init event handler is invoked and adds the meta tags to the page.
On each content page, simply change the inheritance so that they inherit from
BasePage instead of
page or
System.Web.UI.Page. See below:
public partial class home : BasePage { protected void Page_Load(object sender, EventArgs e) { } }
Partial Class home Inherits BasePage Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) End Sub End Class
Now that each content page is inheriting from
BasePage, they have the properties and the code to insert the meta tags. Now we can specify the
Meta_Keywords and
Meta_Description values on the @Page directive on the ASPX file. See the examples below:
<%@ Page <h3>My home page content<h3> <p> This is the content on my home page. This page has an appropriate title tag and also has meta tags for keywords and description that are relative to this page. The title tag is essential to good search engine optimization and the meta description is the text that the search engine will display when your page is listed in search results. The title and meta description should be set specific to each page and should describe the content of the page. </p> </asp:Content>
Note the addition of the
CodeFileBaseClass attribute. This is required so that page can reference the public properties specified in the
BasePage class.
You may have noticed the regular expression in the
BasePage class. This is here so that you can break the description and keyword tags up onto mulitple lines in your ASPX file, making them more readable and maintainable. Consider the following example from the IdeaScope Customer Feedback Management website:
<%@ Page Language="C#" MasterPageFile="~/IdeaScope.master" AutoEventWireup="true" CodeFile="is.aspx.cs" Inherits="_is" CodeFileBaseClass="BasePage" Title="Effective Customer Feedback Management, Improve Customer Commmunication" Meta_Keywords="Customer Feedback, Customer Opinion, feedback, opinion, idea, ideas, idea management, customer feedback management, product management, product manager, product marketing, product marketing manager" Meta_Description="IdeaScope is an on-demand and embedded solution that allows you to capture, prioritize and centrally manage customer feedback. Make your customer feedback process more efficient. Save time and involve more stakeholders without significant cost." %>
Without the regular expression replacement, these tags would contain new lines and excess spaces at the beginning and ending of each line. I suspect that some search engine spiders might get upset about that, and the primary reason for including these tags is to make search engines happy.
There is one last thing that you might want to do to tidy up this solution. The ASP.NET validation in Visual Studio 2005 is not going to recognize Meta_Keywords or Meta_Description. You will get warnings from the compiler saying that these are not valid attributes for the @Page directive. You will also see those red squiggley lines under those attributes in Visual Studio. Your code will compile and run fine. If you are like me and don't want to see any warnings or validation errors, then you will want to add the following lines to Visual Studio's schema for the @Page directive.
<xsd:attribute <xsd:attribute
These nodes should be inserted as child nodes of
<xsd:complexType The schema file is located at the following location if you installed Visual Studio 2005 in the default location:
C:\Program Files\Microsoft Visual Studio 8\Common7\Packages\schemas\html\page_directives.xsd
This article demonstrates how to extend the @Page directive to support meta keywords and meta descriptions. You can easily add other meta tags to the sample code. Complete source files and a sample project are included for C# and VB. Thanks to Scott Guthrie's blog post, Obsure but cool feature in ASP.NET 2.0 for the information that led up to this solution.
Your comments and suggestions to improve this article are welcome.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/aspnet/PageTags.aspx | crawl-002 | refinedweb | 1,977 | 53.71 |
InferBound Pass
The InferBound pass is run after normalize, and before ScheduleOps (see build_module.py). The main job of InferBound is to create the bounds map, which specifies a Range for each IterVar in the program. These bounds are then passed to ScheduleOps, where they are used to set the extents of For loops (see MakeLoopNest) and to set the sizes of allocated buffers (see BuildRealize), among other uses.
The output of InferBound is a map from IterVar to Range:
Map<IterVar, Range> InferBound(const Schedule& sch);
Therefore, let’s review the Range and IterVar classes:
namespace HalideIR { namespace IR { class RangeNode : public Node { public: Expr min; Expr extent; // remainder omitted }; }} namespace tvm { class IterVarNode : public Node { public: Range dom; Var var; // remainder omitted }; }
Note that IterVarNode also contains a Range dom. This dom may or may not have a meaningful value, depending on when the IterVar was created. For example, when tvm.compute is called, an IterVar is created for each axis and reduce axis, with doms equal to the shape supplied in the call to tvm.compute.
On the other hand, when tvm.split is called, IterVars are created for the inner and outer axes, but these IterVars are not given a meaningful dom value.
In any case, the dom member of an IterVar is never modified during InferBound. However, keep in mind that the dom member of an IterVar is sometimes used as the default value for the Ranges InferBound computes.
We next review some TVM codebase concepts that are required to understand the InferBound pass.
Recall that InferBound takes one argument, a Schedule. This schedule object, and its members, contains all information about the program being compiled.
A TVM schedule is composed of Stages. Each stage has exactly one Operation, e.g., a ComputeOp or a TensorComputeOp. Each operation has a list of root_iter_vars, which in the case of ComputeOp, are composed of the axis IterVars and the reduce axis IterVars. Each operation can also contain many other IterVars, but all of them are related by the operations’s list of IterVarRelations. Each IterVarRelation represents either a split, fuse or rebase in the schedule. For example, in the case of split, the IterVarRelation specifies the parent IterVar that was split, and the two children IterVars: inner and outer.
namespace tvm { class ScheduleNode : public Node { public: Array<Operation> outputs; Array<Stage> stages; Map<Operation, Stage> stage_map; // remainder omitted }; class StageNode : public Node { public: Operation op; Operation origin_op; Array<IterVar> all_iter_vars; Array<IterVar> leaf_iter_vars; Array<IterVarRelation> relations; // remainder omitted }; class OperationNode : public Node { public: virtual Array<IterVar> root_iter_vars(); virtual Array<Tensor> InputTensors(); // remainder omitted }; class ComputeOpNode : public OperationNode { public: Array<IterVar> axis; Array<IterVar> reduce_axis; Array<Expr> body; Array<IterVar> root_iter_vars(); // remainder omitted }; }
Tensors haven’t been mentioned yet, but in the context of TVM, a Tensor represents output of an operation.
class TensorNode : public Node { public: // The source operation, can be None // This Tensor is output by this op Operation op; // The output index from the source operation int value_index; };
In the Operation class declaration above, we can see that each operation also has a list of InputTensors. Thus the stages of the schedule form a DAG, where each stage is a node in the graph. There is an edge in the graph from Stage A to Stage B, if the operation of Stage B has an input tensor whose source operation is the op of Stage A. Put simply, there is an edge from A to B, if B consumes a tensor produced by A. See the diagram below. This graph is created at the beginning of InferBound, by a call to CreateReadGraph.
InferBound makes one pass through the graph, visiting each stage exactly once. InferBound starts from the output stages (i.e., the solid blue nodes in the graph above), and moves upwards (in the opposite direction of the edges). This is achieved by performing a reverse topological sort on the nodes of the graph. Therefore, when InferBound visits a stage, each of its consumer stages has already been visited.
The InferBound pass is shown in the following pseudo-code:
Map<IterVar, Range> InferBound(const Schedule& sch) { Array<Operation> outputs = sch->get_outputs(); G = CreateGraph(outputs); stage_list = sch->reverse_topological_sort(G); Map<IterVar, Range> rmap; for (Stage s in stage_list) { InferRootBound(s, &rmap); PassDownDomain(s, &rmap); } return rmap; }
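The traversal order in the pseudo-code above can be sketched in plain Python. This is a standalone illustration with no TVM dependency: the Stage class and stage names here are simplified stand-ins, and the "reverse topological sort" is implemented as a post-order DFS from the output stages whose result is reversed, so that every consumer is visited before any of its producers.

```python
class Stage:
    def __init__(self, name, inputs=()):
        self.name = name            # stands in for the stage's Operation
        self.inputs = list(inputs)  # producer stages (sources of InputTensors)

def infer_bound_order(outputs):
    # Post-order DFS from the outputs appends producers before consumers;
    # reversing the list yields the order InferBound wants: when a stage is
    # visited, all of its consumers have already been visited.
    visited, order = set(), []
    def dfs(stage):
        if stage in visited:
            return
        visited.add(stage)
        for producer in stage.inputs:
            dfs(producer)
        order.append(stage)
    for out in outputs:
        dfs(out)
    order.reverse()
    return order

# A diamond-shaped stage graph: A feeds B and C, which both feed output D.
a = Stage("A")
b = Stage("B", inputs=[a])
c = Stage("C", inputs=[a])
d = Stage("D", inputs=[b, c])

order = infer_bound_order([d])
print([s.name for s in order])   # the output stage D comes first, A last
```

Note how the output stage is visited first and the stage everything depends on is visited last, matching the description of InferBound starting from the solid blue output nodes and moving upwards against the edges.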
The InferBound pass has two interesting properties that are not immediately obvious:
- After InferBound visits a stage, the ranges of all IterVars in the stage will be set in rmap.
- The Range of each IterVar is only set once in rmap, and then never changed.
So it remains to explain what InferBound does when it visits a stage. As can be seen in the pseudo-code above, InferBound calls two functions on each stage: InferRootBound, and PassDownDomain. The purpose of InferRootBound is to set the Range (in rmap) of each root_iter_var of the stage. (Note: InferRootBound does not set the Range of any other IterVar, only those belonging to root_iter_vars.) The purpose of PassDownDomain is to propagate this information to the rest of the stage's IterVars. When PassDownDomain returns, all IterVars of the stage have known Ranges in rmap.
The remainder of the document dives into the details of InferRootBound and PassDownDomain. Since PassDownDomain is simpler to describe, we will cover it first.
IterVar Hyper-graph
The InferBound pass traverses the stage graph, as described above. However, within each stage is another graph, whose nodes are IterVars. InferRootBound and PassDownDomain perform message-passing on these IterVar graphs.
Recall that all IterVars of the stage are related by IterVarRelations. The IterVarRelations of a stage form a directed acyclic hyper-graph, where each node of the graph corresponds to an IterVar, and each hyper-edge corresponds to an IterVarRelation. We can also represent this hyper-graph as a DAG, which is simpler to visualize as shown below.
The above diagram shows the IterVar hyper-graph for one stage. The stage has one root_iter_var, i. It has been split, and the resulting inner axis, i.inner, has been split again. The leaf_iter_vars of the stage are shown in green: i.outer, i.inner.outer, and i.inner.inner.
Message passing functions are named "PassUp" or "PassDown", depending on whether messages are passed from children to their parent in the DAG ("PassUp"), or from the parent to its children ("PassDown"). For example, the large arrow on the left-hand side of the diagram above shows that PassDownDomain sends messages from the root IterVar i to its children i.outer and i.inner.
PassDownDomain
The purpose of PassDownDomain is to take the Ranges produced by InferRootBound for the root_iter_vars, and set the Ranges of all other IterVars in the stage.
PassDownDomain iterates through the stage’s IterVarRelations. There are three possible types of IterVarRelation: split, fuse, and rebase. The most interesting case (since it offers opportunity for improvement), is IterVarRelations representing splits.
The Ranges of the inner and outer IterVars of the split are set based on the parent IterVar’s known Range, as follows:
rmap[split->inner] = Range::make_by_min_extent(0, split->factor) rmap[split->outer] = Range::make_by_min_extent(0, DivCeil(rmap[split->parent]->extent, split->factor))
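To make the split rule concrete, here is a standalone Python sketch (no TVM dependency; Range, Split, and the function name are simplified stand-ins for the C++ classes) that applies PassDownDomain to the hyper-graph from the diagram above: a root i with extent 16 is split by factor 4, and the resulting i.inner is split again by factor 2.

```python
import math

class Range:
    def __init__(self, min_, extent):
        self.min, self.extent = min_, extent
    def __repr__(self):
        return f"Range(min={self.min}, extent={self.extent})"

class Split:
    def __init__(self, parent, outer, inner, factor):
        self.parent, self.outer, self.inner, self.factor = parent, outer, inner, factor

def pass_down_domain(relations, rmap):
    # Relations are listed root-first, so each parent Range is already known.
    for rel in relations:
        parent = rmap[rel.parent]
        rmap[rel.inner] = Range(0, rel.factor)
        rmap[rel.outer] = Range(0, math.ceil(parent.extent / rel.factor))  # DivCeil

# i split by 4 into (i.outer, i.inner); i.inner split by 2 again.
relations = [Split("i", "i.outer", "i.inner", 4),
             Split("i.inner", "i.inner.outer", "i.inner.inner", 2)]
rmap = {"i": Range(0, 16)}   # the root Range, as set by InferRootBound
pass_down_domain(relations, rmap)
print(rmap["i.outer"],        # extent 4
      rmap["i.inner.outer"],  # extent 2
      rmap["i.inner.inner"])  # extent 2
```

After the pass, every IterVar in the hyper-graph — including all three green leaf_iter_vars — has a known Range, which is exactly the postcondition stated earlier.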
There is an opportunity here to tighten the bounds produced by InferBound, when split->factor does not evenly divide the parent's extent. Suppose the parent's extent is 20, and the split factor is 16. Then on the second iteration of the outer loop, the inner loop only needs to perform 4 iterations, not 16. If PassDownDomain could set the extent of split->inner to min(split->factor, rmap[split->parent]->extent - (split->outer * split->factor)), then the extent of the inner variable would properly adapt, based on which iteration of the outer loop is being executed.
For Fuse relations, the Range of the fused IterVar is set based on the known Ranges of the inner and outer IterVars, as follows:
rmap[fuse->fused] = Range::make_by_min_extent(0, rmap[fuse->outer]->extent * rmap[fuse->inner]->extent)
InferRootBound
Recall that InferBound calls InferRootBound, followed by PassDownDomain on each stage in the stage graph. The purpose of InferRootBound is to set the Range of each root_iter_var of the Stage’s operation. These Ranges will be propagated to the rest of the stage’s IterVars using PassDownDomain. Note that InferRootBound does not set the Range of any other IterVar, only those belonging to the stage’s root_iter_vars.
If the stage is an output stage or placeholder, InferRootBound simply sets the root_iter_var Ranges to their default values. The default Range for a root_iter_var is taken from the
dom member of the IterVar (see the IterVarNode class declaration above).
Otherwise, InferRootBound iterates through the consumers of the stage. IntSets are created for each of the consumer’s IterVars, as follows. Phase 1) IntSets are initialized for the consumer’s leaf_iter_vars, and propagated to the consumer’s root_iter_vars by PassUpDomain (Phase 2). These IntSets are used to create TensorDom of the input tensors of the consumer stage (Phase 3). Finally, once all of the consumers have been processed, InferRootBound calls GatherBound, to set the Ranges of the stage’s root_iter_vars, based on the TensorDoms (Phase 4).
This process can seem complicated. One reason is that a stage can have more than one consumer. Each consumer has different requirements, and these must somehow be consolidated. Similarly, the stage may output more than one tensor, and each consumer only uses a particular subset of these tensors. Furthermore, even if a consumer uses a particular tensor, it may not use all elements of the tensor.
As mentioned above, a consumer may only require a small number of elements from each tensor. The consumers can be thought of as making requests to the stage, for certain regions of its output tensors. The job of Phases 1-3 is to establish the regions of each output tensor that are required by each consumer.
IntSets¶
During InferRootBound, Ranges are converted to IntSets, and message passing is performed over IntSets. Therefore, it is important to understand the difference between Ranges and IntSets. The name “IntSet” suggests it can represent an arbitrary set of integers, e.g., A = {-10, 0, 10, 12, 13}. This would certainly be more expressive than a Range, which only represents a set of contiguous integers, e.g., B = {10,11,12}.
However, currently IntSets come in only three varieties: IntervalSets, StrideSets, and ModularSets. IntervalSets, similarly to Ranges, only represent sets of contiguous integers. A StrideSet is defined by a base IntervalSet, a list of strides, and a list of extents. However, StrideSet is unused, and ModularSet is only used by the frontend.
Therefore, not all sets of integers can be represented by an IntSet in TVM currently. For example, set A in the example above can not be represented by an IntSet. However, in future the functionality of IntSet can be extended to handle more general kinds of integer sets, without requiring modification to users of IntSet.
InferBound is more complicated for schedules that contain compute_at. Therefore, we first explain InferBound for schedules that do not contain compute_at..
For simplicity, we assume the schedule does not contain thread axes. In this case, Case 2 is only relevant if the schedule contains compute_at. Please refer to the section InferBound with compute_at, for further explanation.
Phase 2: Propagate IntSets from consumer’s leaves to consumer’s roots¶
/* * Input: Map<IterVar, IntSet> up_state: consumer leaf -> IntSet * Output: Map<IterVar, IntSet> dom_map: consumer root -> IntSet */
The purpose of Phase 2 is to propagate the IntSet information from the consumer’s leaf_iter_vars to the consumer’s root_iter_vars. The result of Phase 2 is another map,
dom_map, that contains an IntSet for each of the consumer’s root_iter_vars..
Case 2 is only needed if the schedule contains compute_at. Please refer to the section InferBound with compute_at below, for further explanation.
After PassUpDomain has finished propagating up_state to all IterVars of the consumer, a fresh map, from root_iter_vars to IntSet, is created. If the schedule does not contain compute_at, the IntSet for root_iter_var
iv is created by the following code:
dom_map[iv->var.get()] = IntSet::range(up_state.at(iv).cover_range(iv->dom));
Note that if the schedule does not contain compute_at, Phases 1-2 are actually unnecessary. dom_map can be built directly from the known Ranges in rmap. Ranges simply need to be converted to IntSets, which involves no loss of information.
Phase 3: Propagate IntSets to consumer’s input tensors¶
/* * Input: Map<IterVar, IntSet> dom_map: consumer root -> IntSet * Output: Map<Tensor, TensorDom> tmap: output tensor -> vector<vector<IntSet> > */
Note that the consumer’s input tensors are output tensors of the stage InferBound is working on. So by establishing information about the consumer’s input tensors, we actually obtain information about the stage’s output tensors too: the consumers require certain regions of these tensors to be computed. This information can then be propagated through the rest of the stage, eventually obtaining Ranges for the stage’s root_iter_vars by the end of Phase 4.
The output of Phase 3 is tmap, which is a map containing all of the stage’s output tensors. Recall that a Tensor is multi-dimensional, with a number of different axes. For each output tensor, and each of that tensor’s axes, tmap contains a list of IntSets. Each IntSet in the list is a request from a different consumer.
Phase 3 is accomplished by calling PropBoundToInputs on the consumer. PropBoundToInputs adds IntSets to tmap’s lists, for all input Tensors of the consumer.
The exact behavior of PropBoundToInputs depends on the type of the consumer’s operation: ComputeOp, TensorComputeOp, PlaceholderOp, ExternOp, etc. Consider the case of TensorComputeOp. A TensorComputeOp already has a Region for each of its Tensor inputs, defining the slice of the tensor that the operation depends on. For each input tensor i, and dimension j, a request is added to tmap, based on the corresponding dimension in the Region:
for (size_t j = 0; j < t.ndim(); ++j) { // i selects the Tensor t tmap[i][j].push_back(EvalSet(region[j], dom_map)); }
Phase 4: Consolidate across all consumers¶
/* * Input: Map<Tensor, TensorDom> tmap: output tensor -> vector<vector<IntSet> > * Output: Map<IterVar, Range> rmap: rmap is populated for all of the stage's root_iter_vars */
Phase 4 is performed by GatherBound, whose behavior depends on the type of operation of the stage. We discuss the ComputeOp case only, but TensorComputeOp is the same.
A ComputeOp has only a single output Tensor, whose axes correspond to the axis variables of the ComputeOp. The root_iter_vars of a ComputeOp include these axis variables, as well as the reduce_axis variables. If the root IterVar is an axis var, it corresponds to one of the axes of the output Tensor. GatherBound sets the Range of such a root IterVar to the union of all IntSets (i.e., union of all consumer requests) for the corresponding axis of the tensor. If the root IterVar is a reduce_axis, its Range is just set to its default (i.e., the
dom member of IterVarNode).
// 'output' selects the output tensor // i is the dimension rmap[axis[i]] = arith::Union(tmap[output][i]).cover_range(axis[i]->dom);
The union of IntSets is computed by converting each IntSet to an Interval, and then taking the minimum of all minimums, and the maximum of all of these interval’s maximums.
This clearly results in some unnecessary computation, i.e., tensor elements will be computed that are never used.
Unfortunately, even if we’re lucky and the IntervalSet unions do not produce unnecessary computation, the fact that GatherBound considers each dimension of the tensor separately can also cause unnecessary computation. For example, in the diagram below the two consumers A and B require disjoint regions of the 2D tensor: consumer A requires T[0:2, 0:2], and consumer B requires T[2:4, 2:4]. GatherBound operates on each dimension of the tensor separately. For the first dimension of the tensor, GatherBound takes the union of intervals 0:2 and 2:4, producing 0:4 (note that no approximation was required here). Similarly for the second dimension of the tensor. Therefore, the dimension-wise union of these two requests is T[0:4, 0:4]. So GatherBound will cause all 16 elements of tensor T to be computed, even though only half of those elements will ever be used.
InferBound with compute_at¶
If the schedule contains compute_at, Phases 1-2 of InferRootBound become more complex.
Motivation¶
Ex. 1
Consider the following snippet of a TVM program:
C = tvm.compute((5, 16), lambda i, j : tvm.const(5, "int32"), name='C') D = tvm.compute((5, 16), lambda i, j : C[i, j]*2, name='D')
This produces the following (simplified IR):
for i 0, 5 for j 0, 16 C[i, j] = 5 for i 0, 5 for j 0, 16 D[i, j] = C[i, j]*2
It’s easy to see that stage D requires all (5,16) elements of C to be computed.
Ex. 2
However, suppose C is computed at axis j of D:
s = tvm.create_schedule(D.op) s[C].compute_at(s[D], D.op.axis[1])
Then only a single element of C is needed at a time:
for i 0, 5 for j 0, 16 C[0] = 5 D[i, j] = C[0]*2
Ex. 3
Similarly, if C is computed at axis i of D, only a vector of 16 elements of C are needed at a time:
for i 0, 5 for j 0, 16 C[j] = 5 for j 0, 16 D[i, j] = C[j]*2
Based on the above examples, it is clear that InferBound should give different answers for stage C depending on where in its consumer D it is “attached”.
Attach Paths¶
If stage C is computed at axis j of stage D, we say that C is attached to axis j of stage D. This is reflected in the Stage object by setting the following three member variables:
class StageNode : public Node { public: // ommitted // For compute_at, attach_type = kScope AttachType attach_type; // For compute_at, this is the axis // passed to compute_at, e.g., D.op.axis[1] IterVar attach_ivar; // The stage passed to compute_at, e.g., D Stage attach_stage; // ommitted };
Consider the above examples again. In order for InferBound to determine how many elements of C must be computed, it is important to know whether the computation of C occurs within the scope of a leaf variable of D, or above that scope. For example, in Ex. 1, the computation of C occurs above the scopes of all of D’s leaf variables. In Ex. 2, the computation of C occurs within the scope of all of D’s leaf variables. In Ex. 3, C occurs within the scope of D’s i, but above the scope of D’s j.
CreateAttachPath is responsible for figuring out which scopes contain a stage C. These scopes are ordered from innermost scope to outermost. Thus for each stage CreateAttachPath produces an “attach path”, which lists the scopes containing the stage, from innermost to outermost scope. In Ex. 1, the attach path of C is empty. In Ex. 2, the attach path of C contains {j, i}. In Ex. 3, the attach path of C is {i}.
The following example clarifies the concept of an attach path, for a more complicated case.
Ex. 4
C = tvm.compute((5, 16), lambda i, j : tvm.const(5, "int32"), name='C') D = tvm.compute((4, 5, 16), lambda di, dj, dk : C[dj, dk]*2, name='D') s = tvm.create_schedule(D.op) s[C].compute_at(s[D], D.op.axis[2])
Here is the IR after ScheduleOps (note that loops with extent 1 have been preserved, using the
debug_keep_trivial_loop argument of ScheduleOps):
// attr [compute(D, 0x2c070b0)] realize_scope = "" realize D([0, 4], [0, 5], [0, 16]) { produce D { for (di, 0, 4) { for (dj, 0, 5) { for (dk, 0, 16) { // attr [compute(C, 0x2c29990)] realize_scope = "" realize C([dj, 1], [dk, 1]) { produce C { for (i, 0, 1) { for (j, 0, 1) { C((i + dj), (j + dk)) =5 } } } D(di, dj, dk) =(C(dj, dk)*2) } } } } } }
In this case, the attach path of C is {dk, dj, di}. Note that C does not use di, but di still appears in C’s attach path.
Ex. 5
Compute_at is commonly applied after splitting, but this can be handled very naturally given the above definitions. In the example below, the attachment point of C is j_inner of D. The attach path of C is {j_inner, j_outer, i}.
C = tvm.compute((5, 16), lambda i, j : tvm.const(5, "int32"), name='C') D = tvm.compute((5, 16), lambda i, j : C[i, j]*2, name='D') s = tvm.create_schedule(D.op) d_o, d_i = s[D].split(D.op.axis[1], factor=8) s[C].compute_at(s[D], d_i)
The IR in this case looks like:
for i 0, 5 for j_outer 0, 2 for j_inner 0, 8 C[0] = 5 D[i, j_outer*8 + j_inner] = C[0]*2
Building an Attach Path¶
We continue to refer to stages C and D, as introduced in the previous section. The CreateAttachPath algorithm builds the attach path of a stage C as follows. If C does not have attach_type
kScope, then C has no attachment, and C’s attach path is empty. Otherwise, C is attached at attach_stage=D. We iterate through D’s leaf variables in top-down order. All leaf variables starting from C.attach_ivar and lower are added to C’s attach path. Then, if D is also attached somewhere, e.g., to stage E, the process is repeated for E’s leaves. Thus CreateAttachPath continues to add variables to C’s attach path until a stage with no attachment is encountered.
In the example below, C is attached at D, and D is attached at E.[C].compute_at(s[D], D.op.axis[1]) s[D].compute_at(s[E], E.op.axis[1])
With
debug_keep_trivial_loop=True, the attach path of C is {dj, di, ej, ei}, and the attach path of D is {ej, ei}:
// attr [D] storage_scope = "global" allocate D[int32 * 1] // attr [C] storage_scope = "global" allocate C[int32 * 1] produce E { for (ei, 0, 5) { for (ej, 0, 16) { produce D { for (di, 0, 1) { for (dj, 0, 1) { produce C { for (ci, 0, 1) { for (cj, 0, 1) { C[(ci + cj)] = 5 } } } D[(di + dj)] = (C[(di + dj)]*2) } } } E[((ei*16) + ej)] = (D[0]*4) } } }
InferBound with compute_at¶
Now that the concept of an attach path has been introduced, we return to how InferBound differs if the schedule contains compute_at. The only difference is in InferRootBound, Phase 1: Initialize IntSets for consumer’s leaf_iter_vars and Phase 2: Propagate IntSets from consumer’s leaves to consumer’s roots.
In InferRootBound, the goal is to determine Ranges for the root_iter_vars of a particular stage, C. Phases 1-2 of InferRootBound assign IntSets to the leaf IterVars of C’s consumers, and then propagate those IntSets up to the consumers’ root_iter_vars.
If there are no attachments, the Ranges already computed for the consumer’s variables define how much of C is needed by the consumer. However, if the stage is actually inside the scope of one of the consumer’s variables j, then only a single point within the Range of j is needed at a time..
Case 2 occurs if we encounter the attachment point of stage C in the consumer. For this attach_ivar, and all higher leaf variables of the consumer, Case 2 will be applied. This ensures that only a single point within the Range of the leaf variable will be requested, if C is inside the leaf variable’s scope.
Phase 2: Propagate IntSets from consumer’s leaves to consumer’s roots¶
/* * Input: Map<IterVar, IntSet> up_state: consumer leaf -> IntSet * Output: Map<IterVar, IntSet> dom_map: consumer root -> IntSet */.
Now, because the schedule contains compute_at, it is possible for Case 2 to apply. This is because the leaf IntSets may now be initialized to a single point within their Range (Case 2 of Phase 1: Initialize IntSets for consumer’s leaf_iter_vars), so the IntSets will no longer always match the Ranges.
After PassUpDomain has finished propagating up_state to all IterVars of the consumer, a fresh map, from root_iter_vars to IntSet, is created. If the stage is not attached to the current consumer, then for each variable iv in the consumer’s attach_path, iv’s Range is added to a
relax_set. The root variables of the stage are evaluated with respect to this
relax_set.
This is to handle cases like the following example, where C is not attached anywhere, but its consumer D is attached in stage E. In this case, D’s attach_path, {ej, ei} must be considered when determining how much of C must be computed.[D].compute_at(s[E], E.op.axis[1])
for ci 0, 5 for cj 0, 16 C[ci, cj] = 5 for ei 0, 5 for ej 0, 16 D[0] = C[ei, ej]*2 E[ei, ej] = D[0]*4
Limitations of PassUpDomain¶
This section describes known limitations of PassUpDomain. These limitations affect the Ranges produced by InferBound, as well as other users of PassUpDomain such as
tensorize.
Ex. 6
Above, we discussed the behavior of PassUpDomain on Split relations only. In the following example, the schedule contains
fuse in addition to
split. In the TVM program below, the operation C has two axes that are fused, and then the fused axis is split. Note that all tensors are originally of shape
(4, 4) and the fused axis is split by factor
4 as well. Therefore, it would be natural to assume that the effect of the fuse is simply undone by the split. However, this is not the case in TVM, as explained below.
import tvm n = 4 m = 4 A = tvm.placeholder((n, m), name='A') B = tvm.compute((n, m), lambda bi, bj: A[bi, bj]+2, name='B') C = tvm.compute((n, m), lambda ci, cj: B[ci, cj]*3, name='C') s = tvm.create_schedule(C.op) fused_axes = s[C].fuse(C.op.axis[0], C.op.axis[1]) xo, xi = s[C].split(fused_axes, 4) s[B].compute_at(s[C], xo) print(tvm.lower(s, [A, C], simple_mode=True))
The output of this program is shown below. Notice that all 16 elements of B are computed every time through the outer loop, even though C only uses 4 of them.
// attr [B] storage_scope = "global" allocate B[float32 * 16] produce C { for (ci.cj.fused.outer, 0, 4) { produce B { for (bi, 0, 4) { for (bj, 0, 4) { B[((bi*4) + bj)] = (A[((bi*4) + bj)] + 2.000000f) } } } for (ci.cj.fused.inner, 0, 4) { C[((ci.cj.fused.outer*4) + ci.cj.fused.inner)] = (B[((ci.cj.fused.outer*4) + ci.cj.fused.inner)]*3.000000f) } } }
This is in contrast to the following IR, which is produced by modifying the above program by deleting the fuse and split, and replacing the compute_at with
s[B].compute_at(s[C], C.op.axis[0]). Note that in the IR below, only 4 elements of B are computed at a time, as desired. The size of buffer B is also smaller.
// attr [B] storage_scope = "global" allocate B[float32 * 4] produce C { for (ci, 0, 4) { produce B { for (bj, 0, 4) { B[bj] = (A[((ci*4) + bj)] + 2.000000f) } } for (cj, 0, 4) { C[((ci*4) + cj)] = (B[cj]*3.000000f) } } }
This example demonstrates that contrary to what we expect, the split does not simply undo the fuse. So what causes the difference? Why is the entire tensor B re-computed 4 times, when only a single row is actually needed at a time?
Determining the amount of B that must be computed is the responsibility of InferBound. However, the Ranges returned by InferBound for B’s root_iter_vars are too large in this case:
[0, 4] for both
bi and
bj. This occurs because of a limitation in PassUpDomain on Fuse relations, which we explain next.
When InferRootBound is working on stage B, it visits B’s consumer stage C to find out how much of B is requested by C. C has root_iter_vars ci and cj, which have been fused and then split. This results in the following IterVar Hyper-graph for stage C.
We trace the execution of InferRootBound on stage B. Recall that Phase 1: Initialize IntSets for consumer’s leaf_iter_vars of InferRootBound involves setting the IntSets for all leaf_iter_vars of B’s consumer stage C. In this case, C’s leaf_iter_vars are
ci.cj.fused.outer and
ci.cj.fused.inner. Since B is attached at
ci.cj.fused.outer,
ci.cj.fused.inner must be relaxed but
ci.cj.fused.outer is a single point. The IntSets of C’s leaf_iter_vars, after Phase 1: Initialize IntSets for consumer’s leaf_iter_vars, are shown in the following table.
In Phase 2: Propagate IntSets from consumer’s leaves to consumer’s roots of InferRootBound, PassUpDomain is called on all of C’s IterVarRelations in bottom-up order.
PassUpDomain is called on C’s Split node first. Case 2 of PassUpDomain applies, because the IntSet of
ci.cj.fused.outer is just a single point, and doesn’t equal its Range (as previously computed by InferBound on stage C). PassUpDomain therefore sets the IntSet of
ci.cj.fused based on the IntSets of
ci.cj.fused.inner and
ci.cj.fused.outer, as shown in row 3 of the following table.
After PassUpDomain is called on the Split node, it is called on the Fuse node.
- Case 1: the Range of IterVar
fused(i.e., as previously calculated by InferBound) is equal to its IntSet
- Case 2: the IntSet of IterVar
fusedis a single point
- Case 3: otherwise
In our case, the Range of
ci.cj.fused, is [0, 16). This is not equal to the IntSet of
ci.cj.fused, which has extent at most 4 (see row 3 of the table above). Therefore Case 1 does not apply. Case 2 doesn’t apply either, since the IntSet of
ci.cj.fused is not a single point. Therefore, only the default Case 3 applies.
Unfortunately in Case 3, PassUpDomain conservatively applies a “fallback inference rule”, i.e., it just returns IntSets equal to the Ranges of
ci and
cj. Since C is the output stage of the schedule, we know that InferBound will have set the Ranges of the root_iter_vars of C (i.e.,
ci and
cj) to their original dimensions (i.e., the
dom value of their IterVars). The resulting output of PassUpDomain for
ci and
cj is shown in the last two rows of the table below.
This is enough to guarantee that consumer C requests all elements of B: the IntSets of
ci and
cj become requests from consumer C to the output tensors of stage B (via PropBoundToInputs in Phase 3: Propagate IntSets to consumer’s input tensors and GatherBound in Phase 4: Consolidate across all consumers).
This example shows that schedules containing a split of fused axes are difficult to handle in TVM. The source of the difficulty is similar to the limitations of GatherBound. The region of tensor B requested by a consumer C must be a single rectangular region of B. Or, if B has more than two dimensions, the region of B must be expressible as an independent Range for each of its axes.
If the split factor is 4, or 8, in the above example, the region of B needed in each iteration of the outer loop is rectangular.
However, if the split factor is changed from 4 to 3 in the example above, it is easy to see that the region of B that C needs can no longer be described by an independent Range for each of its axes.
The best that can be done with rectangular regions is shown in the following diagram. The orange regions are the minimum rectangular regions covering the region of B that needs to be computed, at each iteration of the outer loop.
| https://docs.tvm.ai/dev/inferbound.html | CC-MAIN-2019-26 | refinedweb | 5,345 | 62.78 |
Most programs require human input to work. And to send these instructions to the computer, people use peripheral devices like a keyboard and mouse. In this tutorial, we will learn how to detect keyboard and mouse input using Python on a Raspberry Pi.
Introduction
According to Wikipedia, a computer program is a set of instructions that can be executed by a computer to perform specific tasks. These tasks can be as mundane as printing “Hello World” on a black screen, or as complex as the systems large companies rely on. Furthermore, most complex programs require personal data to work. That is why learning how to read input from peripheral devices is essential if you’re taking the programming route.
Although there are a few ways to achieve this, in this tutorial, we are going to use Python.
Detecting Keyboard Input
A simple Google search reveals modules that support keyboard and mouse input detection on a Linux system. There’s the keyboard module, ncurses, etc. Unfortunately, they all have characteristics that make them hard to use on a Raspberry Pi. For instance, the keyboard module requires root user privileges.
I tried a few of these methods, and for me, the sys and pygame modules are the easiest to use. Let’s talk more about them in the next section.
Using sys.stdin.read to read from standard input
The sys module provides functions that control particular aspects of the Python runtime environment. It comes as a default part of the Python package, so there is no need to install it explicitly.

The main object that we’re going to use from this module is stdin. The Python interpreter uses it to access the standard input stream. If you’re not familiar with the standard input stream, think of it as part of a communication channel between a program and its environment. The whole channel consists of the standard input, standard output, and standard error streams. Stdin accepts text input, while stdout and stderr carry the program’s normal output and error messages, respectively. A Linux-based system creates these three streams automatically whenever you execute a command or program.
Code for Keyboard Input
import tty, sys, termios

filedescriptors = termios.tcgetattr(sys.stdin)
tty.setcbreak(sys.stdin)
x = 0
while 1:
    x = sys.stdin.read(1)[0]
    print("You pressed", x)
    if x == "r":
        print("If condition is met")
        break  # leave the loop so the terminal settings below are restored
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, filedescriptors)
Code Explanation
Firstly, import the sys module along with the tty and termios modules. Both tty and termios are required to make the terminal read input one character at a time.
Let’s tackle things line by line.
We use
termios.tcgetattr(sys.stdin) to retrieve the current settings of the terminal on the stdin stream. It stores them to variable name filedescriptors.
Then, we modify the input stream using
tty.setcbreak(sys.stdin). On a Linux system, command-line input is set to cooked mode as default. Cooked mode lets you enter words in the terminal. It makes the terminal wait for a newline character (/n) before processing the input. This is why anything you type into a terminal won’t run unless you press enter. We need to change it to raw mode to accommodate single-character commands. We do this with the Python function
tty.setcbreak. Simply put, it calls down to the tty driver and tells it to stop buffering input.
Thanks to
tty.setcbreak, every keypress now produces output immediately. Our next problem, however, is deactivating raw mode when the program leaves the main loop. We need
termios.tcsetattr to do this. Remember the filedescriptors variable we made earlier? Since it holds the terminal's original settings, we only need to pass it to termios.tcsetattr to revert to cooked mode.
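A practical refinement worth noting: if the program crashes (or is interrupted with Ctrl+C) before that last line runs, the terminal is left stuck in raw mode. Wrapping the loop in try/finally guarantees the restore always happens. This is a sketch with an assumed helper name (read_keys_until is not part of the original article):

```python
import sys
import termios
import tty

def read_keys_until(stop_char="r"):
    # Save the terminal's cooked-mode settings before switching to raw mode.
    old_settings = termios.tcgetattr(sys.stdin)
    try:
        tty.setcbreak(sys.stdin)
        while True:
            ch = sys.stdin.read(1)
            print("You pressed", ch)
            if ch == stop_char:
                break
    finally:
        # Runs on normal exit, exceptions, and KeyboardInterrupt alike,
        # so the terminal always reverts to cooked mode.
        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
```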
Using the pygame module
Alternatively, the pygame module can be handy. Pygame, as the name suggests, is a Python module explicitly made to create games. It’s a straightforward library that contains functions for drawing graphics, playing sounds, and handling keyboard and mouse input.
The main difference between sys and pygame is that the former uses a CLI (Command-Line Interface) while the latter uses a GUI (Graphical User Interface). This is also the reason why, between the two, only pygame can detect mouse input. A CLI is ideal for a minimal setup like a headless configuration (one that doesn't require a monitor). A headless configuration only uses a keyboard temporarily, when debugging or adding a feature. A mouse is of little use there, since a CLI can at best detect a mouse press, not its motion.
On the other hand, a window is necessary for pygame to work. Without it, you can’t send keyboard and mouse input, or anything else for that matter.
With that, let’s move on to detecting keyboard and mouse input using key, mouse, and get_pressed functions.
Code for Keyboard Input Using KEYDOWN
import pygame

pygame.init()
window = pygame.display.set_mode((300, 300))
pygame.display.set_caption("Pygame Demonstration")

mainloop = True
while mainloop:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            mainloop = False
        if event.type == pygame.KEYDOWN:
            print(pygame.key.name(event.key))
            if event.key == pygame.K_r:
                print('If condition is met')
pygame.quit()
The first three lines besides import are required for all Pygame programs. They initialize, set the dimensions, and label the Pygame window. The main loop waits for an event to happen and checks if it’s a
pygame.KEYDOWN event. KEYDOWN means that a keyboard button is pressed. It's similar to the KEYUP event, which triggers when a keyboard button is released. If it's a KEYDOWN event, the program prints the name of the key and checks whether it's the letter r. If it is, it displays "If condition is met".
pygame.QUIT tells the program to exit when the close button is pressed. If you forget to include
pygame.QUIT, your Pygame window won’t close even if you press X several times.
Code for Keyboard Input Using get_pressed
import pygame

pygame.init()
window = pygame.display.set_mode((300, 300))
pygame.display.set_caption("Pygame Demonstration")

mainloop = True
while mainloop:
    pygame.time.delay(100)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            mainloop = False
    pressed = pygame.key.get_pressed()
    buttons = [pygame.key.name(k) for k, v in enumerate(pressed) if v]
    print(buttons)  # print list to console
    if pressed[pygame.K_r]:
        print("If condition is met")
pygame.quit()
Another way to detect a single key press is by using the get_pressed() function. The main difference between this and the former is that get_pressed() returns the status of every keyboard button every time it is called. This is also the reason why we can't use pygame.key.name on an event to detect the key. Instead, we loop over the array that get_pressed() returns and collect the names of the buttons whose state is True.
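The distinction matters in practice: a KEYDOWN event fires once per physical press (good for menus or typed commands), while get_pressed() reflects what is held down at the instant you call it (good for continuous movement). The sketch below, which posts a synthetic key event so it can run unattended, shows the two mechanisms disagreeing: the event queue contains an "r" press, but get_pressed() reports that no key is physically held. The dummy video driver line is an assumption to let the sketch run without a display; remove it to get a real window.

```python
import os
os.environ["SDL_VIDEODRIVER"] = "dummy"  # assumption: run without a display

import pygame

pygame.init()
window = pygame.display.set_mode((300, 300))

# Queue a synthetic 'r' key press, as if the user had tapped the key.
pygame.event.post(pygame.event.Event(pygame.KEYDOWN, {"key": pygame.K_r}))

presses = []
for event in pygame.event.get():
    if event.type == pygame.KEYDOWN:
        presses.append(pygame.key.name(event.key))  # one entry per press event

held = pygame.key.get_pressed()  # real-time snapshot of the keyboard
print(presses)                   # the queued press shows up here
print(bool(held[pygame.K_r]))    # but no key is physically held right now

pygame.quit()
```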
Code for Mouse Input
import pygame

pygame.init()
window = pygame.display.set_mode((300, 300))
pygame.display.set_caption("Pygame Demonstration")

mainloop = True
while mainloop:
    pygame.time.delay(10)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            mainloop = False
        if event.type == pygame.MOUSEBUTTONDOWN:
            print("Mouse button is pressed")
            x, y = pygame.mouse.get_pos()
            print(x, y)
pygame.quit()
Lastly, to detect mouse input, we use the
pygame.mouse functions. Our sample program displays “Mouse button is pressed” on the terminal when it detects a mouse click from the Pygame window. It also reveals the location of the click in x and y coordinates.
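One detail the example above glosses over: MOUSEBUTTONDOWN events also tell you which button was clicked, via event.button (1, 2, and 3 are left, middle, and right), and carry the click position in event.pos. The sketch below posts a synthetic click so it can run without a real mouse; the dummy video driver line is an assumption for headless use, and can be removed to get a real window.

```python
import os
os.environ["SDL_VIDEODRIVER"] = "dummy"  # assumption: run without a display

import pygame

pygame.init()
window = pygame.display.set_mode((300, 300))

names = {1: "left", 2: "middle", 3: "right"}

# Queue a synthetic left-click at (10, 20), as if the user had clicked.
pygame.event.post(
    pygame.event.Event(pygame.MOUSEBUTTONDOWN, {"button": 1, "pos": (10, 20)})
)

clicks = []
for event in pygame.event.get():
    if event.type == pygame.MOUSEBUTTONDOWN:
        clicks.append((names.get(event.button, "other"), event.pos))

print(clicks)
pygame.quit()
```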
Great post and description of how to get instant keyboard input.
When I ran the code for keyboard input (the first example) with Python 3 on my Raspberry Pi, the terminal did not revert to “cooked” mode. It was a simple fix – the last line needs a space after the comma before “filedescriptors”
termios.tcsetattr(sys.stdin, termios.TCSADRAIN,filedescriptors) | https://www.circuitbasics.com/how-to-detect-keyboard-and-mouse-inputs-on-a-raspberry-pi/?recaptcha-opt-in=true | CC-MAIN-2021-39 | refinedweb | 1,304 | 68.67 |
Package Details: updf-bzr 17-8
Dependencies
- poppler-glib (poppler-glib-lcdfilter, poppler-glib-git, poppler-glib-lcd)
- python2
- python2-cairo
- python2-numpy
- python2-polib
- python2-rsvg
- bzr (bzr-bzr, breezy) (make)
- python2-distutils-extra (make)
Latest Comments
rpodgorny commented on 2019-02-14 20:45
the python2-gobject dep is missing, indeed.
please update or orphan, thank you...
SanskritFritz commented on 2017-02-04 15:11
I have created a python2-rsvg package derived from the original gnome-python-desktop base.
butler360 commented on 2017-01-29 09:53
Looks like python2-rsvg has been deprecated.
dreieck commented on 2016-09-24 12:47
I freshly (re-)installed the package. However, when I want to run it by jus ttyping 'updf', I get the following error:
== %< ==
Traceback (most recent call last):
File "/sbin/updf", line 37, in <module>
from updf import main
ImportError: No module named updf
== >% ==
When I run '/usr/bin/updf' it works.
However, it should also work when just calling 'updf'. Can you fix it? Perhaps by writing a proper wrapper script (one that still works when you pass file names with relative paths as arguments), or a local patch that patches the sources downloaded from upstream?
I have reported to upstream, but they don't seem to care about bugs reported at.
senft commented on 2016-09-03 13:04
I think this should also depend on python2-gobject
ValHue commented on 2016-08-03 10:20
If. | https://aur.tuna.tsinghua.edu.cn/packages/updf-bzr/ | CC-MAIN-2020-40 | refinedweb | 246 | 61.36 |
This lesson will get you started with C# by introducing a few very simple programs.
Here are the objectives of this lesson:
There are basic elements that all C# executable programs have and that's what we'll
concentrate on for this first lesson, starting off with a simple C# program. After
reviewing the code in Listing 1-1, I'll explain
the basic concepts that will follow for all C# programs we will write throughout this tutorial. Please see Listing 1-1
to view this first program.
Warning: C# is case-sensitive.
// Namespace Declaration
using System;
// Program start class
class WelcomeCSS
{
// Main begins program execution.
static void Main()
{
// Write to console
Console.WriteLine("Welcome to the C# Station Tutorial!");
}
}
The program in Listing 1-1 has 4 primary elements, a namespace declaration, a class,
a Main method, and a program statement. It can be compiled with the
following command line:
csc.exe Welcome.cs
This produces a file named Welcome.exe, which can then be executed. Other
programs can be compiled similarly by substituting their file name instead of Welcome.cs.
For more help about command line options, type "csc -help" on the command line.
The file name and the class name can be totally different.
Note for VS.NET Users: The screen will run and close quickly when launching this
program from Visual Studio .NET. To prevent this, add the following code as the
last line in the Main method:
// keep screen from going away
// when run from VS.NET
Console.ReadLine();
Note: The command-line is a window that allows you to run commands and programs
by typing the text in manually. It is often refered to as the DOS prompt, which
was the operating system people used years ago, before Windows. The .NET Framework
SDK, which is free, uses mostly command line tools. Therefore, I wrote this tutorial
so that anyone would be able to use it. Do a search through Windows Explorer for
"csc.exe", which is the C# compiler. When you know its location, add that location
to your Windows path. If you can't figure out how to add something to your path,
get a friend to help you. With all the different versions of Windows available,
I don't have the time in this tutorial, which is about C# language programming,
to show you how to use your operating system. Then open the command window by going
to the Windows Start menu, selecting Run, and typing cmd.exe.
The first thing you should be aware of is that C# is case-sensitive. The word "Main"
is not the same as its lower case spelling, "main". They are different identifiers.
If you are coming from a language that is not case sensitive, this will trip you
up several times until you become accustomed to it.
The namespace declaration, using System;, indicates that you are referencing
the System namespace. Namespaces contain groups of code that can be called
upon by C# programs. With the using System; declaration, you are telling
your program that it can reference the code in the System namespace without
pre-pending the word System to every reference. I'll discuss this in more
detail in Lesson 06: Namespaces, which is dedicated
specifically to namespaces.
The class declaration, class WelcomeCSS, contains the data and
method definitions that your program uses to execute. A class is one of
a few different types of elements your program can use to describe objects, such
as structs, interfaces , delegates, and enums,
which will be discussed in more
detail in
Lesson 12: Structs, Lesson 13: Interfaces,
Lesson 14: Delegates, and Lesson 17: Enums,
respectively. This particular class has no
data, but it does have one method. This method defines the behavior of this class
(or what it is capable of doing). I'll discuss classes more in
Lesson 07: Introduction to Classes. We'll be covering a lot of information
about classes throughout this tutorial.
The one method within the WelcomeCSS class tells what this class
will do when executed. The method name, Main, is reserved for the starting
point of a program. Main is often called the "entry point" and if you ever
receive a compiler error message saying that it can't find the entry point, it means
that you tried to compile an executable program without a Main method.
A static
modifier precedes the word Main, meaning that this method works in this specific class only, rather
than an instance of the class. This is necessary, because when a program
begins, no object instances exist. I'll tell you more about classes, objects, and
instances in Lesson 07: Introduction to Classes.
Every method must have a return type. In this case it is void, which means
that Main does not return a value. Every method also has a parameter list
following its name with zero or more parameters between parenthesis. For simplicity,
we did not add parameters to Main. Later in this lesson you'll see what
type of parameter the Main method can have. You'll learn more about methods
in Lesson 05: Methods.
The Main method specifies its behavior with the Console.WriteLine(...)
statement. Console is a class in the System namespace.
WriteLine(...) is a method in the Console class. We use the ".",
dot, operator to separate subordinate program elements. Note that we could also
write this statement as System.Console.WriteLine(...). This follows the
pattern "namespace.class.method" as a fully qualified statement. Had we left out
the using System declaration at the top of the program, it would have been
mandatory for us to use the fully qualified form System.Console.WriteLine(...).
This statement is what causes the string, "Welcome to the C# Station Tutorial!"
to print on the console screen.
Observe that comments are marked with "//". These are single line comments, meaning
that they are valid until the end-of-line. If you wish to span multiple lines with
a comment, begin with "/*" and end with "*/". Everything in between is part of the
comment. Comments are ignored
when your program compiles. They are there to document what your program
does in plain English (or the native language you speak with every day).
All statements end with a ";", semi-colon. Classes and methods begin with "{", left
curly brace, and end with a "}", right curly brace. Any statements within and including
"{" and "}" define a block. Blocks define scope (or lifetime and visibility) of
program elements.
In the previous example, you simply ran the program and it produced output.
However, many programs are written to accept command-line input. This makes it easier
to write automated scripts that can invoke your program and pass information to
it. If you look at many of the programs, including Windows OS utilities, that you
use everyday; most of them have some type of command-line interface. For example,
if you type Notepad.exe MyFile.txt (assuming the file exists), then the
Notepad
program will open your MyFile.txt file so you can begin editing it. You can make
your programs accept command-line input also, as shown in Listing 1-2, which shows
a program that accepts
a name from the command line and writes it to the console.
Note: When running the NamedWelcome.exe application in Listing 1-2, you
must supply a command-line argument. For example, type the name of the
program, followed by your name: NamedWelcome YourName. This is the purpose of Listing 1-2 - to show
you how to handle command-line input. Therefore, you must provide an argument on
the command-line for the program to work. If you are running Visual
Studio, right-click on the project in Solution Explorer, select Properties,
click the Debug tab, locate Start Options, and type YourName into Command
line arguments. If you forget to to enter YourName on the command-line or
enter it into the project properties, as I just explained, you will receive an
exception that says "Index was outside the bounds of the array." To keep the
program simple and concentrate only on the subject of handling command-line
input, I didn't add exception handling. Besides, I haven't taught you how to add
exception handling to your program yet - but I will. In
Lesson 15: Introduction to Exception Handling, you'll learn more about exceptions and
how to handle them properly.
// Namespace Declaration
using System;
// Program start class
class NamedWelcome
{
// Main begins program execution.
static void Main(string[] args)
{
// Write to console
Console.WriteLine("Hello, {0}!", args[0]);
Console.WriteLine("Welcome to the C#
Station Tutorial!");
}
}
// Namespace Declaration
using System;
// Program start class
class NamedWelcome
{
// Main begins program execution.
static void Main(string[] args)
{
// Write to console
Console.WriteLine("Hello, {0}!", args[0]);
Console.WriteLine("Welcome to the C#
Station Tutorial!");
}
}
In Listing 1-2, you'll notice an entry in the Main method's parameter list.
The parameter name is args, which you'll. Anytime you add string[] args
to the parameter list of the Main method, the C# compiler emits code that
parses command-line arguments and loads the command-line arguments into args. By
reading args, you have access to all arguments, minus the application
name, that were typed on the command-line.
You'll also notice an additional Console.WriteLine(...) statement within
the Main method. The argument list within this statement is different than. Hold that thought, and now we'll
look at the next argument following the end quote.
The args[0] argument refers to the first string in the args
array. The first element of an Array is number 0, the second is number 1, and so
on. For example, if I typed NamedWelcome Joe on the command-line, the value of
args[0] would be "Joe".
This is a little tricky because you know that you typed NamedWelcome.exe on the
command-line, but C# doesn't include the executable application name in the args
list - only the first parameter after the executable application.
Returning to the embedded "{0}" parameter in the formatted string:
Since args[0] is the first argument, after the formatted string, of the
Console.WriteLine() statement, its value will be placed into the first
embedded parameter of the formatted string. When this command is executed, the value
of args[0], which is "Joe" will replace "{0}" in the formatted
string. Upon execution of the command-line with "NamedWelcome Joe", the output will
be as follows:
Hello, Joe! Welcome to the C# Station Tutorial!
Besides command-line input, another way to provide input to a program is via the Console. Typically, it works
like this: You prompt the user for some input, they type something in and press
the Enter key, and you read their input and take some action. Listing
1-3 shows how to obtain interactive input from the user.
In Listing 1-3, the Main method doesn't have any parameters -- mostly because
it isn't necessary this time. Notice also that I prefixed the Main method
declaration with the public keyword. The public keyword means that
any class outside of this one can access that class member. For Main, it
doesn't matter because your code would never call Main, but as you go
through this tutorial, you'll see how you can create classes with members that
must be public so they can be used. The default access is private,
which means that only members inside of the same class can access it. Keywords
such as public and private are referred to as access modifiers.
Lesson 19 discusses access
modifiers in more depth.
There are three statements inside of Main.
The second statement doesn't write anything until its arguments are properly evaluated.
The first argument after the formatted string is Console.ReadLine().
This causes the program to wait for user input at the console. After the user types
input, their name in this case, they must press the Enter key. The return value
from this method replaces the "{0}" parameter of the formatted string and
is written to the console. This line could have also been written like this:
string name = Console.ReadLine();
Console.Write("Hello, {0}! ", name);
The last statement writes to the console as described earlier. Upon execution
of the command-line with "InteractiveWelcome", the output will be as follows:
Now you know the basic structure of a C# program. using statements let
you reference a namespace and allow code to have shorter and more readable notation.
The Main method is the entry point to start a C# program. You can capture
command-line input when an application is run by reading items from a string[]
(string array) parameter to your Main method. Interactive I/O can be performed with
the ReadLine, Write and WriteLine methods of the Console
class.
This is just the beginning, the first of many lessons. I invite you back to take
Lesson 2: Expressions, Types, and Variables.
Your feedback and constructive contributions are welcome. Please feel free
to contact me for feedback or comments you may have about this lesson.
Feedback
I like this site and want to support it! | http://csharp-station.com/Tutorials/Lesson01.aspx | crawl-002 | refinedweb | 2,187 | 65.32 |
A first hand look from the .NET engineering teams
This post was written by Rich Lander, a Program Manager on the .NET Framework team. He’s also the one posting as @DotNet on Twitter.?
My feature request is for proper documentation for the .NET components released since version 2.0.
I switched to .NET because I saw that almost all classes, methods and properties were well documented, often with remarks on usage, and examples.
Now almost no new features have a reasonable level of documentation.
For the start here's an interesting bug in AtatchedProperty behavior in WinRT in Windows 8.1:
social.msdn.microsoft.com/.../is-this-a-bug-in-winrt-propertychangedcallback-is-not-raised-when-changing-attached-property-value
And also another issue - this Url is crashing WebView both in Windows 8 and Windows 8.1, and the Metro IE11 as well. This should not been happening I guess. The problematic Url:
""
Note this page works just fine in Chrome or desktop IE11
I would like to see SIMD support for the JIT compiler and a way to allocate string objects without the need to copy it on the managed heap so I could use an unmanaged buffer to parse the data to become faster at parsing data. The same would be nice for byte arrays. The biggest limitation of .NET (in my opinon) to get high throughput are the memory copy operations.
I'm really looking forward to seeing you guys improve debugging further by supporting lambda expressions (and hopefully integrating Roslyn's C# interactive window to replace the immediate window). This would save me eons of time as I work with a lot of data and not being able to write LINQ queries while debugging really slows things down.
I also hope you work on making the debugger more reliable, as I frequently get "Function evaluation disabled because a previous function evaluation timed out" errors and they inevitably lead to needing to restart the whole debugging session. There has to be a better way to recover than this, that doesn't involve losing your current execution context. The goal you should work towards is that you never have to leave the debugger in order to write an application.
I agree with Jon that documentation really needs to be improved. No matter how many downvotes a page on MSDN gets, or regardless of the inaccuracy or lack of depth of this in the article, I have yet to ever see a single article actually get improved.
Related to this, I'd like to see an HTML documentation generation tool built right into the core Visual Studio product (as it is with Java and Eclipse). You guys abandoned Sandcastle years ago and I've found all the 3rd party tools to be buggy and unnecessarily difficult to use. Encouraging good documentation should start at the IDE level.
On the purely framework side, I'd like to see the immutable collections eventually integrated in the core product. 95% of developers will never know they exist or use them unless they're put in .NET proper. It would also be nice if some of the already-existing parts of the framework were given some attention- more LINQ methods, make MatchCollection implement IEnumerable of T, and please, please deprecate the non-generic collections so developers get warnings and are thus forced to stop using them (including some of your own!)
Finally, I hope overhauling UserVoice is on someone's to-do list. The 10 votes max with 1-3 at a time system is completely ridiculous. CodePlex's one-person one-vote system makes a lot more sense (though it still lacks the ability to withdraw your vote if you change your mind).
My future request - NEVER have a new version of the framework overlay an older version again.
When 4.5 was released, we had to back our code down to 3.5 to the code we produce for our users will be stable because someone made the decision to overlay 4.0. We can't code for workarounds since we can't tell if our code is running under 4.0 or 4.5.
Then, someone made the decision to have 4.5.1 overlay 4.5 and again the bugs are hidden from us and we can't code around anything. My guess is that this was done because of the criticism aimed at MS because of the amount of storage that the OS and other components take up on Surface RT and Pro. If (a big "if") I am correct, then you have managed to make developers lives a living h3ll just to save a few bits on a failed device.
We have already started down the path of building our own dev environment. It is about 80% done. It will allow us to take a single model and output code for .Net, iOS, or Android. If the next version overlays any other version, we will officially be done with .Net. We will still support the current 3.5 based code for our customers, but there will be no forward path.
I love how EF5 seamlessly does what a person used to have work harder at to get the POCOs. The project I am presently doing is so much nicer due to how EF5 works, and how much cleaner it lets my code and my development experience be.
I just hope the team keeps pursuing that kind of result in .NET in general.
While I do appreciate the effort to drum up support for Uservoice requests I do have some gripes with the UserVoice service. Here they are in an effort to help you feel our pain.
1. Clearly there are lots of duplicated requests. It would be grand if you had a team of summer students coalesce them or at least propose such a change to someone more knowledgeable of the technologies involved and approve/deny the combinations.
2. It's pretty painful to spread 10 votes across all of Visual Studio especially given that the IDE warrants votes, the separate languages warrant votes, .NET should be a separate category altogether. How can I weigh "Move WPF to DX11" against "Parallelize the C++ Linker" against "Bring back color icons" etc. I hope you feel the pain there.
3. Speaking of WPF, another issue I have with UserVoice is issue resolution. Take this beauty for example: visualstudio.uservoice.com/.../2216399-create-a-native-wpf-ui-library
A wonderful request but an awful resolution. The whole issue revolves around us wanting a richer way to interact with WPF. Our scientific desktop app which uses C++ and WPF would really rock with this. The resolution of "Completed: Write a Metro style application with C++/CX" is simply unacceptable and shows a disdain and lack of understanding of the whole issue.
So what do I do now with that issue? Open another one with the same request? We'll never be a windows store app and this puts forth an attitude of "Desktop developer? Get lost" from MS (which I'm sure you don't intend but that's certainly the message we're receiving).
John
I agree with @jschroedl - right now most of MS UserVoice pages looks like big unattended playgrounds. Lot of duplicate entries, very few responses or progress. Lot of types of entries in one large UserVoice page. Some management here is clearly required.
Twitter is NOT a medium for proper feedback for a platform the drives a billion dollar industry. UserVoice is great but there's no place for bugs, and no liability for these bug reports or expected follow up. This is what connect is great for, and it's beyond me why you are not using this.
oh and yes your DX support in WPF is absolutely horrible and an atrocity. D9DImage? Really?
Unfortunately, I have not the feeling that the feedback at “Visual Studio and .NET Framework Connect” is considered much. I appreciate new features in .NET but it would be much more important to solve some serious issues which are posted on Connect (especially regarding WPF). Most of these issues are “Closed” because it was not enough time for the release but then they are never considered again for a later release.
Most horrific is the fact that a lot old Connect issues are not online anymore. The only way to see them is via Google Web Cache.
This should not be the way how you handle customer feedback :-(.
- update documentation - not only does MSDN lack proper samples and descriptions, but XML-doc comments in are rather bad
- don't push new features until you've gone back and fixed the old ones.
- improve the performance of commonly used classes/scenarios (string, decimal, reflection, etc)
- make it possible to cast generic function result directly, e.g. not return (T)(object)someClass, but (T)someClass.
@Jaanus -- You said:
"make it possible to cast generic function result directly, e.g. not return (T)(object)someClass, but (T)someClass."
That's what generic constraints are for. If you constraint T to some interface or class, in terms of someClass, then this will work. No?
@Morten -- Agreed. Twitter is not a good solution for all of our feedback. We largely use it for a certain set of things, like publishing our blog post links, and responding to real-time feedback. It works well for that. Oh, we sometimes respond to feedback from @DotMorten there too. ;)
Hello LS,
Thank you for your feedback. In the world of Solid State Drives, hard drive space is no longer free. It's not very practical to install multiple versions of .NET Framework on the machine. We are receiving feedback from consumers and enterprises. Since .NET 4.5 is preinstalled on Windows 8, ISVs are seeing smaller install times for their app since they don't need to install .NET Framework. This improves end user experience with your apps and Windows. Multiple .NET versions on the machine will lead to installation of separate updates for each .NET version present on the machine. Side by side frameworks also lead to higher RAM usage because .NET files cannot be shared across processes. Having said that, we understand that compatibility is highest priority to accrue all these benefits. We have been working with hundreds of companies and we have received very good feedback around .NET compatibility. Large Microsoft products are running successfully on .NET 4.5. There are large, medium and small businesses that have also provided us positive reports on .NET 4.5. I'm sorry if you ran into any issue, I'd like to understand the issue that ran into. Could you please contact me on netfx45compat at Microsoft dot com to discuss? I appreciate your feedback. We want to help you build great apps with great end user experience.
Thank you,
Varun Gupta
.NET Framework
WPF - Please add the ability to define max selected items on a listbox and make selected items bindable | http://blogs.msdn.com/b/dotnet/archive/2013/07/25/advise-the-net-framework-team.aspx?PageIndex=1 | CC-MAIN-2015-48 | refinedweb | 1,833 | 65.22 |
core dump problem
- future_jimmy last edited by
I was running a simple pymesh with 3 LoPy4s.
I had encountered the kernel panic core dump guru meditation problem until I pycom.wifi_on_boot(False). This seemed to stop the devices from crashing.
However, one of the devices crashed and I can't get to bootup properly. Below is the dump from when it crashes
I've tried the following:
flashing the Expansion Board 3.1 with thefirmware expansion31_0.0.11.dfu
flashing the LoPy4 with the licensed version of the firmware LoPy4-1.20.1.r2
flashing the Expansion Board 3.1 with thefirmware expansion31_0.0.11.dfu
flashing the LoPy4 with using the most recent firmware from the PyCom firmware gui
I've also tried wiping the LoPy4 just using os.fsformat("/flash").
After all doing all of those (and some combinations of the above), I still get the dump below.
Further below I've attached the code that was running on the device when it was running. I haven't really changed much from the example online other than the following:
- turning wifi on boot to false
- sending a message to a node in the network
The main.py code was running fairly well, so I'm confused!164 load:0x4009fa00,len:19944 entry 0x400a05e8 MAC ok 3 Settings: {'ble_api': True, 'autostart': True, 'MAC': 3, 'debug': 5, 'LoRa': {'sf': 7, 'region': 5, 'freq': 863000000, 'bandwidth': 2}, 'ble_name_prefix': 'PyGo ', 'Pymesh': {'key': '112233'}} Guru Meditation Error: Core 0 panic'ed (LoadProhibited). Exception was unhandled. Core 0 register dump: PC : 0x40090fa8 PS : 0x00060033 A0 : 0x40092c33 A1 : 0x3ffbb700 A2 : 0x3ffc42bc A3 : 0x7fffffff A4 : 0x00050023 A5 : 0x00000000 A6 : 0x3ffdd8c8 A7 : 0x00000000 A8 : 0x3ffc412c A9 : 0x00000000 A10 : 0x00000001 A11 : 0x3ffb4d38 A12 : 0x00000000 A13 : 0x00000005 A14 : 0x3ffc4030 A15 : 0x3ffc4134 SAR : 0x00000012 EXCCAUSE: 0x0000001c EXCVADDR: 0x0000000c LBEG : 0x400ec0c4 LEND : 0x400ec103 LCOUNT : 0x00000000 Backtrace: 0x40090fa8:0x3ffbb700 0x40092c30:0x3ffbb720 0x40092be6:0x00000000 Guru Meditation Error: Core 0 panic'ed (LoadProhibited). Exception was unhandled. Core 0 register dump: PC : 0x400930fa PS : 0x00060033 A0 : 0x800926a5 A1 : 0x3ffbb3a0 A2 : 0x3ffbb3f0 A3 : 0x3ffbb3c0 A4 : 0x00000020 A5 : 0x3ffc412c A6 : 0x00000002 A7 : 0x3ffbb800 A8 : 0x00000000 A9 : 0x3ffdf7c0 A10 : 0x3ffbb408 A11 : 0x3ffc4134 A12 : 0x3ffdf7c0 A13 : 0x3ffe3130 A14 : 0x00000001 A15 : 0x3ffbb6f0 SAR : 0x0000000a EXCCAUSE: 0x0000001c EXCVADDR: 0x0000000c LBEG : 0x40098758 LEND : 0x40098763 LCOUNT : 0x00000000 Backtrace: 0x400930fa:0x3ffbb3a0 0x400926a2:0x3ffbb3c0 0x40093fd1:0x3ffbb3f0 0x4009431e:0x3ffbb5c0 0x40093c36:0x3ffbb600 0x40093e8a:0x3ffbb620 0x4008395e:0x3ffbb640 0x40090fa5:0x3ffbb700 0x40092c30:0x3ffbb720 0x40092be6:0x00000000 Re-entered core dump! Exception happened during core dump! Rebooting... ets Jun 8 2016 00:22:57
import pycom import time from network import LoRa import time import socket pycom.wifi_on_boot(False) try: from pymesh_config import PymeshConfig except: from _pymesh_config import PymeshConfig try: from pymesh import Pymesh except: from _pymesh import Pymesh def new_message_cb(rcv_ip, rcv_port, rcv_data): ''' callback triggered when a new packet arrived ''' #print('Incoming %d bytes from %s (port %d):' % # (len(rcv_data), rcv_ip, rcv_port)) #print(rcv_data) time.sleep(1) # user code to be inserted, to send packet to the designated Mesh-external interface for _ in range(3): pycom.rgbled(0x888888) time.sleep(.2) pycom.rgbled(0) time.sleep(.1) return pycom.heartbeat(False) # read config file, or set default values pymesh_config = PymeshConfig.read_config() #initialize Pymesh pymesh = Pymesh(pymesh_config, new_message_cb) mac = pymesh.mac() while not pymesh.is_connected(): #print(pymesh.status_str()) time.sleep(1) def new_br_message_cb(rcv_ip, rcv_port, rcv_data, dest_ip, dest_port): ''' callback triggered when a new packet arrived for the current Border Router, having destination an IP which is external from Mesh ''' print('Incoming %d bytes from %s (port %d), to external IPv6 %s (port %d)' % (len(rcv_data), rcv_ip, rcv_port, dest_ip, dest_port)) print(rcv_data) # user code to be inserted, to send packet to the designated Mesh-external interface # ... return while True: pymesh.send_mess(1, "msg from 3") time.sleep(10)
Dear Jim,
I was running a simple pymesh with 3 LoPy4s.
Unless you would be running Pymesh, you might be able to succeed following the same route as @sita within [1].
So, if you don't absolutely depend on Pymesh, you might want to try one of our Dragonfly firmware builds [2]. Some have been successful mitigating spurious core panics with it.
If core panics are still happening, I will be happy to receive respective core dumps from it. If you could share the MicroPython code it would even be better in order to reproduce the problem.
With kind regards,
Andreas.
[1]
[2] | https://forum.pycom.io/topic/5423/core-dump-problem | CC-MAIN-2020-40 | refinedweb | 719 | 54.02 |
Description
Like everyone else, cows like to stand close to their friends when queuing for feed. FJ has N (2 <= N <= 1,000) cows numbered 1..N standing along a straight line waiting for feed. The cows are standing in the same order as they are numbered, and since they can be rather pushy, it is possible that two or more cows can line up at exactly the same location (that is, if we think of each cow as being located at some coordinate on a number line, then it is possible for two or more cows to share the same coordinate).
Some cows like each other and want to be within a certain distance of each other in line. Some really dislike each other and want to be separated by at least a certain distance. A list of ML (1 <= ML <= 10,000) constraints describes which cows like each other and the maximum distance by which they may be separated; a subsequent list of MD constraints (1 <= MD <= 10,000) tells which cows dislike each other and the minimum distance by which they must be separated.
Your job is to compute, if possible, the maximum possible distance between cow 1 and cow N that satisfies the distance constraints.
Input
Line 1: Three space-separated integers: N, ML, and MD.
Lines 2..ML+1: Each line contains three space-separated positive integers: A, B, and D, with 1 <= A < B <= N. Cows A and B must be at most D (1 <= D <= 1,000,000) apart.
Lines ML+2..ML+MD+1: Each line contains three space-separated positive integers: A, B, and D, with 1 <= A < B <= N. Cows A and B must be at least D (1 <= D <= 1,000,000) apart.
Output
Line 1: A single integer. If no line-up is possible, output -1. If cows 1 and N can be arbitrarily far apart, output -2. Otherwise output the greatest possible distance between cows 1 and N.
Sample Input
4 2 1
1 3 10
2 4 20
2 3 3
Sample Output
27
Hint
Explanation of the sample:
There are 4 cows. Cows #1 and #3 must be no more than 10 units apart, cows #2 and #4 must be no more than 20 units apart, and cows #2 and #3 dislike each other and must be no fewer than 3 units apart.
The best layout, in terms of coordinates on a number line, is to put cow #1 at 0, cow #2 at 7, cow #3 at 10, and cow #4 at 27.
Source
USACO 2005 December Gold
题意(从网上扒来的)
n头牛编号为1到n,按照编号的顺序排成一列,每两头牛的之间的距离 >= 0。这些牛的距离存在着一些约束关系:1.有ml。2.有md。问如果这n头无法排成队伍,则输出-1,如果牛[1]和牛[n]的距离可以无限远,则输出-2,否则则输出牛[1]和牛[n]之间的最大距离。
代码
#include <cstdio> #include <cstring> #include <algorithm> using namespace std; const int N = 200010; struct Edge { int v, w, next; } e[N << 1]; int n, ml, md, num = 0, a[N], h[N << 1]; void add(int u, int v, int w) { num ++; e[num].v = v; e[num].w = w; e[num].next = h[u]; h[u] = num; } int head = 0, tail = 1, cnt[N], vis[N], dis[N], queue[N << 3]; int spfa(int s) { memset(cnt, 0, sizeof(cnt)); queue[1] = s; vis[s] = cnt[s] = 1; dis[s] = 0; while(head < tail) { int u = queue[++ head]; vis[u] = false; for(int i = h[u]; i; i = e[i].next) { int v = e[i].v; if(dis[v] > dis[u] + e[i].w) { dis[v] = dis[u] + e[i].w; if(! vis[v]) { if(++ cnt[v] > n) return -1; vis[v] = true; queue[++ tail] = v; } } } } if(dis[n] == 0x3ffffff) return -2; return dis[n]; } int main() { while(~ scanf("%d %d %d", &n, &ml, &md)) { memset(h, 0, sizeof(h)); for(int i = 1; i <= n; i ++) dis[i] = 0x3ffffff, add(i, i - 1, 0); while(ml --) { int a, b, w; scanf("%d %d %d", &a, &b, &w); add(a, b, w); } while(md --) { int a, b, w; scanf("%d %d %d", &a, &b, &w); add(b, a, -w); } printf("%d\n", spfa(1)); } return 0; } | https://blog.csdn.net/u010379542/article/details/78220388 | CC-MAIN-2018-26 | refinedweb | 688 | 76.35 |
Scala/Higher-order functions 1
Higher-order functions are functions that either take a function as an argument or return a function. By operating on functions instead of simpler values, higher-order functions become very flexible. A simple example is testing whether at least one element in a list passes some test. By using an existing higher-order function defined for List, we don't have to write the code that tests each element, but only have to write a function that contains the test itself. For instance, assume that we want to test whether some list contains the number 4:
def isEqualToFour(a:Int) = a == 4

val list = List(1, 2, 3, 4)
val resultExists4 = list.exists(isEqualToFour)
println(resultExists4) //Prints "true".
In the above example, we first define our function that contains the test (equality to 4). We then define our list that happens to contain the number 4 (meaning that the final result should be true). In the third line, we call the method "exists", which takes our function containing the test, applies it to the elements of the list, and returns whether the function was true for at least one of the elements. Since the list indeed contains 4, the final result is true, which is also what is printed.
If we instead wanted to test whether all the numbers in the list are equal to 4 (which is clearly false), we would use the "forall" method instead. "forall" tests whether the given function is true for every single element in the list.
val resultForall4 = list.forall(isEqualToFour)
println(resultForall4) //Prints "false".
As expected, the result is false. Note that we didn't have to redefine the function containing the test. By separating the testing into a test function ("isEqualToFour"), and the logic that applies the test function into higher-order functions ("exists" and "forall"), we avoid a considerable amount of duplication.
Another common higher-order function is that of "map". Let's say that you have a list of numbers and want to change each number in the list independently, for instance multiplying each number by some constant. That is exactly what "map" does: it takes a transformation function and applies it to each element independently to create a new list. Let's see it in action:
def multiplyBy42(a:Int) = 42*a
val resultMultiplyBy42 = list.map(multiplyBy42)
println(resultMultiplyBy42) //Prints "List(42, 84, 126, 168)".
By using "map" we avoid having to apply the function to each element and to construct the new list ourselves.
There are plenty of other higher-order functions defined for not just List, but most of the other collections, as well as for other classes in the Scala library. Some of the note-worthy functions include "reduce" and "foldLeft"/"foldRight".
"reduce" takes a function that takes two elements and combine them somehow into a new element of the same type, and keeps doing that until there is only one, resulting element. Examples of uses of "reduce" includes cases such as when you want to find the sum or the total product of some numbers, or want to combine a lot of strings into one string, maybe by putting something like "\n", "," or ";" between subsequent strings.
"foldLeft"/"foldRight" are basically sequential transformations. While "map" takes each element and transforms it independently, the folds goes through the collection sequentially, taking each element and the previous result, and transforming it into a new result (such as a new list or a sum). The folds are more difficult to use than "map" and "reduce", but are more flexible, and can in fact be used to define both "map" and "reduce" themselves. The "Left" and "Right" refers to which direction the fold goes through the elements. | http://en.wikibooks.org/wiki/Scala/Higher-order_functions_1 | CC-MAIN-2014-52 | refinedweb | 618 | 60.75 |
Hi Philippe,
Philippe GIL wrote:
> A new SVG Editor using batik is now available. It has an LGPL license.
> It is available for download (runtime or source) at
>
Cool. If you would like to submit a patch for the xdocs/index.xml
to reference your project that would be great.
> ITRIS () a small french company had developped for
> one of its products an SVG Editor based on the top class Batik toolkit.
> (thanks to the batik team for their great job).
I was playing with it but I couldn't figure out how to create
a curved cubic curve (I could select the cubic tool but I couldn't
"drag out" the control points).
> This editor (using a proprietary plugin implementing an additionnal
> namespace) is used to edit graphics that are animated by real time data
> and user actions.
> It's part of a SCADA/HMI application connected to Programmable Logic
> Controllers in industrial processes. A run-time application, (also based
> on batik) displays the animated SVG.
---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
For additional commands, e-mail: batik-users-help@xml.apache.org | http://mail-archives.us.apache.org/mod_mbox/xmlgraphics-batik-users/200412.mbox/%3C41B2FF18.1010200@Kodak.com%3E | CC-MAIN-2020-29 | refinedweb | 188 | 66.74 |
This method returns the canonical representation of the string object. Interned strings have an entry in the global string pool. Initially the pool is empty, and the strings are maintained privately by the class String. When the intern method is invoked, if the pool already contains a string equal to this String object, then the string from the pool is returned. Otherwise, this String object is added to the pool and a reference to this String object is returned.
Example Program
public class Index3 {
    public static void main(String[] args) {
        String s1 = "welcome to java world";
        String s2 = "Canonical Representation";
        String s3 = s1.intern();
        String s4 = s2.intern();
        System.out.println("Intern value of s3:" + s3);
        System.out.println("Intern value of s4:" + s4);
        if (s1 == s2) {
            System.out.println("Both Strings (s1 & s2) are equal");
        } else {
            System.out.println("Both Strings (s1 & s2) are not equal");
        }
        if (s1 == s3) {
            System.out.println("Both Strings (s1 & s3) are equal");
        } else {
            System.out.println("Both Strings (s1 & s3) are not equal");
        }
        if (s2 == s4) {
            System.out.println("Both Strings (s2 & s4) are equal");
        } else {
            System.out.println("Both Strings (s2 & s4) are not equal");
        }
    }
}
Output
Intern value of s3:welcome to java world
Intern value of s4:Canonical Representation
Both Strings (s1 & s2) are not equal
Both Strings (s1 & s3)are equal
Both Strings (s2 & s4)are equal
Description:
In this program, the strings s1 and s2 are assigned literal values, which the compiler interns automatically. Calling intern() on them (producing s3 and s4) therefore returns the same pooled objects, so the comparisons s1 == s3 and s2 == s4 are true. The comparison s1 == s2 is false because the two literals have different contents and are distinct objects in the pool.
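A second, smaller example may make the pooling behaviour clearer: string literals are interned automatically, while new String(...) always creates a fresh object, so intern() is what gets you back to the pooled copy. (The class name here is just for illustration.)

```java
public class InternDemo {
    public static void main(String[] args) {
        String literal = "pooled";              // compile-time literal: already in the pool
        String copy = new String("pooled");     // same characters, distinct heap object
        System.out.println(literal == copy);            // false: different objects
        System.out.println(literal == copy.intern());   // true: intern() returns the pooled copy
    }
}
```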
import org.apache.log4j.Appender;
import org.apache.log4j.Logger;

/**
 * Appenders may delegate their error handling to
 * <code>ErrorHandlers</code>.
 * <p>
 * Error handling is particularly tedious to get right because by
 * definition errors are hard to predict and to reproduce.
 * </p>
 * <p>
 * Please take the time to contact the author in case you discover
 * that errors are not properly handled. You are most welcome to
 * suggest new error handling policies or criticize existing policies.
 * </p>
 */
public interface ErrorHandler {

    /**
     * Add a reference to a logger to which the failing appender might
     * be attached to. The failing appender will be searched and
     * replaced only in the loggers you add through this method.
     *
     * @param logger One of the loggers that will be searched for the failing
     *               appender in view of replacement.
     * @since 1.2
     */
    void setLogger(Logger logger);

    /**
     * Equivalent to the {@link #error(String, Exception, int,
     * LoggingEvent)} with the event parameter set to
     * <code>null</code>.
     *
     * @param message   The message associated with the error.
     * @param e         The Exception that was thrown when the error occurred.
     * @param errorCode The error code associated with the error.
     */
    void error(String message, Exception e, int errorCode);

    /**
     * This method is normally used to just print the error message
     * passed as a parameter.
     *
     * @param message The message associated with the error.
     */
    void error(String message);

    /**
     * This method is invoked to handle the error.
     *
     * @param message   The message associated with the error.
     * @param e         The Exception that was thrown when the error occurred.
     * @param errorCode The error code associated with the error.
     * @param event     The logging event that the failing appender is asked
     *                  to log.
     * @since 1.2
     */
    void error(String message, Exception e, int errorCode, LoggingEvent event);

    /**
     * Set the appender for which errors are handled. This method is
     * usually called when the error handler is configured.
     *
     * @param appender The appender
     * @since 1.2
     */
    void setAppender(Appender appender);

    /**
     * Set the appender to fallback upon in case of failure.
     *
     * @param appender The backup appender
     * @since 1.2
     */
    void setBackupAppender(Appender appender);
}
direct calculation solution in Clear category for Square Spiral by Sim0000
from math import sqrt
# calculate the coordinate of n
def coord(n):
if n == 1: return (0, 0)
r = int(sqrt(n - 1) - 1) // 2 + 1
g, d = divmod(n - (2*r-1)**2 - 1, 2*r)
return [(-r+d+1, r), (r, r-d-1), (r-d-1, -r), (-r, -r+d+1)][g]
def find_distance(first, second):
x1, y1 = coord(first)
x2, y2 = coord(second)
return abs(x2 - x1) + abs(y2 - y1)
# First, we determine the ring which includes n.
# ring 0 : 1
# ring 1 : 2,3,...,9
# ring 2 : 10,11,...,25
# ring r : (2*r-1)**2+1,...,(2*r+1)**2
# Using the following formula, we can calculate r from n.
# r = int((sqrt(n - 1) - 1) / 2) + 1
# Ring r has 8*r elements and its start position is (-r+1, r).
# Other interesting positions are as follows.
# (-r, r) : left upper corner, n = (2*r-1)**2 + 8*r = (2*r+1)**2
# ( r, r) : right upper corner, n = (2*r-1)**2 + 2*r
# ( r, -r) : right lower corner, n = (2*r-1)**2 + 4*r
# (-r, -r) : left lower corner, n = (2*r-1)**2 + 6*r
#
# Second, I divide the ring into 4 groups corresponding to the direction.
# Each group's size is 2*r. Group 0 is the first 2*r elements of the ring
# and its direction is right, and so on.
# group 0 (dir = R) : n is from (2*r-1)**2+1 to (2*r-1)**2+2*r
# group 1 (dir = D) : n is from (2*r-1)**2+2*r+1 to (2*r-1)**2+4*r
# group 2 (dir = L) : n is from (2*r-1)**2+4*r+1 to (2*r-1)**2+6*r
# group 3 (dir = U) : n is from (2*r-1)**2+6*r+1 to (2*r-1)**2+8*r
# Using the following formula, we can calculate the group number g from n and r.
# g = int((n - (2*r-1)**2 - 1) / (2*r))
#
# Finally, using the above information, we calculate the coordinate of n.
# When n belongs to group 0 of ring r, the coordinate of n is
# (-r+d+1, r), where d means n is the d-th element (counting from 0) of the group.
# Similarly for the other groups:
# group 0 : (-r+d+1, r)
# group 1 : (r, r-d-1)
# group 2 : (r-d-1, -r)
# group 3 : (-r, -r+d+1)
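A few spot checks of the solution against a hand-drawn spiral (repeating the two definitions so the snippet runs on its own):

```python
from math import sqrt

def coord(n):
    # coordinate of n on the square spiral, (0, 0) at the centre
    if n == 1: return (0, 0)
    r = int(sqrt(n - 1) - 1) // 2 + 1
    g, d = divmod(n - (2*r-1)**2 - 1, 2*r)
    return [(-r+d+1, r), (r, r-d-1), (r-d-1, -r), (-r, -r+d+1)][g]

def find_distance(first, second):
    x1, y1 = coord(first)
    x2, y2 = coord(second)
    return abs(x2 - x1) + abs(y2 - y1)

print(coord(1))             # (0, 0): the centre of the spiral
print(coord(9))             # (-1, 1): ring 1 ends at its upper-left corner
print(find_distance(1, 9))  # 2
```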
May 28, 2014
Hi, I'm using Flash CS5 and actionscript 3 and going through a "bouncing ball" tutorial from a book that I bought.
The ball is supposed to bounce around a window indefinitely.
The example works but instead of the ball keeping within the boundaries of the window it floats off the screen never to return.
Can someone please have a look at the code below and tell me what I can add to make this work as described?
Thank you very much.
package {
//Import necessary classes
import flash.display.MovieClip;
import flash.events.Event;
import flash.geom.ColorTransform;
import flash.geom.Rectangle;
public class Ball extends MovieClip {
//Horizontal speed and direction
public var speedX:int = 10;
//Vertical speed and direction
public var speedY:int = -10;
//Constructor
public function Ball() {
addEventListener(Event.ENTER_FRAME, onEnterFrame);
//Colors the ball a random color
var colorTransform:ColorTransform = new ColorTransform();
colorTransform.color = Math.random()*0xffffff;
transform.colorTransform = colorTransform;
}
//called every frame
private function onEnterFrame(event:Event):void{
//Move ball by appropriate amount
x += speedX;
y += speedY;
//Get boundary
var bounds:Rectangle = getBounds(parent);
//Reverse horizontal direction if the ball hits the sides
//of stage.
if (bounds.left < 0 || bounds.right > stage.stageWidth) {
speedX *= -1;
}
}
}
}
It looks like you do not have anything to keep the ball within the boundaries for the y side of the story, only the x. You need to have something for speedY just like you do for speedX where you check the bounds.top and bounds.bottom...
if (bounds.top < 0 || bounds.bottom > stage.stageHeight) {
speedY *= -1;
} | https://forums.adobe.com/message/4348999 | CC-MAIN-2018-13 | refinedweb | 247 | 59.3 |
Here is the problem:
a function that takes two strings and removes from the first string any character that appears in the second string. eg. if the first string is “IamLearningPython” and the second string is “aeiou” the result is “mLrnngPythn”.
I've written the code for this, but I cannot figure out the error.
Can anyone tell me what the error in this code is:
def attenuate(sequence1, sequence2):
    newSeq = []
    n = len(sequence2)
    for m in range(len(sequence1)):
        print(m)
        while(n in range(len(sequence2)) and sequence1[m] != sequence2[n]):
            '''checks for the condn of matched string
            if no match found then append the string to the list newSeq '''
            if n == len(sequence2):
                entry = sequence1[m]
                newSeq.append(entry)
            n =+1
    return newSeq

def main():
    return 0

if __name__ == '__main__':
    inp1 = str(input('please give your input: '))
    '''IamLearningPython'''
    seq1 = list(inp1)
    inp2 = str(input('please give next input: '))
    '''aeiou'''
    seq2 = list(inp2)
    newSeq1 = attenuate(seq1, seq2)
    print(newSeq1)
    main()
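For comparison, here is one possible fix (not the poster's code, written from scratch): build the result directly instead of juggling indices. Note that the expected output in the problem statement ("mLrnngPythn") also drops the capital "I", which suggests the comparison is meant to be case-insensitive, so this version lowercases characters before testing membership:

```python
def attenuate(first, second):
    # Characters to drop, compared case-insensitively.
    remove = set(second.lower())
    return ''.join(ch for ch in first if ch.lower() not in remove)

print(attenuate("IamLearningPython", "aeiou"))  # mLrnngPythn
```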
Mock objects are useful in unit testing as stand ins for other objects or functions. You might use a mock object instead of the real thing when: the real thing is expensive to create, the real thing requires online resources that might be offline, or you just want to do really fine grained testing. With mock objects you can easily control what they do and then test whether they were used as intended.
There are a number of Python mock libraries, but the one discussed here is mock.
import mock
mock_func = mock.Mock()
mock_func.return_value = 42
print mock_func(6, 9)
42
Mock objects remember how they have been called and you can test that they were called correctly.
mock_func.assert_called_with(6, 9)
If the calling sequences don't match you get an assertion error.
mock_func.assert_called_with(6, 7)
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-4-c5e431097d00> in <module>()
----> 1 mock_func.assert_called_with(6, 7)

AssertionError: Expected call: mock(6, 7)
Actual call: mock(6, 9)
assert_called_with applies only to the most recent call.
mock_class = mock.NonCallableMock()
mock_class.some_method.return_value = 42
print mock_class.some_method(6, 9)
42
mock_class.some_method.assert_called_once_with(6, 9)
mock_func_w_side_effect = mock.Mock()
mock_func_w_side_effect.side_effect = ValueError('Wrong!')
mock_func_w_side_effect()
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-7-c8ad7335dff3> in <module>()
      1 mock_func_w_side_effect = mock.Mock()
      2 mock_func_w_side_effect.side_effect = ValueError('Wrong!')
----> 3 mock_func_w_side_effect()

/Users/mrdavis/py-lib/mock-1.0b1-py2.7.egg/mock.pyc in __call__(_mock_self, *args, **kwargs)
    942         # in the signature
    943         _mock_self._mock_check_sig(*args, **kwargs)
--> 944         return _mock_self._mock_call(*args, **kwargs)
    945
    946

/Users/mrdavis/py-lib/mock-1.0b1-py2.7.egg/mock.pyc in _mock_call(_mock_self, *args, **kwargs)
    997         if effect is not None:
    998             if _is_exception(effect):
--> 999                 raise effect
   1000
   1001         if not _callable(effect):

ValueError: Wrong!
Another side effect is a function that actually does something, but I couldn't think of many uses for this.
mock_func_w_side_effect.side_effect = lambda x, y: x + y
mock_func_w_side_effect('spam', 'SPAM')
'spamSPAM'
Creating mock objects directly as in the above examples can be useful for constructing objects passed to code under test, but you may also want to replace functions and objects used by the code under test. Since you don't have direct access to these you can use mock's patch utility, which comes in several flavors.

As an example I'll create a toy function to test. It simply calls json.dumps.
import json

def func_with_json(d):
    return json.dumps(d)
d = {'a': 1, 'b': [2, 3]} # a simple input for func_with_json
mock.patch can be used as a context manager. Here it replaces the function json.dumps. At the end of the code block within the context manager json.dumps goes back to its normal state.
with mock.patch('json.dumps') as mock_dumps:
    mock_dumps.return_value = 'JSON'
    r = func_with_json(d)
    assert r == 'JSON'
    mock_dumps.assert_called_once_with(d)
Outside the context block json.dumps works as normal:
print json.dumps(d)
{"a": 1, "b": [2, 3]}
mock.patch can also be used as a function or class decorator, replacing an object inside the function or class.
Here we use mock.patch to replace json.dumps within a test function. The mock object replacing json.dumps is passed to the test function as an argument.
@mock.patch('json.dumps')
def test_func_with_json(mock_dumps):
    mock_dumps.return_value = 'JSON'
    r = func_with_json({'c': {'d': [4]}})
    assert r == 'JSON'
    mock_dumps.assert_called_once_with(d) # whoops, we didn't pass in d, this should fail.

test_func_with_json()
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-13-0c4563c6a92d> in <module>()
      5     assert r == 'JSON'
      6     mock_dumps.assert_called_once_with(d) # whoops, we didn't pass in d, this should fail.
----> 7 test_func_with_json()

/Users/mrdavis/py-lib/mock-1.0b1-py2.7.egg/mock.pyc in patched(*args, **keywargs)
   1188
   1189                 args += tuple(extra_args)
-> 1190                 return func(*args, **keywargs)
   1191             except:
   1192                 if (patching not in entered_patchers and

<ipython-input-13-0c4563c6a92d> in test_func_with_json(mock_dumps)
      4     r = func_with_json({'c': {'d': [4]}})
      5     assert r == 'JSON'
----> 6     mock_dumps.assert_called_once_with(d) # whoops, we didn't pass in d, this should fail.
      7

/Users/mrdavis/py-lib/mock-1.0b1-py2.7.egg/mock.pyc in assert_called_once_with(_mock_self, *args, **kwargs)
    833                    self.call_count)
    834             raise AssertionError(msg)
--> 835         return self.assert_called_with(*args, **kwargs)
    836
    837

AssertionError: Expected call: dumps({'a': 1, 'b': [2, 3]})
Actual call: dumps({'c': {'d': [4]}})
There are a number of different kinds of patches and different ways to use them. For more information refer to the mock documentation. | https://nbviewer.ipython.org/gist/jiffyclub/3701929 | CC-MAIN-2022-27 | refinedweb | 742 | 51.95 |
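One variation worth knowing about: the standalone mock library shown here was later absorbed into the standard library as unittest.mock (Python 3.3+), which also offers autospec=True so that the replacement enforces the real function's signature. A small sketch:

```python
import json
from unittest import mock  # stdlib successor of the standalone mock library

def func_with_json(d):
    return json.dumps(d)

# autospec=True builds the mock from json.dumps' real signature, so calling it
# with arguments json.dumps would reject fails loudly instead of passing.
with mock.patch('json.dumps', autospec=True) as mock_dumps:
    mock_dumps.return_value = 'JSON'
    result = func_with_json({'a': 1})
    mock_dumps.assert_called_once_with({'a': 1})

print(result)                 # JSON
print(json.dumps({'a': 1}))   # back to normal outside the block
```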
Opened 8 years ago
Closed 18 months ago
#999 closed bug (fixed)
Misattributed parse error in do block
Description
In the following program, there is clearly an extra closing parenthesis. However, the error message doesn't point to the actual problem:
main = do x <- 1)
          return ()

-- test.hs:1:10:
--     The last statement in a 'do' construct must be an expression
The reported error is at the beginning of the statement "x <- 1", not at the closing parenthesis. When this first happened to me, I spent a lot of time puzzling over the beginning of the statement and the indentation of various lines before I discovered the real problem.
In this example, the error could be attributed to that unmatched parenthesis. Since 'do' blocks can legitimately be enclosed in parentheses, it might be more usable if the error message could identify the locations of both the statement and the end of the 'do' block -- this would point out both potential problem locations.
Change History (8)
comment:1 Changed 8 years ago by simonmar
- Difficulty changed from Unknown to Moderate (1 day)
- Milestone set to _|_
comment:2 Changed 8 years ago by simonmar
- Priority changed from normal to low
comment:3 Changed 6 years ago by simonmar
comment:4 Changed 6 years ago by simonmar
- Architecture changed from Unknown to Unknown/Multiple
comment:5 Changed 6 years ago by simonmar
- Operating System changed from Unknown to Unknown/Multiple
comment:6 Changed 5 years ago by simonmar
comment:7 Changed 5 years ago by simonmar
- Difficulty changed from Moderate (1 day) to Moderate (less than a day)
comment:8 Changed 18 months ago by morabbin
- Resolution set to fixed
- Status changed from new to closed
- Type of failure set to None/Unknown
Fixed in 7.6.1:
Orac:~/work/ghc $ cat > test.hs
main = do x <- 1)
          return ()
Orac:~/work/ghc $ ghci test.hs
GHCi, version 7.6.1:  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
[1 of 1] Compiling Main             ( test.hs, interpreted )

test.hs:1:17: parse error on input `)'
Failed, modules loaded: none.
Prelude>
This happens because the closing bracket ends the layout for the do-block, via Haskell's parse-error layout convention. This causes the do-block to be checked, before getting back to parsing the rest of the file.
If you use -ferror-spans, then you get the position of the start and end of the statement.
To fix this we would probably have to postpone checking the syntax of a do-expression until the renamer. This isn't easy, because the HsDo constructor already has separate fields for statements and "body" (although I'm not sure why).
This bug is very closely related to #984. I'll leave it low prio for now. | https://ghc.haskell.org/trac/ghc/ticket/999 | CC-MAIN-2014-23 | refinedweb | 472 | 60.95 |
First I like to thank everyone that helped me with my work...
Please bear with me I don't know how to explain my problem.
Trying to get the random number to remain the same throughout the length of the program. When the player guesses the right number, the player clicks on "Y" and the number should change. Can't get the number to change inside the loop. Outside the loop it changes every time a guess is made. I had a couple of while loops under the initial one but I opted for the if and elif.
import random

def main():
    answer = 'y'
    while answer == 'y' or answer == 'Y':
        number = random.randint(1, 1000)
        print 'I have a number between 1 and 1000'
        print 'Can you guess the right number'
        print number
        guess = input('Enter your number')
        if number > guess:
            print "Too low. Guess again"
            guess
        elif number < guess:
            print "Too high. Guess again"
            guess
        elif number == guess:
            print 'You won!!!'
            answer = raw_input("Do you want to play again? (Enter y for yes): ")

main()
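A sketch of the structure the poster seems to be after (this is not the original code; ask is injected in place of input/raw_input so the loop logic can run without a console): the number is drawn once per round in the outer loop, and an inner loop keeps asking for guesses without touching it.

```python
import random

def play(ask, rng=random):
    """ask(prompt) returns the next user entry; returns the number of rounds played."""
    rounds = 0
    answer = 'y'
    while answer.lower() == 'y':
        number = rng.randint(1, 1000)   # drawn once, stays fixed for the round
        while True:                     # inner loop: same number for every guess
            guess = int(ask('Enter your number: '))
            if guess < number:
                print('Too low. Guess again')
            elif guess > number:
                print('Too high. Guess again')
            else:
                print('You won!!!')
                break
        rounds += 1
        answer = ask('Do you want to play again? (Enter y for yes): ')
    return rounds
```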
FileUpload and Download
FileUpload and Download Hello sir/madam,
I need simple code for file upload and download in JSP using SQL Server: the uploaded file should be stored in the database with its content, and while downloading it should save the file to another system on the LAN using JSP
uploading a file at another system in lan using jsp Thanks for the code at "... to save or upload the file at another system in lan at location "http
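Both requests above ask the same thing: store an uploaded file in a database from JSP/servlet code. A minimal JDBC sketch of the storing side (the table uploads(name, data) and the class name are invented for illustration, and obtaining the Connection is assumed to happen elsewhere):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class FileToDatabase {

    // Read the uploaded file's bytes so they can be bound as a BLOB.
    public static byte[] readFileBytes(Path file) throws IOException {
        return Files.readAllBytes(file);
    }

    // Hypothetical table: CREATE TABLE uploads (name VARCHAR(255), data BLOB)
    public static void insertFile(Connection conn, String name, byte[] data) throws Exception {
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO uploads (name, data) VALUES (?, ?)")) {
            ps.setString(1, name);
            ps.setBytes(2, data);
            ps.executeUpdate();
        }
    }
}
```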
uploading a file - JSP-Interview Questions
uploading a file uploading a file and storing in database using...;
public File currentDir;
public DropTarget target;
JButton addButton...,BorderLayout.SOUTH);
currentDir = new File(System.getProperty("user.dir"));
}
public
Uploading a Profile with image[file] into a sql database
Uploading a Profile with image[file] into a sql database I need to upload a file along with the some text into a database ......[ Similar to http... , but upload file to database ] ... Pls help
File uploading - JSP-Servlet
File uploading I am using file uploading code for multiple files and also for a single file, but I am getting the problem that no such file is found....
file uploading - JSP-Servlet
file uploading Hi, thanks a lot for your kind answer. Now my program... problem. Im not geeting the full output for the program. Even, the file... to solve the problem as well. Thanks in advance.
Input File
file uploading using jsp
file uploading using jsp below file uploading code has one error... = " + formDataLength);
//String file = new String(dataBytes);
//out.println("FileContents:" + file +"");
byte[] line = new byte[128];
if (totalBytesRead
uploading image in the form
/DisplayimageonJSPpageusingXML.shtml
To know about file uploading using Struts2 you may go through the link, given...uploading image in the form Hi All,
I am working to build a form... they logged into the screen.
I'm using an Oracle database (SQL queries to check
FILE UPLOADING - JSP-Servlet
FILE UPLOADING Hi ,
I want Simple program for file upload using html and servlet
please help me hi friend, please try this code
**********
try{
String type="";
String boundary="";
String sz
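The fragments above ("boundary", reading the request byte by byte) come from parsing the multipart body by hand. One small, self-contained piece of that job is pulling the file name out of the Content-Disposition header; a sketch (the class and method names are mine, not from the thread):

```java
public class MultipartUtil {
    // Pull the filename out of a multipart Content-Disposition header, e.g.
    //   Content-Disposition: form-data; name="file"; filename="report.pdf"
    // Returns null if no filename attribute is present.
    public static String extractFileName(String contentDisposition) {
        if (contentDisposition == null) return null;
        for (String part : contentDisposition.split(";")) {
            String token = part.trim();
            if (token.startsWith("filename=")) {
                String value = token.substring("filename=".length());
                return value.replace("\"", "");
            }
        }
        return null;
    }
}
```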
jsp with excel sheet data uploading into database
jsp with excel sheet data uploading into database how to upload data in excel sheet with jsp into oracle 10g database
Uploading a single file by using JSp
Uploading a single file by using JSp you have said about a submit button.. but in the program you have not used a submit button.. and where will the file be stored.. where should we specify the output folder name..
Visit Here
Uploading an image into the table - JSP-Servlet
Uploading an image into the table how to upload an image... number and database name. Here machine name id localhost
and database name... to database. */
connection = DriverManager.getConnection(connectionURL, "root", "root
Uploading a Software and storing in the database
Uploading a Software and storing in the database I want to upload a software(may be of maximum 20mb) through JSP, and store it in the database.
The coding present in the site for uploading and storing in the database
video uploading using jsp
video uploading using jsp how to upload a videos in web page using jsp
Hi,
You can upload your video with the help of JSP file upload... of JSP file upload.
Thanks
uploading problem
uploading problem i use glassfish server..
using netbeans for jsp... about file into database lib.
i use navicat Mysql ...
i use this code...
<... application to upload any file and save it to database.
1)page.jsp
<%@ page
Uploading a single file by using JSp
Uploading a single file by using JSp u have said about submit button..but in program u have not used submit button..and where file will be stored...;b>Choose the file To Upload:</b></td>
<td><INPUT NAME
File uploading - Ajax
File uploading hi friends,
how to uploading the file by using "AJAX".Please send the complete source code for this application
where u want to store the file
Can u specify Problem
File Uploading Problem I have a file uploading code but it create... fileName=$('#myFile').val();
alert(fileName);
$.ajax({
type: 'POST...("------------------------------------------");
FileUpload fup=new FileUpload();
boolean isMultipart
Uploading image using jsp
Uploading image using jsp how to upload image using jsp. Already i tried, But that image file does not read.
It returns only -1 without reading that image file ...
I want know that solution using by u...
Thanks,
P.S.N.
FileUpload and Download
FileUpload and Download Hello Sir/Madam,
I have used the below coding for upload and download of a file, but it is not stored in the database and it also does not download the file with its content... it just downloads a doc with 0 Bytes
code for jsp - Ajax
country.By using jsp and Ajax.
Hello Friend
I send the code you...; Hi friend
sorry the jsp page for another program
look this file...code for jsp please give code for using jsp page and Ajax.Retrive
Image uploading
The code related to uploading an image file to oracle database using java servlet...-Fileupload library classes
DiskFileItemFactory factory = new...)) {
System.out.println("sorry. No file uploaded");
return
uploading a text file into a database
uploading a text file into a database how to upload a text file into a database using jchooser
import java.io.*;
import java.sql.... {
static File file;
public static void main(String[] args) throws Exception
Uploading multiple files in JSP - JSP-Servlet
Uploading multiple files in JSP Hi,
I have this code in JSP for Uploading multiple files :
Samples : Simple Upload.../jsp/file_upload/uploadingMultipleFiles.shtml
Hope that it will be helpful
Uploading Multiple Files Using Jsp
Uploading Multiple Files Using Jsp
...*. These are the classes which
have been provided to us by the apache to help us in file uploading.... If there is a
request for uploading the file then it makes a object of DiskFileItemFactory
class
Jsp and ajax
Jsp and ajax Hi
I am using jsp and ajax.I am retrieving the entire content from database(al.jsp) and
put it into a textbox.When i am clicking a combo box,it updates the selected value in the database(jag.jsp
upload to database - JSP-Servlet
to upload a pdf file into database(sqlserver2000) using jsp. In roseindia some examples i seen that is only for uploading into the server but i need the uploaded file into database whenever
i want that uploaded pdf file i have to retrieve
file uploading
file uploading How to upload music files onto the server and save the file information to the mysql database fetch and play the music files on a display page without download thus streaming the files
Pass hidden veritable while uploading file - JSP-Servlet
are uploading file, all the parameter other then file upload are null. Is it write... fields while uploading file on server?"
Thanks in advance...Pass hidden veritable while uploading file Hi All,
Is there any
image uploading perminssion in server dear friend...
following is my uploading code and i want to save some records....
exception
org.apache.jasper.JasperException: Exception in JSP:
JSP and AJAX- very urgent - Ajax
JSP and AJAX- very urgent Respected Sir/Madam,
I am..." button, I have to get a table from the database where all the names starting... (Using AJAX. for ur reference, I have included the coding below:
Login.html
excel uploading in jsp
excel uploading in jsp could you provide the source code for:
1)have to upload an empty excel sheet at client side i.e if client clicks an excel... given and printing them in a jsp page.
could you please provide the code in spring
jsp-file
jsp-file i want to upload a file by browse button and the file should be save in ms access database.....how i can implement trough jsp plz help me sir
query related to uploading file
to save the uploading time and date in database.
please help me it is urgent...query related to uploading file hello friends
i am doing my project in servlet and i want to upload a file and store file in local hard drive
jsp and ajax
jsp and ajax how to enable or disable textbox using radio buttons by using jsp and ajax
jsp & ajax
jsp & ajax how to enable or disable textbox using radio buttons by using jsp and ajax?
plz help me.... i m new in jsp & ajax
Struts file uploading - Struts
Struts file uploading Hi all,
My application I am uploading... and append it in the database.
Ultimately all the file content should be stored in the database without breaking and user should be able to download the file
creating new file with File class - JSP-Interview Questions
creating new file with File class I have a file uploading concept in my project. For that, while creating a new file is it necessary to give... of FileOutputStream class. If we give the path as from the file uploading location will it work
To insert attachment file in database in JSP.
To insert attachment file in database in JSP. I am doing project in JSP. How to insert attachment file in mysql database? Please suggest some... code:
1)page.jsp:
<HTML>
<HEAD><TITLE>Display file upload form
ajax in java - JSP-Servlet
ajax in java The below code is for a php page but I want this in JSP...:
"login.php"
Hi friend,
Do some changes to convert in JSP...?username=" + username + "&password=" +password ;
2.Save the file "login.jsp
Uploading Single File by Using JSP
Uploading Single File by Using JSP
... to understand how you can upload a file by using the Jsp.
As Jsp is mainly used... whenever it will get a request for file uploading. To
upload the file
Combo Box Using Ajax In JSP
to
Select the Data from database using Ajax in combo box. We created two file... Combo Box Using Ajax In JSP
... id data come
from database corresponding this id and auto fill the Emp Id
Ajax using jsp
Ajax using jsp <%@ page import="java.io.*" %>
<%@ page...);
}
%>
Is there Any error...........In first Page I use ajax for displaying the data from database and edit that data...that edit data is used in the above
Ajax with jsp - Ajax
Ajax with jsp multiple combo boxes with ajax in jsp? Hi friend,
I am sending you a link. I hope that, this link will help you.
Please visit for more information.
ajax jsp - Ajax
ajax jsp multiple combo with ajax using jsp? Hi friend,
I am sending you a link. This link will help you. Please visit for more information.
Thanks
Selection With Ajax and JSP
Selection With Ajax and JSP I am working at a jsp page using ajax... the city.
I am doing it through two jsp pages only.
Country1.jsp :
<%@ page...);
}
catch(Exception e){
System.out.println(e);
}
%>
The database
DropDown in ajax+jsp
DropDown in ajax+jsp I have four dropdown if i select first dd then only corresponding values must be there in 2nd dd,same with 3 and 4... used 3 database tables:
1)country
CREATE TABLE `country
JSP Ajax Form
Ajax Form Example
This example explains you how to develop a form that populates dynamically
from the database though Ajax call.
When user selects employee id from the combo box then the corresponding
employee details are populated
File location in FileOutputStream - JSP-Servlet
File location in FileOutputStream Hai,
For uploading a file i used the FileOutputStream() method. And uploading works perfectly. This method... be used to store file object in a database column of data type "File". I need
Uploading and download pdf or .txt file.
Uploading and download pdf or .txt file. I want admin user to upload pdf file into database and the users can download those pdf format from database
Popup Window using Ajax In JSP
Popup Window using Ajax In JSP
... Window application
using Ajax in JSP. For this, we will create the following...;DeleteRow.jsp" file for deleting the row
from database using JDBC database
Java JSP - Ajax
form database
public ArrayList loadSearchListsFormDatabase
Ajax
Ajax how to impliment ajax in registration table using jsp-servlet
file upload and insert name into database - JSP-Servlet
file upload and insert name into database Hi, I just joined as a fresher and was given task to upload a file and insert its name into database... HIread for more information,
JSF-fileupload-ajax - Development process
JSF-fileupload-ajax
for the above code ,
iam able to bind the contractname & contractNo
but i am unable to bind the upload property in the bean
it is giving null value
Fileupload in servlet
Fileupload in servlet If we upload a file using servlet can it possible to put the uploaded file in any locationof the syatem(in D drive or in C...;nbsp;</td></tr>
<tr><td><b>Choose the file To Upload
fileupload - JDBC
fileupload HI I want help,I wrote the code for file upload inserting the file when retriving that file getting error please reply if any body knows
Uploading Single File by Using JSP
JSP
JSP Hi ,
I am working in JSP. In my project i have to generate my entire database records to pdf,excel,csv format , so which concept i have to use... the Eclipse IDE. This will help you in generating the report xml file.
Thanks
passing file parameter through ajax - Ajax
passing file parameter through ajax I have file parameter in jsp file, i need to pass it to server side through ajax. how i can i do that. Hi friend,
file1.jsp
passing file using jsp
uploding an file - JSP-Interview Questions
uploding an file i want a code for uploading a file and storing it in clog and blog using jsp
ex:uuploading file using binary input steam
Ajax how to include ajax in jsp page?
Hi,
Please read Ajax First Example - Print Date and Time example.
Instead of using PHP you can write your code in JSP.
Thanks | http://www.roseindia.net/tutorialhelp/comment/95299 | CC-MAIN-2013-48 | refinedweb | 2,324 | 63.8 |
Jeff Tu: Good afternoon, everyone.
I'd like to welcome you to part two of Advances in Networking, a continuation of the session from the past hour. My name is Jeff Tu, and I'll be taking you through the first topic. In this session we'll discuss new URLSession developer API and enhancements, networking best practices, and other important technology areas in networking.
Our first topic is new URLSession API. But before that, I'd like to review the underlying API we'll be talking about, which is URLSession.
URLSession is an easy-to-use API for networking introduced in iOS 7 and OS X Mavericks.
URLSession supports networking protocols like HTTP/2; HTTP/1.1; FTP; and custom streams with an overall emphasis on URL loading. If you provide it an HTTPS URL, it also automatically provides the encryption and decryption of data between you and the web server.
Last year we deprecated NSURLConnection API. So we encourage any new app development to occur with URLSession. For more information on URLSession, I encourage you to review past WWDC sessions and other online documentation.
Recall that there are different kinds of URLSession objects that you can create.
The basic object you can create is a default configuration URLSession object.
Default sessions have a behavior where a task either fetches a URL immediately; or if the device can't connect to the web server, fails immediately. URL loads can fail because the device isn't connected to the Internet or if the server you're trying to reach happens to be down. Those are just a couple of examples.
Background URLSession objects, on the other hand, don't have this immediate fetch or fail behavior but are scheduled out of process and continually monitored for network connectivity to the server. There are more examples of a URLSession task failing because of bad connectivity. You might have no Internet connection, you might be in a theatre with your device in Airplane Mode or it should be in Airplane Mode.
Perhaps you have a session object where you've disallowed cell usage but the user only has cell connectivity. Or the server might be temporarily down. Today, many apps work around these failures by checking SCNetworkReachability, polling every set period of time, or depending on the user to tap or drag to refresh the UI. The problem is that these approaches add complexity to your apps and aren't always effective.
SCNetworkReachability only tells you that you might be able to reach the server, not that you will. You, our developers, have been asking for an easier solution.
Wouldn't it be easier to say, then, "Please fetch me this resource when the network is available"? We're happy to tell you about a new feature that lets you do this. We call this the URLSession Adaptable Connectivity API.
This API is available now on all platforms. By opting into this API, you tell URLSession that, in the event the task would fail because of a lack of connectivity, it should wait for a connection to the server instead of failing. How do you opt in? There's a new boolean property called waitsForConnectivity. Set this to true, and then you get the new behavior. I'd like to repeat what this property does.
You go from the default behavior of load it now or fail now if I can't connect to load it now, but if I can't and would have failed because of a lack of connectivity, try again when I get a real chance to talk to the server. The API also waits when it encounters DNS failures as well since one network's DNS service might fail to resolve but another one may not. Please note that this boolean is a no op for background sessions, as background URLSession objects get this behavior automatically. We'll tell you later in this hour more about the differences. You may be wondering, "Can my code get a notification if it's in this waiting state?" You might want to have the app present other behavior while it's waiting to connect. For example, having an offline browsing mode or a mode that operates when the user is only on cell. If you would like to know when your app is in this waiting state, you can optionally implement the URLSession taskIsWaitingForConnectivity delegate method. Note that this delegate method is only called if you've opted into the waitsForConnectivity property with a true value.
If you've done this, the delegate method itself will be called only one time or not at all if the task never had to wait. We recommend that your apps always opt into the waitsForConnectivity property. This is because even when you opt in, the task will still try to run immediately. The task will only wait if it can't connect to the server. There are rare exceptions to opting into the property, though. For example, if you had a URLSession task whose purpose was to buy stock at market price, you'd want that to run now or fail now and not wait until you had an Internet connection. I'd also like to mention that when you opt into waitsForConnectivity, the timeout interval for request timer starts only after you've connected to the server. Timeout interval for resource, however, is always respected. Let's summarize how we would use the API and then go through a code example. The main thing is to opt into the waitsForConnectivity property. You would create and resume the URLSessionTask as before.
If the device can't connect to the server, we'd call a delegate callback if you implemented it and only once. All other URLSession methods are still called same as before.
Remember, though, that this API only has an effect for non-background sessions.
Let's go through some sample code. First create a session configuration object and make one for the default session type. Opt into the waitsForConnectivity property.
Create the session object and set the URL you want to load. Use the session object to create a task object. And finally, resume the task to get it started.
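Assembled into Swift, those steps might look like the sketch below. The URL is a placeholder, and the delegate callback is optional; note that taskIsWaitingForConnectivity is still delivered even when you use the completion-handler convenience method.

```swift
import Foundation

final class Loader: NSObject, URLSessionTaskDelegate {
    // Create the configuration and opt into adaptable connectivity.
    private lazy var session: URLSession = {
        let configuration = URLSessionConfiguration.default
        configuration.waitsForConnectivity = true
        return URLSession(configuration: configuration, delegate: self, delegateQueue: nil)
    }()

    func load() {
        // Placeholder URL; substitute your own resource.
        let url = URL(string: "https://www.example.com/data.json")!
        let task = session.dataTask(with: url) { data, response, error in
            // Final result: called once the load succeeds, or fails for a
            // reason other than a lack of connectivity.
        }
        task.resume()
    }

    // Called at most once, and only if the task actually had to wait.
    func urlSession(_ session: URLSession, taskIsWaitingForConnectivity task: URLSessionTask) {
        // A good place to surface an offline mode in your UI.
    }
}
```

The task still runs immediately when it can; the delegate callback only fires if it cannot connect.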
Even with adaptable connectivity, your request may still fail for other reasons.
For example, you could connect to the server, but a new data center employee might unplug a server, cause the network connection to drop, and all your apps on your phone might disappear. Or your device connects to the server and sends an HTTP request, but there's so much traffic that the request times out.
For situations like these, we'd like you to consult online resources that go into more detail on what you can do. Retrying network loads in a tight loop, though, is almost always a bad idea. You asked for a better way to load network resources.
Better than polling for network connectivity to the server and better than reachability API that won't guarantee a connection to the server. Let URLSession do the work for you. Opt into the waitsForConnectivity Adaptable Connectivity API. If you opt in, the request will still run immediately with no performance penalty and only wait if you can't connect to the server.
Once it can connect, your URLSession task behaves just like it did before.
Continuing our theme of what's new, I'd like to pass the mic to my colleague, Jeff Jenkins.
Jeff Jenkins: Thanks, Jeff. Well, good afternoon.
Hope you guys are having a great WWDC. I'm excited to be here and thrilled to talk to you a little bit more about some enhancements we've made to the URLSessionTask API. Now, first I want to spend a little bit of time talking about background URLSession. We haven't talked a whole lot about it, so let me give you a little bit of background on that. The background URLSession API allows your application to perform networking even if your application isn't running. We monitor the system conditions, CPU, battery, all sorts of things, to really find the right time to do your networking tasks.
Now, of course, if you implement various delegate methods, we're going to wake up your app and call those delegate callbacks so that you can handle that information.
And, of course, we're going to make sure your app is running when your task completes so that you can then process that data. Now, one of the great use cases for background URLSession is taking advantage of another feature on the system, which is background app refresh.
Now, what this really does is allows your application to have the most current, the freshest data, right? There's nothing more frustrating than pulling your device out, launching an app, and the first thing you're greeted with is some sort of spinner, right? You're waiting for this application to start pulling down data.
You want that data right away. You want to be able to get that data to your user so your user is excited and happy to use your app. Background app refresh is a way to do this. It's a way to tell the system that, "Hey, in the future I want to be able to be launched so that I can refresh my data so I have the most important information," maybe stock information, or weather forecast, other important things that your app does. Now, this applies to applications, as well as watchOS complications. And if you want to learn a little bit more in depth about background app refresh, you can go back to 2013 WWDC, as well as last year's 2016 WWDC and look at these sessions for more details. So let's look at background app refresh in action; what is it really doing? And to do that, we kind of need to look at the state of your application. We're interested in three states here: A running state, suspended state, or a background state. Now, with your app running, you're going to opt into background app refresh. You're going to say, "In the future, run my app, make sure my app runs so that I can get the latest information." And then your process could be suspended. And in the future your process is now running, your app is now going to be able to ask for new data.
And like good developers, this app is using URLSession API. In fact, it uses a background URLSession. It creates a URLSession task and schedules this task to run and grab the data that your application needs. Now, your process could go away at this point, but then at some point URLSession is going to run your task and it's going to run it to completion hopefully if everything goes well. And you're going to get the data. So we're going to background launch your application and allow you to process that completed task and process that data that we've fetched for you.
And then at some point the user is going to launch your app, it's going to come foreground, and boom, they've got the freshest data there. So this is great.
But we looked at this flow and said, "Hmm, maybe there's something we can do to help our developers improve their applications on our platforms." And we think we can do something for you. The first problem that we want to solve is we noticed there's an extra background launch that had to happen just for you to create the URLSession task.
And as we all know, anytime your process is launched, what does that do? It impacts battery life, requires CPU burden. So that's not necessarily great for the device if we're doing extraneous work, and we really don't need to be doing that.
The other problem we'd like to solve are stale network requests, right? You're asking URLSession to do work. And at some point in the future, that work is going to complete. Well, what happens between when you ask for the work to be done and when it actually got done? Maybe there was some change in context and that original request doesn't make sense anymore. So we need to give you an opportunity to really, if there's a context change, let us know about that and get rid of these stale network requests. Because there's nothing worse than getting data and going, "I can't do anything with it," and throw it away. And the last problem we think we could help you with is helping us know how to best schedule your URLSession tasks.
When is the most optimal, best time in the system to be able to run your task so that we can get your data in the most efficient way for you to display that so that your users are excited and delighted by that data? Let's look at what we did. We're introducing the URLSessionTask scheduling API. Now, this is available across all of our platforms.
It's available in the beta builds that you have received here at WWDC.
And we encourage you to take a deep look at this. Now, what we've done first is we provided a new property. This is a property on URLSessionTask object.
It is called earliestBeginDate. And what you're going to do here is provide a date to us in the future when you want your task to be eligible for running. I use that word eligible because it's important. It doesn't mean that this is the point in time when your task will run, it will do networking; it's just telling the system, "I would like my task to be eligible so that it can run." And we're still bound by system policies as to when we can make the networking happen for this task. It's only applicable to background URLSessions and tasks built off of background URLSession.
Let's take a look at how this property, in conjunction with other existing properties, really allows you to do some fine-grained scheduling. So you'll create a URLSessionTask and, of course, you'll call resume on it so that we know that this task can now be put into the queue so that work can happen. At this point the task will be in a waiting state. We're waiting for the earliestBeginDate to happen.
And as soon as that is hit, that task becomes eligible for running.
Now, you can use the existing timeoutIntervalForResource to really control how long your app is willing to wait for that resource to get loaded, right? You might set some amount of time to say, "After this point in time, this resource isn't interesting to me anymore." And that interval of time covers from resume to when that timeout happens based on the value you place in timeoutIntervalForResource. Now, I want to go back to the original background app refresh workflow that we looked at earlier.
Right? We noticed there was a couple of background launches that occurred.
But with this new API we're able to get rid of one of those. So the way that your app will work is while your running, you're going to create a URLSessionTask; you'll opt into our new scheduling API by setting an earliestBeginDate; then your process can go to sleep. We're going to complete the work for you.
And when that work is available, we're going to background launch you that one time and allow you to process the resulting data. And then when the user brings your app to the foreground, boom, it's got the freshest, most current data. And we've been able to solve that one problem of that additional background app launch.
And so it's better performing on the system, and we think that's great.
So that's problem number one solved. Let's look at problem number two, the stale network fetches. We want to give you an opportunity to alter future requests. So you might have given us a request, but the context might change. We've introduced a new delegate callback on a URLSessionTaskDelegate titled willBeginDelayedRequest. With this delegate, you'll be able to be called at the moment when your task is about to start networking.
So you've told us that the task is eligible, and the system has now decided yes, this is the right time to do the networking. We're going to call this delegate method, if you implement it, and allow you to make some decisions about this task. Now, this will only be called once, if you implement it, and only if you opt into the scheduling API by setting an earliestBeginDate.
And, again, this is only available on background URLSessions.
And as I mentioned, this is an optional delegate. And I want to take a second here to have you really think about this because it's important, this delegate method.
As with all delegate methods, they're opt-in. But this one will cause some interesting side effects that I'll show you in a minute. You really need to think about, "Can my application determine context, the viability of a request, in the future?" Now, there's a completion handler that's passed to this delegate method.
And you need to give a disposition to URLSession. You need to tell us does the original request, does it still make sense? Go ahead and proceed.
Or maybe the context has changed enough and you need to make some modifications, maybe a different URL or maybe a header value's different and you want to go ahead and modify that request at this point in time right before the networking happens. Or you might make the decision this request is just useless at this point, cancel. We don't want to do stale requests. So now if we go back to this workflow and we go back to my comment about really thinking about this delegate method, you will see that we're kind of back to that original workflow where there's two background launches in order to satisfy this URL task. Right? But we have to stop and think about that.
What is more expensive, performing a stale network load or a mere application background launch? It is way more expensive to the system to do stale loads, get all this data, and then decide I don't need it and pitch it. Okay? So we want you to really think about this new delegate method and whether your application has the ability to really understand the viability of your requests in the future. Hopefully that make sense to you. Now, the third problem we want to solve is how do we schedule your request in a most optimal, most intelligent way in our system? There's some information that in URLSession we just don't know about.
So we're providing a little bit of change to our API to allow you to explain to us some information about your requests and also about your responses. We're giving you two properties: the first one is countOfBytesClientExpectsToSend, and the second one is countOfBytesClientExpectsToReceive. We think you know more about your requests.
Maybe you have a stream body that you want to attach to a request, we don't know about that. You probably know the size of that.
We don't know about your servers and the size of data your servers' shipping back.
We believe you have some insight to that. And that will give us hints as how we can in a most optimal, intelligent way schedule your tasks.
If you don't know, well, then you can always specify NSURLSessionTransferSizeUnknown.
So that solves the third problem. Let's take a look at how this new API works in code. It's very easy to use. First thing we're going to do is create a URLSession background configuration. We're then going to create a session based on that configuration. Once we have that, we're going to now generate a URLRequest, specify the URL we want to go to, maybe set a header value, something that makes sense for your task. Again, this is just an example.
And now we're going to create a task that encapsulates that request on that session.
And we're going to opt into the new scheduling API by setting the earliestBeginDate property and give us a date. In this example we say two hours from now I want this task to be eligible to be run. And I'm also going to give some hints to URLSession and say, "This is a small request, there's no body, I've just set one header, maybe 80 bytes." And then my server probably is going to send about a 2K response to this.
And with all URLSession tasks, make sure you call resume. Now, how does the new delegate work? Well, we decided I know context. I can make some intelligent decisions about my networking tasks in the future. So I've implemented willBeginDelayedRequest. So in our example here what I've decided to do is to modify the request. I'm going to take the original request, create a new updatedRequest. I'm going to maybe change a value in the header that makes more sense now that this task is actually going to do some networking.
Time has passed, I have new information. I put that information on that task. And then I'm going to call the completionHandler and use a disposition of useNewRequest and passed it that new request. If you take a look at our header file, you can see other dispositions available to you in this completionHandler call.
So let me recap the scheduling API that we're introducing here. Background URLSession is an awesome API for doing networking that allows your application to not even be running and have this networking happen for you. Our new scheduling API will allow you to delay your requests so that they can, you know, obtain and pull down the freshest information for your application. And it's really we give you an opportunity to alter those things based on the context and the time at when the networking is actually going to happen.
The other part of this API change is to allow you to give hints to us so that we can be super intelligent and make these tasks run at the most optimal time on these devices.
Now, I'd like to turn the time over to Stuart Cheshire, an Apple distinguished engineer.
And thank you for your time. Stuart Cheshire: Thank you, Jeff. Now we're going to talk about enhancements in URLSession.
We have four things to cover, let's move through them. Often you want to show a progress bar to indicate to the users how progress is being made.
And right now this is a little bit cumbersome. There are four variables that you need to monitor with Key-value Observing. And the countOfBytesExpectedToReceive or Send is not always available. The good news now in iOS 11 is that URLSessionTask has adopted the ProgressReporting protocol. You can get a progress object from the URLSessionTask, and that gives you a variable fractionCompleted, which is a number in the range zero to one. You can also provide strings to give more detail about what the operation is. You can attach that progress object to a UIProgressView or an NSProgressIndicator to get an automatic progress bar. You can also combine multiple progress objects into a parent progress object when you're performing multiple tasks, such as downloading a file, decompressing a file, and then handling the data.
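For instance, a minimal sketch (iOS 11 or later; the UIProgressView is assumed to already exist in your UI):

```swift
import UIKit

// Drive a UIProgressView directly from a download task's Progress object.
func download(_ url: URL, reportingTo progressView: UIProgressView, using session: URLSession) {
    let task = session.downloadTask(with: url)
    // URLSessionTask adopts ProgressReporting, so a Progress object is available directly.
    progressView.observedProgress = task.progress
    task.resume()
}

// Combining several steps under one parent progress object:
func overallProgress(for task: URLSessionTask) -> Progress {
    let overall = Progress(totalUnitCount: 2)
    overall.addChild(task.progress, withPendingUnitCount: 1)
    // The second unit would be claimed by, for example, a decompression step.
    return overall
}
```

Because the binding is bidirectional, pausing the returned progress object also suspends the underlying task.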
So that makes your progress reporting much simpler. The binding between a URLSessionTask and the progress object is bidirectional. So if you suspend a URLSessionTask, that is the same as pausing the progress object. If you pause the progress object, that is the same as suspending the URLSessionTask. We now have support for the Brotli compression algorithm. In tests this compresses about 15% better than gzip, which results in faster network access. Like other new compression schemes, this is only used over encrypted connections to avoid confusing middle boxes that might not recognize this compression. Because Safari uses URLSession, that's also means Safari gets the benefit of this new Brotli compression algorithm.
And many major websites have already announced support for Brotli in their web service.
Our next topic is the Public Suffix List. The Public Suffix List is sometimes called the effective top-level domain list. And this is important for determining where administrative boundaries occur in the namespace of the Internet.
One thing we don't want to allow is for a website to set a cookie on the com domain, which is then accessible to any other dot com company. So you might be tempted to make a rule that you can't set cookies on top level domains, only on second level and lower.
But domains are named differently in different parts of the world.
In America, Apple.com and FileMaker.com are different companies.
But in Australia many, many companies are under com.au, and that doesn't make them all the same company. So the Public Suffix List is a file of rules and patterns that tells software how to judge where administrative boundaries occur.
This is used for partitioning cookies, and it's used by the URLSession APIs.
And if you use the HTTPCookieStorage APIs directly, it's supported there, too.
We used to update this in software updates, but now with the more rapid progress in creating top level domains, we've changed to doing this over the air.
We could push a new list every two weeks if we wanted to. URLSessionStreamTask is the API you would use if you just want a byte stream. If you're not doing HTTP-style GETs but, say, you want to write a mail client, URLSessionStreamTask gives you a simple byte stream. It supports upgrading to TLS with the STARTTLS option.
If you have existing code that is written using the old NSInputStream and NSOutputStream APIs, you can extract those objects from a URLSessionStreamTask to use your old code. But for any new code you're writing, we strongly recommend that you use the new native URLSessionStreamTask APIs. We announced this a couple of years ago at WWDC 2015. What we have new for you now is automatic navigation of authenticating proxies. If the proxy requires credentials, then we will automatically extract those from the keychain or prompt the user on your behalf.
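A sketch of the native byte-stream API, assuming a hypothetical IMAP server that supports STARTTLS (host, port, and the protocol exchange are placeholders):

```swift
import Foundation

let session = URLSession(configuration: .default)
// A plain byte stream to a hypothetical mail server.
let streamTask = session.streamTask(withHostName: "mail.example.com", port: 143)
streamTask.resume()

// Read the server greeting as raw bytes.
streamTask.readData(ofMinLength: 1, maxLength: 4096, timeout: 30) { data, atEOF, error in
    guard error == nil, let greeting = data else { return }
    print("Server greeting was \(greeting.count) bytes")

    // After negotiating STARTTLS at the protocol level, upgrade the connection to TLS.
    streamTask.write(Data("a1 STARTTLS\r\n".utf8), timeout: 30) { error in
        guard error == nil else { return }
        streamTask.startSecureConnection()  // everything from here on is encrypted
    }
}
```

A real client would, of course, parse the server's STARTTLS response before calling startSecureConnection().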
So we've covered the enhancements for URLSession, let's move on.
Thank you. Tips and hints that we've learned from our years helping developers. Number one rule: Don't use BSD Sockets. And by the same token, we encourage you not to embed libraries that are based on BSD Sockets. Because we do lots of work, as you've been hearing today, to provide benefits to your applications.
We provide Wi-Fi Assist so that your application succeeds instead of failing when Wi-Fi isn't working. We provide techniques to minimize CPU use and minimize battery use to give users longer battery life. We have the ability to do tasks in the background when your application isn't even running. And third-party libraries just can't do anything when they're not in memory and running. And a final bit of advice: Always try to use connect-by-name APIs as opposed to APIs where you resolve a name to an IP address and then connect to the address. We talked earlier about the requirement for IPv6 support.
And the reason that almost all of your apps worked perfectly is because when you use connect-by-name APIs, you don't get involved with the IP addresses.
And if you're not involved with the IP address, you don't need to care whether it's v4 or v6, it just works. Another question we often get is about the timeout values.
So I want to recap that. The timeoutIntervalForResource is the time limit for fetching the entire resource. By default, this is seven days. If the entire resource has not been fetched by that time, it will fail. timeoutIntervalForRequest is a timer that only starts once the transfer starts. Once it starts, if your transfer stalls and ceases making progress for that time-out value, that when that timer will fire. We have seen developers that take their old NSURLConnection code and convert it to the new URLSession code by mechanically making a URLSession for every old NSURLConnection they used to have. This is very inefficient and wasteful. For almost all of your apps what you want to have is just one URLSession, which can then have as many tasks as you want. The only time you would want more than one URLSession is when you have groups of different operations that have radically different requirements. And in that case you might create two different configuration objects and create two different URLSessions using those two configuration objects.
One example is private browsing in Safari where each private browsing window is its own separate URLSession so that it doesn't share cookies and other states with the other sessions.
Most apps can just have one statically-allocated URLSession, and that's fine.
But if you do allocate URLSessions dynamically, remember to clean up afterwards.
Either finish tasks and invalidate or invalidate and cancel.
But if you don't clean up, you'll leak memory. We get developers asking us about convenience methods and delegate callbacks. Delegate callbacks give you detailed step-by-step progress information on the state of your task.
The convenience methods, like the name suggests, are a quick and easy way of using the API.
With convenience methods you don't get all the intermediate delegate callbacks, you just get the final result reported to the completionHandler. Don't mix and match both on the same URLSession, pick one style and be consistent. If you're using the completionHandler, you will not get the delegate callbacks with two exceptions.
If networking is not currently available and the task is waiting for connectivity, you'll be notified of that in case you want to show some indication in your UI.
The other delegate method you may get notified is the didReceive AuthenticationChallenge. So here's a summary of the options available to you.
Doing URLSessionTasks in your process with waits for connectivity as we recommend, the task will start immediately if it can; or if it can't, it will start at the first possible opportunity. You also have the option of doing tasks in the background.
And you can do background discretionary tasks, which will wait until the best time in terms of battery power and Wi-Fi networking. Now I have a couple of ongoing developments to talk about. I'm sure many people in this room have heard about TLS 1.3.
TLS, Transport Layer Security is the protocol that encrypts your data on the network to prevent eavesdroppers from seeing it and perhaps as importantly to make sure that you have connected to the server you intended to connect to. TLS 1.2 is very old at this stage. It has a number of problems that have been discovered.
And TLS 1.3 is almost finished. That standard is not quite finalized.
Apple is participating in that IETF working group, and we expect that to be finished by the end of this year. In the meantime, we do have a draft implementation if you want to experiment with it right now. And if you check out the security session from this Apple Developer Conference, you can learn how to experiment with that.
Another thing you may have heard of is QUIC. QUIC is a new transport protocol designed to experiment with new ideas that are hard to do with TCP.
QUIC started out as an experiment by some Google engineers, and it was a very successful experiment. They learned a lot. Some ideas were good, some turned out not to work as well as they hoped. Those engineers have taken those lessons they learned to the IETF. We have formed a new working group to develop the IETF standard QUIC protocol. Apple is also participating in that working group. That is not nearly as far long as TLS is, but that is also making good progress. Before we finish, one other thing we should talk about, Bonjour. Fifteen year ago at this very convention center in San Jose, Steve Jobs announced Bonjour to the world. And I got the opportunity to tell you all how it worked. A lot has happened since then. Since we launched it in 2004, we brought out Bonjour for Windows, for Linux. We had Java APIs.
The next year Mac OS X 10.4 introduced wide-area Bonjour to complement the local multicast-based Bonjour that was in the original Mac OS 10.2 launch.
The same year the Linux community came out with a completely independent GPL-licensed implementation of Bonjour called Avahi. A couple of years after that, Apple shipped Back to My Mac, which is built on the wide-area Bonjour capabilities introduced in 10.4. And in 2009 we brought out the Bonjour Sleep Proxy, which let you get Back to Your Mac at Home, even when it was asleep to save power.
In the years since then, Android adopted with Bonjour with their own native APIs in 2012.
That was in API Level 16 for those of you paying attention. And a couple of years ago, Windows 10 added their own native Bonjour support. Now, I know a lot of people in this room are well aware of the history. We know about the major OS vendors adopting Bonjour. But something else happened that surprised even me: Bonjour started showing up in a lot of other places. And I want to illustrate this with just a little personal anecdote. I recently bought a new house.
And as part of the process of buying a new house, you often end up buying a bunch of new stuff. And I started adding things to my home and connecting things to the network. And I started finding a bunch of stuff showing up in Bonjour.
Now, I bought a new printer; it had Bonjour. I bought some Access Network security cameras, they had Bonjour. That didn't surprise me because we know printers and network cameras were among the first devices to adopt Bonjour.
But then I got a surround sound amplifier and it had Wi-Fi, and it had an embedded web server with Bonjour. Now, you can set up the amplifier with the TV and the remote control, but naming the inputs with up, down, left, right on the remote control one character at a time is really tedious. Being able to do this on my laptop or on my 27-inch iMac with a keyboard and a mouse is such a nicer way to set up a new piece of equipment. I bought another amplifier from different company, it also had Bonjour. I got solar panels on the roof of the house to save on the electricity bill, the inverter has Wi-Fi with an embedded web server advertised with Bonjour.
So now with one click, I can see a graph of how much power I've produced in the day.
My most recent purchase was an irrigation controller to control the sprinklers that water my lawn. It has Wi-Fi with an embedded web server advertised with Bonjour. Compared to trying to program your garden sprinkles with a two-digital LCD display and the plus minus buttons, this is such a glorious experience to see it all on my big iMac screen at the same time. So thank you to all you device makers who are making these wonderful products.
For the app developers in the room, how does this affect you? The IETF DNS Service Discovery Working Group continues to make progress. We have new enhancements to do serve discovery on enterprise networks where multicast is not efficient and on new mesh network technologies that like Thread that don't support multicast well.
The good news for app developers is this is all completely transparent to your apps.
The APIs haven't changed because we anticipated these things even 15 years ago.
The only thing to remember is when you do a browse call and you get back a name, type, and domain, pay attention to all three. You may be used to see the domain always being local, but now it may not be local. So when you call resolve, make sure to pass the name, type, and domain you got from the browse call.
And for the device makers out there, don't forget to support link-local addressing.
Link-local addressing is the most reliable way to get to a device on the local network because if you can't configure it, you can't misconfigure it. So to wrap up, in part one we talked about ongoing progress in ECN.
It's now supported in clients and servers, the stage is set. Any ISP can now see an immediate benefit for their customers by turning on ECN at the key bottleneck links.
Continue testing your apps on NAT64. Mostly known use there, we're very happy everything is going smoothly. We have a move to user space networking, which also doesn't change the APIs. But you may notice when you're debugging and looking at stack traces, you may see symbols in the stack trace you're not used to. You may see differences in CPU usage.
We wanted to you to be aware of that so it didn't surprise you. We have new capabilities in the network extension framework. And the big news, we have multipath TCP as used by Siri now available for your apps to use as well. Thank you.
In part two, we covered some enhancements in URLSession, especially the waitsForConnectivity, which is really networking APIs done the way they always should have done. When you ask us to do something, we should just do it, not bother you with silly error messages that it can't be done right now.
You ask us, we will do it when we can. I gave some tips about best practices and news about ongoing developments. You can get more information about this session on the web. We have some other sessions we recommend you hear that you'll probably find interesting. Thank you.
Looking for something specific? Enter a topic above and jump straight to the good stuff.
An error occurred when submitting your query. Please check your Internet connection and try again. | https://developer.apple.com/videos/play/wwdc2017/709 | CC-MAIN-2020-05 | refinedweb | 6,597 | 72.26 |
.
Check your Python version. It must be at least version 2.6 (Python version 3.x is supported as well). To check your python version:
shell> python -V python 2.6.63222e0f096e80000> Getting value for key demo_key...
ValueResult<RC=0x0, Key=demo_key, Value=u'demo_value', CAS=0x3222e0f096e80000,
Flags=0x0> Creating new key demo_key with value 'new_value' This will fail as
'demo_key' already exists <Key=u'demo_key', RC=0xC[Key exists (with a different
CAS value)], Operational Error, Results=1, C Source=(src/multiresult.c,147)>
Replacing existing key demo_key with new value... result Getting new value for
key demo_key... ValueResult<RC=0x0, Key=demo_key, Value=u'new value',
CAS=0xbff8f2f096e80000, Flags=0x0> Deleting key demo_key...
OperationResult<RC=0x0, Key=demo_key, CAS=0xc0f8f2f096e80000> Getting value for
key demo_key. This will fail as it has been deleted <Key=u'demo_key', RC=0xD[No
such key], Operational Error, Results=1, C Source=(src/multiresult.c,147)>
Creating new key demo_key with value 'added_value'... OperationResult<RC=0x0,
Key=demo_key, CAS=0x366a05f196e80000> Getting the new value...
ValueResult<RC=0x0, Key=demo_key, Value=u'added_value', CAS=0x366a05f196e80000,
Flags=0x0> that
'beersample-python' #...>start. Select. import Couchbase from couchbase.exceptions import KeyExistsError, NotFoundError from couchbase.views.iterator import RowProcessor from couchbase.views.params import UNSPEC, Query
Then, we want to set some constants for our application:
DATABASE = 'beer-sample' HOST = 'localhost'.
Then, define a function to give us a database connection:
def connect_db(): return Couchbase.connect( bucket=app.config['DATABASE'], host=app.config['HOST']). Java Beer-Sample App"> <meta name="author" content="Couchbase, Inc. 2013"> "> {%
raisew an exception (Flask) layer to return the results.
Before we implement the Python-level.
The tutorial presents an easy approach to start a web application with Couchbase Server as the underlying data source. If you want to dig a little bit deeper, the full source code in the couchbaselabs repository on GitHub has more code to learn from. This code might be extended and updated from time to time. 325, in get return
_Base.get(self, key, ttl, quiet) File "/usr/lib/python2.7/json/__init__.py",
line 326,), Unfortunately, the SDK has no
way to pre-emptively determine whether the existing value is a string, and the
server does not enforce this..
This covers advanced topics and builds on the Using the APIs section. can write a custom transcoder that allows Zlib compression. Here’s a snippet:
import zlib from couchbase.transcoder import Transcoder from couchbase import FMT_MASK # (format & FMT_ZLIB): value = zlib.decompress(value) format &=_all() operation as part
of their test scripts. Be aware of the scripts run by your testing tools and
avoid triggering these test cases/operations unless you are certain they are
being performed on your sample/test database.
Inadvertent use of
flush_all() on production databases, or other data stores
you intend to use will result in permanent loss of data. Moreover the operation
as applied to a large data store will take many hours to remove persisted. | http://docs.couchbase.com/couchbase-sdk-python-1.0/ | CC-MAIN-2016-26 | refinedweb | 492 | 51.85 |
Issues
ZF-3995: .
Posted by Matthew Weier O'Phinney (matthew) on 2008-08-22T14:54:57.000+0000
This is definitely a documentation issue, and I'm scheduling to fix it for RC3.
Posted by Thomas Weidner (thomas) on 2008-08-24T14:58:29.000+0000
Assigned right component
Posted by Matthew Weier O'Phinney (matthew) on 2008-08-24T15:47:00.000+0000
Fixed in trunk and 1.6 release branch
Posted by Wil Sinclair (wil) on 2008-09-02T10:39:20.000+0000
Updating for the 1.6.0 release.
Posted by Ota Mares (ota) on 2008-09-12T04:05:30.000+0000
The method still makes no sense in the final 1.6.0 release.
Posted by Ota Mares (ota) on 2008-09-12T04:10:09.000+0000
Reopened because the method still makes no sense in the final 1.6.0 release. The description of the bugreport still applys.
First of what the hell is $context? Where does it come from? And why should it have input and id keys? And as reported the value of the $value parameter will be overwritten by the $context parameter input key entry.
Posted by Matthew Weier O'Phinney (matthew) on 2008-09-12T05:47:25.000+0000
Validators only need a value, but can also take an optional $context parameter; typically, this will be the set of values being validated, such as $_POST or $_GET. In Zend_Form, we pass the entire set of values being validated in the form to the $context parameter.
$context is used to provide, well, context to the validator. In the case of a captcha, there are usually multiple values in the dataset that are used to identify and validate it: the "id" field is used so that Zend_Captcha knows which session namespace to look for the token in, and the "input" field is the actual user input that is being tested.
While the logic may make no sense to you, it makes sense to those who have developed it, and, more importantly, it simply works.
Closing the ticket again. Please do not re-open.
Posted by Ota Mares (ota) on 2008-09-12T06:02:29.000+0000
Sorry but are you kidding me? There are people who do not use Zend_Form at all.
Did you have ever looked at the method? You have to provide a context parameter, else the method tells you that it is missing the input or id key and the validation fails. So when you have NO context it is not possible to validate the input.
Besides that why do you have to provide the first parameter $value if it gets overwritten in any case by the value of the context parameter, see line 331 of Zend_Captcha_Word.
So, please make the method usable without the use of Zend_Form and its Zend_Captcha Element.
Posted by Benjamin Eberlei (beberlei) on 2008-09-12T06:06:54.000+0000
in line 330 the content of $value is always overwritten by the context. you cant do anything about it :-)
Posted by Benjamin Eberlei (beberlei) on 2008-09-12T06:11:07.000+0000
additionally $context is a mandatory parameter, if its not set the function returns false, line 326 to 329.
Posted by Matthew Weier O'Phinney (matthew) on 2008-09-12T10:33:47.000+0000
I think I understand the issue.
The solution is to assume the value provided is an array, and contains both id and input elements within; that way, $context is not necessary.
Scheduling for next mini release (which, due to code freeze for 1.6.1, means 1.6.2).
Please note: this is NOT a show stopper. You can simply pass the context array when not using Zend_Form.
Posted by Ota Mares (ota) on 2008-09-12T11:14:04.000+0000
{quote}The solution is to assume the value provided is an array, and contains both id and input elements within; that way, $context is not necessary.{quote} Why not simply check if the $context is null and skip the checks because they are not needed when not using Zend_Form? Beside that why not even remove these checks completly and move them to Zend_Form.
{quote}Please note: this is NOT a show stopper. You can simply pass the context array when not using Zend_Form.{quote} How is this not a "showstopper"? Its nowhere documentated and it says nowhere how that array should be nested with what elements. Besides that the method looks unlogical in the first moment when you do not know about the senseless relation to Zend_Form.
Passing the context array to the method is in no way logic. I guess normal user will fall into dispair when trying to use Zend_Image_Captcha.
Posted by Matthew Weier O'Phinney (matthew) on 2008-11-24T09:29:39.000+0000
isValid() updated in r12803 in trunk and r12805 in 1.7 release branch. | http://framework.zend.com/issues/browse/ZF-3995?focusedCommentId=23601&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-49 | refinedweb | 811 | 75.81 |
Google Sign in with XServer, iOS + Swift
Published on Jan 22, 2021
Learn how to build a screen where you can sign in with Google on iOS, in Swift language.
Create OAuth Credentials
Sign in to your Google Cloud Console and click the + CREATE CREDENTIALS button. Select the OAuth Client ID option.
On the next page:
- Select the iOS Application type
- Give a name to your OAuth client
- Paste the Bundle Identifier of your app - the one in the General tab in Xcode
- Click the CREATE button
You will get a pop with the just created
OAuth client ID. Ignore it, close the popup and click on your OAuth client's row to enter its details.
Keep this page open and switch to Xcode.
Configure the Xcode project
Assuming you already have created a new Xcode project, a Database on xserver.app, and included the XServer's SDK files in your project - as explained in this section of the Documentation - you have to create a ViewController in the Storyboard with a button that will allow users to sign in with Google.
Something like this:
Connect that button to your
.swift file and call its IBAction as
googleButton.
Download the Google Sign In framework here and drag these 3 files inside your project - in the left panel which hosts the list of files:
Now enter the
XServerSDK.swiftfile and add this variable right below the imports:
let GOOGLE_SIGN_IN_CLIENT_ID = ""
Go back to your Google Cloud Console page, copy the Client ID and paste it between the
"" of the above variable:
let GOOGLE_SIGN_IN_CLIENT_ID = "191234567890-iparcjjt2reg3kqsri85vgk.apps.googleusercontent.com"
Lastly:
- Copy the iOS URL scheme string form your Google Cloud Console
- Open the Info tab in Xcode.
- Expand the URL Types section.
- Paste the copied URL scheme into the URL schemes field.
Make it happen
Now, let's code!
In your
.swiftfile, start by importing these frameworks:
import AuthenticationServices import GoogleSignIn
Add the
GIDSignInDelegate to your class declaration:
class ViewController: UIViewController, GIDSignInDelegate { ... }
Right below it, declare the following variable:
var googleSignIn = GIDSignIn.sharedInstance()
Inside the
googleButton()IBAction function, paste this code:
googleSignIn?.presentingViewController = self googleSignIn?.clientID = GOOGLE_SIGN_IN_CLIENT_ID googleSignIn?.delegate = self googleSignIn?.signIn()
Use this function to make the magic and lat users sign in with Google:
func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) { guard let user = user else { print("Google login cancelled.") return } // Get user's data let password = user.userID ?? "" // let authToken = user.authentication.idToken ?? "" let firstName = user.profile.givenName ?? "" let lastName = user.profile.familyName ?? "" let email = user.profile.email ?? password + "@google.com" let profilePicURL = user.profile.imageURL(withDimension: 150)?.absoluteString ?? DATABASE_PATH + "assets/img/default_avatar.png" // If the Google user doesn't have a profile photo, the XServer's default one will be used let username = firstName.lowercased() + lastName.lowercased() // Call the Sign Up function from the XServer SDK self.XSSignUp(username: username, password: password, email: email, signInWith: "google") { (results, e) in if error == nil { let resultsArr = results!.components(separatedBy: "-") let uID = resultsArr[0] let isSignInWithAppleGoogle = resultsArr[1] // Go back if isSignInWithAppleGoogle == "true" { print("User previously signed up, so now it's signed back in!") } else { DispatchQueue.main.asyncAfter(deadline: .now() + 1, execute: { // Add additional data let params = [ self.param(columnName: "tableName", value: "Users"), self.param(columnName: "ID_id", value: uID), self.param(columnName: "FL_profilePic", value: profilePicURL), ] self.XSObject(params) { (e, obj) in if e == nil { print("Google user just signed up!") // error } else { DispatchQueue.main.async { print(e!) }}}// ./ XSObject }) } // ./ If } else { DispatchQueue.main.async { print(e!) } }}// ./ XSSignUp() } func sign(_ signIn: GIDSignIn!, didDisconnectWith user: GIDGoogleUser!, withError error: Error!) { // Call your backend server to delete user's info after they requested to delete their account }
Remember to create a Column of type File and name it as
profilePicin your database, so the SDK will store the Google profile photo in your database after launching the app with the above code!
Done, now you and other users can sign up or sign in to your app with their Google account.
If you want more info about the XServer backend, just visit the official website and read the Documentation. | https://xscoder.hashnode.dev/google-sign-in-with-xserver-ios-swift?guid=none&deviceId=a72eb644-b056-4306-9f73-b33ebb99633b | CC-MAIN-2021-17 | refinedweb | 677 | 58.89 |
Part:
Task 1: got 60 bytes of poetry from 127.0.0.1:10000
Task 2: got 10 bytes of poetry from 127.0.0.1:10001
Task 3: got 10 bytes of poetry from 127.0.0.1:10002
Task 1: got 30 bytes of poetry from 127.0.0.1:10000
Task.
Note that the socket is put into non-blocking mode after connecting:
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.connect(address)
self.sock.setblocking(0)
Eventually we’ll get to a level of abstraction where we aren’t working with sockets at all, but for now we still need to. After creating the network connection, a PoetrySocket passes itself to the reactor via the addReader method:
# tell the Twisted reactor to monitor this socket for reading
from twisted.internet import reactor
reactor.addReader(self)
A quick note on terminology: with zope.interface we say that a class implements an interface and instances of that class provide the interface (assuming it is the instances upon which we invoke the methods defined by the interface). We will try to stick to that terminology in our discussion.
Skip down the twisted.internet.interfaces source code until you come to the definition of the addReader method. It is declared in the IReactorFDSet Interface and should look something like this:
def addReader(reader):
    """
    I add reader to the set of file descriptors to get read events for.

    @param reader: An L{IReadDescriptor} provider that will be checked for
        read events until it is removed from the reactor with
        L{removeReader}.

    @return: C{None}.
    """
IReactorFDSet is one of the Interfaces that Twisted reactors provide.
Note 1: Technically, IReactorFDSet would only be provided by reactors that support waiting on file descriptors. As far as I know, that currently includes all available reactors.
Note 2: It is possible to use Interfaces for more than documentation. The zope.interface module allows you to explicitly declare that a class implements one or more interfaces, and comes with mechanisms to examine these declarations at run-time. Also supported is the concept of adaptation, the ability to dynamically provide a given interface for an object that might not support that interface directly. But we’re not going to delve into these more advanced use cases.
Note:
class IReadDescriptor(IFileDescriptor):

    def doRead():
        """
        Some data is available for reading on your descriptor.
        """
And you will find an implementation of doRead on our PoetrySocket class. Grouping related callbacks as methods on an object that provides a given Interface allows us to pass a set of related callbacks (the methods defined by the Interface) with a single argument. It also lets the callbacks communicate with each other through shared state stored on the object.
So what other callbacks are provided on PoetrySocket objects? Notice that IReadDescriptor is a sub-class of IFileDescriptor. That means any object that provides IReadDescriptor must also provide IFileDescriptor. And if you do some more scrolling, you will find:
class IFileDescriptor(ILoggingContext):
    """
    A file descriptor.
    """

    def fileno():
        ...

    def connectionLost(reason):
        ...
I left out the docstrings above, but the purpose of these callbacks is fairly clear from the names: fileno should return the file descriptor we want to monitor, and connectionLost is called when the connection is closed. And you can see our PoetrySocket objects provide those methods as well.
Finally, IFileDescriptor inherits from ILoggingContext. I won’t bother to show it here, but that’s why we need to include the logPrefix callback. You can find the details in the interfaces module.
Note two things about the broken client:
- The broken client doesn’t bother to make the socket non-blocking.
- The doRead callback just keeps reading bytes (and possibly blocking) until the socket is closed.
Now try running the broken client against the same servers as before.
- Fix the client so that a failure to connect to a server does not crash the program.
- Use callLater to make the client timeout if a poem hasn’t finished after a given interval. Read about the return value of callLater so you can cancel the timeout if the poem finishes on time.
78 thoughts on “An Introduction to Asynchronous Programming and Twisted”
Hello!
Thanks for great tutorial!
I just don’t get how to solve your second suggested exercise. Would you please explain how to do it? Why should I call my function with callLater? I’d like to call it now and if it hasn’t finished in some period of time, then it should be canceled (like socket timeout). But callLater will call my function later and not now.
Thanks!
Hey Petr, you’re right. Just calling callLater on the function itself won’t work. Here’s the idea:
1. Invoke your operation as you normally would. In this case that means creating the PoetrySocket objects.
2. Invoke callLater on another function whose job it is to cancel the first one, if the first one hasn’t already finished. What the second function actually does is going to be context-specific. For a PoetrySocket object, that probably means unregistering itself from the reactor and closing the raw socket. Does that make sense?
Dave, thanks so much for clarification. Looks like callLater with possibility of canceling is a powerful tool!
Yes, canceling asynchronous operations is often convenient. With the release of version 10.1.0, Twisted added some features for doing that. I’ll be discussing them in Part 19.
This is a great article on Interfaces, the links to it are pretty helpful and I did all suggested exercises!
Thanks for another good article on Twisted.
Hi Dave and thanks for the great tutorial on Twisted!
I think there is a problem in twisted-client-1 / get-poetry.py, doRead lines 89-95:
bytes = ''
while True:
    try:
        bytes += self.sock.recv(1024)
        if not bytes:
            break
This will loop forever since bytes will contain the first 1024 bytes of the poem
at the first iteration of the while loop and bytes is not empty after that.
Maybe this is better, it reads the poem with 4 iterations of the while loop.
bytesread = self.sock.recv(1024)
if bytesread:
    print "in doRead:while bytesread:%s, length:%d" % (bytesread, len(bytesread))
    bytes += bytesread
if not bytesread:
    break
Hey Coruja, did you actually run the client?
The client works as is because we set the socket to non-blocking mode, and eventually we get an exception because there are no more bytes to read and then we break out of the loop. And it takes a lot more recv() calls to get a poem from the slow server because the bytes only come in a few at a time. The 1024 is an upper limit on the number of bytes to read, not necessarily how many you actually get.
Hi Coruja, I think I might have accidentally deleted one of your messages that got marked as spam. I was cleaning out my wordpress spam queue and thought I saw your name there. But my finger was already pressing the delete key
Hi Dave,
I did not post more messages after the previous one.
Of course i have run the client twisted-client-1/get-poetry.py on both the slowpoetry.py and the fastpoetry.py and this client hangs forever for me. I can’t show you any output since there are no print statements within the while loop, I can just contemplate my CPU usage toping at 100%. I am using Twisted 10.0.0 on Ubuntu 10.04
Coruja, my apologies, you are absolutely right. I’ll fix the code right now, and thank you for correcting me
great catch , as long as the delay on the server side is not 0, client will run into an infinite loop
Hi Dave,
Once again Windows throws an error with the parsed address in the client. Line 40 in the twisted client needs to be changed from
if ':' not in addr:
    host = ''
to
if ':' not in addr:
    host = '127.0.0.1'
I suspect that this problem is universal for all the code using the parse_args() function.
Dave,
Just been through my copy of the code with find and replace, and now have a version of all clients which should work. Would you like me to send this to you and save yourself five mins, and if how?
Hi Thomas, I would appreciate that. Do you have ‘git’ installed, are you able to generate a patch?
That would be best for me. But if not, you can just send me a zip file with your code.
thanks,
dave
Unfortunately I don’t have git. Using windows restricts my ability to use git quite a lot. Where should I send the zip file???
I have use a ported version of diff to produce a patch file using unified diff.
Patch can be found in my dropbox here:
Thanks, I’ll apply as soon as I can.
Applied!
Just want to quickly point out the the LISP link is currently broken. Correct link is
Just a small matter of incorrect case.
Fixed! I think either case actually works. My URL had an extra ‘http://’ prefix (it’s an artifact of the wordpress link editor UI that often gets me).
Woops
the first thing I noticed was the case difference and so I assumed that, despite knowing that wikipedia is case insensitive. One day I’ll slow down and think more
The blocking client can be fixed fairly easily. The only reason it “blocked” and read in each poem in its entirety is because of the “while True:” loop in the doRead.
If the doRead function was changed to the following then it works:
def doRead(self):
    poem = ''
    bytes = self.sock.recv(1024)
    msg = 'Task %d: got %d bytes of poetry from %s'
    print msg % (self.task_num, len(bytes), self.format_addr())
    if not bytes:
        return main.CONNECTION_DONE
    else:
        self.poem += bytes
I thought this may cause problems if the data received was larger than 1024 but I made my blocking server send the whole poem (using ecstacy) and the doRead was called until all the data was received/read in.
I don’t know if there is a benefit either way using blocking or non-blocking sockets in this case. The select/reactor takes care of guaranteeing that there will be data available when the doRead is called so there will be no wait either way.
There may be a benefit of removing the while loop from the non-blocking case also. If there was lots of data coming from several sources the loop could cause data to be continually read from one source until the “flood” of data stopped/slowed. With the loop removed then each source gets one read before moving to the next, this allows all sources to perform their reads even if one is being overwhelmed (I am assuming a fair servicing of each source by the reactor).
Hey Erik, that’s a good point about the while loop. Twisted’s select loop means that doRead is only ever called when there is at least some data to read so the first call to recv() should never block. Using setblocking(0) is probably a good idea to be safe, though, as it might work differently on other platforms. Limiting the amount of data to read in one ‘gulp’ to prevent starving other sockets is also a good idea, and is how Twisted’s actual socket code works.
Phew, that last one was tough! Again, excellent guidance. 😀
Thank you, sir!
Small point, but when reading stuff for the first time I’m very literal!
class PoetrySocket(object):
poem = ”
This is a class variable, which must be a mistake? When you later go:
self.poem += bytes
You’re assigning self.poem, not touching the previously defined class variable of the same name. (Although it uses that for the initialisation – it evaluates to self.poem = PoetrySocket.poem + bytes)
So probably the initialisation should be moved from class scope to __init__()
Hey Doug, I guess it’s debatable style, but using class variables to provide fixed
defaults for instance variables is not an uncommon practice.
You will find the same pattern in the Twisted source code itself. You are, of course,
free to avoid doing so yourself
Hey – thanks for the quick reply!
Um.. yeh I’ve just noticed it’s everywhere. I personally don’t like it – it’s misleading, it duplicates names into the class, and you have to look in 2 places to see what is ‘supposed’ to be an instance initialisation. But at least I know to expect it
Thanks again. Enjoying the series!
Especially something like this in twisted-client-2/get-poetry.py
class PoetryClientFactory(ClientFactory):
task_num = 1
protocol = PoetryProtocol # tell base class what proto to build
One of them is a true class variable, and is used by the superclass. The other is meant to be an instance default. Anyway – all good.
—————————–
I tried the eric’s solution of removing the outer ‘while’ loop, but change the receiving bytes to 4. Below is a snippet of the modified code:
————–snippet of func: doRead in get-poetry.py (modified) ——————————————————-
bytes = ”
# while True:
try:
bytesread = self.sock.recv(4) # changes to 4
# if not bytesread:
# break
# else:
bytes += bytesread
except socket.error, e:
if e.args[0] == errno.EWOULDBLOCK:
# break
pass
return main.CONNECTION_LOST
—————————————————————————–
Then I run the code, It results good.
(‘python twisted-client-1/get-poetry.py 10000 10001 10002′ ) (Servers are the same as this article(part4) mentioned)
——————————– snippet of output ————————
Task 1: got 2 bytes of poetry from 127.0.0.1:10000
Task 1: got 4 bytes of poetry from 127.0.0.1:10000 # get 1st 4 B
Task 1: got 4 bytes of poetry from 127.0.0.1:10000 # get 2nd 4B
Task 1: got 4 bytes of poetry from 127.0.0.1:10000 # … 3rd
Task 1: got 4 bytes of poetry from 127.0.0.1:10000 # 4th
Task 1: got 4 bytes of poetry from 127.0.0.1:10000 # 5th
Task 1: got 4 bytes of poetry from 127.0.0.1:10000 # 6th
Task 1: got 4 bytes of poetry from 127.0.0.1:10000 # 7th
Task 1: got 2 bytes of poetry from 127.0.0.1:10000 # get last 2B. 4B * 7 + 2B = 30B (server;10000 sends 30B a time) 3 bytes of poetry from 127.0.0.1:10000
Task 1 finished
Task 1: 3003 bytes of poetry
Task 2: 615 bytes of poetry
Task 3: 653 bytes of poetry
Got 3 poems in 0:00:10.126736
———————————————————————-
My conclusion:
server:10000 sends 30 bytes per time, and server:10001 and server:10002 send 10 bytes per time, they all send more than receiver of getting 4 bytes per time.
The only explanation I can think of is: once there is data staying on the buffer to read(no matter the data is fresh or stale), the reactor will call our doRead function until it consumes the entire buffer. ( note: 1. the ‘stale data’ here, I mean, is the data in the buffer which is not yet touched by the receiver because receiving speed is less than the sending speed from server. 2. We have already removed while loop from the doRead function, so the guarantee of consuming data in ‘read buffer’ is relied on reactor).
for example(‘*’ means a byte in buffer, ‘|’ means the reactor calls the doRead function to receive data.
client’s buffer: 30 bytes received from server
client gets 4B from buffer every time which is invoked by reactor)
|****|****|****|****|****|****|****|**
. 4 . 4 . 4 . 4 . 4 . 4 . 4 . 2
Am I right ?
Yes, I think you’ve got it.
Hi Dave,
I finished Exercise 1 and 2 (I think) and my solutions (relevant functions) are posted here:
Question: Why not use setTimeout on our sockets instead of using callLater?
Hey Lauren, it’s looking good! One thing, I had intended the ‘not crash’
portion of Part 1 to mean that it would still download from any servers
that were actually working, even if one (or more) were not.
To answer your second question, the
setTimeoutsocket method
only makes sense for blocking sockets. It in effect declares that you are
only willing to block while reading or writing to a socket for so long.
But in Twisted, or any other asynchronous I/O system, you never block on
sockets. In effect, the socket timeout is always zero. Blocking on a socket
would basically defeat the whole point of using asynchronous I/O which is
to only service the ports which are not going to block. So to do things like
timeouts on individual sockets, you need another mechanism.
By the wait,
callLaterdoes end up setting a timeout, but it’s
on the
select()(or
poll(), etc.) call, not on
individual sockets.
Ahh I see,
I had misunderstood the setTimeout function when I was reading the docs. Thanks for taking the time to look over my solution.
Excuse me if this is mentioned in the later sections but I am working through this tutorial as I speak. For a while I was unable to understand how, the twisted framework knows about the existence of of the Poetry class although it implemented the interface.
Then I saw this import which completed the puzzle.
“from twisted.internet import main
if __name__ == ‘__main__':
poetry_main()”
Adding a note here so that others who may not see the link get a clue.
Thanks for the great tutorials. I’m really learning a lot!
For exercise 1, I modified ‘init’ as follows:
def __init__(self, task_num, address):
self.task_num = task_num
self.address = address
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if self.sock.connect_ex(address) == 0:
self.sock.setblocking(0)
reactor.addReader(self)
else:
print ‘Could not connect to Task’, task_num
It works with one or more of my servers off, but is it better to look for the error somewhere else?
Hey James, looks good to me!
Did you have a question about this pastebin?
Hey Dave,
now I am learning asynchronous programming!
First of all , thank you for such wonderful explanation of not so easy to understand topic.
The question is : why do we need now callLater, if connectionLost will react as it’s callback.
Please, explain
Glad you are enjoying the series!
You are using callLater to timeout the request before the other end hangs up.
(Imagine the other side is ‘stuck’ and is never going to close the connection).
Yeaaaaap, now i understood the reason. I ‘ve just thought for this particular program, not for something more bigger.
callLater solves the problem of stucking nice for ‘reactor’ pattern.
Hmmmm, could you recommend some articles OF USING COMET AND GEVENT, PLEASE?
I’m not very familiar with gevent.
I need some consultation Dave.
First of all, one twisted process uses one CPU (one application). If I have 8 CPUs, how can I use them all.
It means that 8 processes of twisted must collaborate with each other, but how ?
Hi Rustem, you have a few options. First, you can use threads in Twisted, see the deferredToThread API call. Of course, you still have to contend with Python’s global interpreter lock, so that only really makes sense if your threads are calling out to, say, C libs that release the GIL from Python code. But another option is to run multiple processes. One possibility is to have a master process that sends work to slaves. See the open source project Ampoule for an example, and possibility a library to use, for doing that.
Hi Dave,
Many thanks for excellent series of articles.
I have just noticed that you wrote “… you pass an object that must implement a given Interface …” at somewhere in your article -in part 4-. As i read from zope.interface documentation, objects does not implement an interface, they provides interface(s). It may not a big deal for those who are familiar with zope.interface concepts, but for newcomers like me it might be confusing.
Hi Senol, glad you like them. I guess I haven’t read the zope.interface docs
too closely, are you saying they specifically use the term ‘provides’ and
never ‘implements’?
Actually they use both terms. The term ‘implements’ is used for the class and the term ‘provides’ is used for objects (instance objects). They describe these two terms as in the following quotation from ZopeGuideInterfaces/Declaring interfaces
Ah, ok, I see what you mean. Ok, I will see about updating the text to be more consistent with their terminology.
I made some adjustments to the text, how does it look now?
Note, that there is an actual zope.interface function called ‘implements’
that says if a class implements an interface, by the way.
It seems to me that the “while” loop inside the PoetrySocket’s doRead method is unnecessary. I removed it, removed the break statements that assumed it, reordered some conditionals, and ran the twisted-client-1/get-poetry.py client with three slowpoetry.py servers. The result was a successful reading of several poems, as far as I can tell.
On second thought, perhaps the “while” loop would be a good thing if the socket were receiving message that were larger than its “recv” buffer. Then the loop would repeat until everything the server sent on that socket was read.
If the while loop is removed then there’s no difference if the server always sends less than the python socket object reads in a single socket.recv operation, but if the server sends a larger message then… hmmm…
Exactly, you’ve summarized the issue precisely. I include the while loop to illustrate the issue, and the basic way you go about handling it
if you want to read multiple times from the socket, but I don’t claim my solution is the definitive “right answer”.
Hi Dave, I need some help with the doread() method.
def doRead(self):
bytes = ”
while True:
try:
bytesread = self.sock.recv(1024)
if not bytesread:())
self.poem += bytes
I am clear with all the code within the while loop. But what coonfuses me is the next line.
If not bytes condition is true, it means bytes is still a NULL string. Then how does it make sense of having the task finished?
Hello Indradhanush, when
bytesis the empty string, it means that the connection
has been closed (usually by the server, since the client does not close it). And in our
poetry protocol, we have defined the close of the connection to mean the
end of the poem. In other protocols, of course, the fact that the connection has
closed may not mean you finished — maybe it was closed halfway through and you
just never got to the end. Using the connection close to indicate the end of a
poem makes our examples simpler, but it’s not really best practice.
In that case say the code got 10 bytes of poetry and the server shut down. How does bytes become empty after the while loop?
The bytes variable won’t become empty during that particular
invocation of
doRead. So the socket will stay
in the twisted reactor and during the next iteration, the
closure of the socket will cause the select loop to return
it as ready for reading. Then we will discover that the
socket has closed (bytes will be empty now) and we will
be done.
I think I am getting the picture. But still a little unclear in my head.
It takes a while, but you will get it. Remember that
doRead
will be repeatedly called by the reactor (like in the earlier example where
we explicitly used
select). Try putting in some more print statements
so you can see what is happening.
Hi Dave,
In the first exercise, I tried to use reactor.callWhenRunning() to do the socket connection. In this way the exception of connection error will be handled by the reactor. But it failed and my code looks like this:
def __init__(self, task_num, address):
self.task_num = task_num
self.address = address
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#self.sock.connect(address)
self.sock.setblocking(0)
from twisted.internet import reactor
# connect when the reactor start to let the reactor handle exceptions.
reactor.callWhenRunning(self.connect)
# tell the Twisted reactor to monitor this socket for reading
#reactor.addReader(self)
def connect(self):
self.sock.connect(self.address)
from twisted.internet import reactor
reactor.addReader(self)
And I got an exception:
socket.error: [Errno 115] Operation now in progress
Could you please tell me what is wrong with it? Thanks!
Hm, did you try calling
setblockingafter you connect?
Ah, that’s it, thanks a lot~
Hi Dave, thanks for an excellent tutorial.
Regarding the parent question, what if I want my connect not to block the reactor? I have a Bluetooth socket where I have to implement my own FileDescriptor and the connect can block for a while. My current solution is to do this via deferToThread which feels a bit like cheating…
I’m sorry if you cover this in a later chapter, only read the first 4 yet
Hello! Glad you liked the tutorial. Using a thread isn’t cheating at all and Twisted itself sometimes does that, see, for example
ThreadedResolver.
Hi Dave. This is a solution for excercise 2: (Please keep in mind I’m a self-taught amateur programmer and quite new at it):
First add ‘self.terminate = False’ to the beggining of the __init__() function, and add reactor.callLater(2, checkFin) at its end.
Define the checkFin() function as follows:
def checkFin(self):
self.terminate = true
Finally, at the pen-ultimate line of the doRead function add this:
if self.terminate == True:
sock.close()
print(‘Process timed out, poem retrieval aborted.’)
Hi Kareem, I don’t think that would work. What if doRead never gets called after two seconds (because no data showed up to be read)? I think you will need to do something else in checkFin.
You can also use the return object of the call later function as follows:
First add:
cancelCheckFin = reactor.callLater(2, checkFin)
to the end of the __init()__ function
and modify the doRead function as follows:
def doRead(self):
bytes = ”
while True:
try:
bytesread = self.sock.recv(1024)
if not bytesread:
************* cancelCheckFin.cancel()())
if self.terminate:
print(‘grace period over, poem transfer aborted)
self.sock.close()
if self.terminate == True:
sock.close()
print(‘Process timed out, poem retrieval aborted.’)
self.poem += bytes
That almost a solution for canceling the timeout, but I think you would need
self.cancelCheckFin = reactor.callLater(...)
Otherwise
cancelCheckFinis just a local variable in
__init__and the
doReadmethod will not have access to it.
Should probably put reactor.removeReader(self) after sock.close()
Thanks Dave. I’ll keep reading, this is all new to me (networking in general), hopefulley I’ll get the hang of it. I’m eventually going to try using twisted to make a game server for a simple 2-D game for my boys. Just for the fun of it. If I’m undrstanding all this correctly, the client side will have to be plain non-twisted asynchronous programming because I can’t relinquish control to the reactor on the client side, where the game loop will reside. Correct?
Hi Kareem! That’s a great project, game programming is a fun way to learn. If the client is in Python, then both client and server can certainly be in Twisted. As you point out, a game client typically has a ‘game loop’. This is simply a loop that waits for things to happen (the user clicks here, or presses this key, or a time tick happens and the state of the world needs to be updated, etc.). Well that’s all the reactor loop is doing, too, it’s just waiting for network events. So the reactor loop can be the game loop. Or the game loop can be the reactor loop. If you program the client with GTK or QT, there are adaptors that allow Twisted to use the GTK or QT event loops as the Twisted loop. There might be one for PyGame, too, I’m not sure.
Hi Dave,
I got a question when I read this:
‘Furthermore, if we have a long-running computational (CPU-bound) task, it’s up to us to split it up into smaller chunks so that I/O tasks can still make progress if possible’
I don’t understand what it means by I/O tasks.Do you mean functions like recv() that take data out from socket,or you mean the process of receiving the data from a server though a connection and putting it into a socket.
For the former cases:
I understand this.
If the callback is long-running computational task,of course,the ‘select’ loop is blocking when the callback is running.Thus other recv() cannot be called.
For the latter cases:
I think it should make progress even if a a long-running computational callback is running
And why we are suppose to split them into smaller pieces,it does’t speed up the overall performance,since it will not block on I/O ,
Hi Tommy, by I/O tasks I mean both reading and writing to a socket. The select statement is used to wait for both reading and writing to sockets because both of those actions may block. So a long-running function will block both readers and writers waiting for their turn in the select loop. Does that make sense? | http://krondo.com/?p=1445 | CC-MAIN-2015-27 | refinedweb | 4,903 | 65.83 |
Hey Everyone!
So I'm a super beginner trying to teach myself the ins and outs of stacks (ever so slowly). Specifically, I'm trying to work out a programming challenge in which the objective is to write a function to tell if a string in the format of word1$word2 satisfies the condition of word2 being the reverse of word 1.
I believe that I have a decent enough stack class (I haven't implemented the exceptions properly yet, but that shouldn't be an issue with the test case I'm trying currently), as well as a function to perform the operations on the strings. I'm including both of these files in a main.cpp file.
The issue I am having is that, when I got to compile this program, I am hit with an error saying "multiple definition of 'stringcomp(std::string)'". Seeing as how I only declare the function stringcomp(string) in the file where I define it, I don't see how this is possible.
Does anyone have any input as to what I did wrong in this case? (files included below) I am a beginner, so I'm sorry if it seems painfully obvious or stupid.
stackA.h:
#include <string> using namespace std; typedef char stacktype; class Stack{ private: int top; int size; stacktype *arr; public: Stack(int size) { if (size <= 0) { throw string("Invalid stack size."); } arr = new stacktype[size]; top = -1; }//end copy constructor void push(stacktype value) { if(top==size) { throw string("Stack storage overflow"); } top++; arr[top]=value; }//end push stacktype peek() { if(top == -1) { throw string("Stack empty."); } return arr[top]; }//end peek void pop() { if(top == -1) { throw string("Stack empty"); } top--; }//end pop bool isEmpty() { return (top == -1); }//end isEmpty int getTop() { return top; }//end getTop ~Stack() { delete[] arr; }//end destructor };//end stack class
stringrec.cpp (contains the stringcomp() function)
#include <iostream> #include <string> #include "stackA.h" using namespace std; void stringcomp(string str) { int i; bool inlang;//tells if the string is in the language Stack mystack(str.size()); while(str[i] != '$') { mystack.push(str[i]); i++; }//end while loop //increment i to skip $ i++; //match reverse of w as stipulated inlang = true; while(inlang && (i<str.size())) { // try // { mystack.pop(); if(mystack.getTop() == str[i]) { i++; } else { inlang = false; } //}//end try /* catch StackException { inlang=false; }//end catch */ }//end while if(inlang && mystack.isEmpty()) { cout << "The string is in the language." << endl; } else { cout << "The string is not in the language." << endl; } }//end function
main.cpp:
#include <iostream> #include <string> #include "stringrec.cpp" using namespace std; int main() { string mystring; mystring = "ABC$CBA"; stringcomp(mystring); return 0; } | https://www.daniweb.com/programming/software-development/threads/320236/string-manipulation-with-stacks | CC-MAIN-2017-39 | refinedweb | 442 | 61.16 |
27 August 2008 15:57 [Source: ICIS news]
LONDON (ICIS news)--Styrene buyers and sellers alike expressed dismay on Wednesday as they saw benzene spot values rising before both markets moved towards contract negotiations for September.
?xml:namespace>
“Styrene doesn’t need these kinds of ups and downs,” said a source at a major styrene producer. “It is a lousy situation, and we will try to defend our margins as best as possible. Styrene cannot take more huge margin losses.”
Spot benzene values had moved up on renewed buyer interest and reinvigorated crude oil values over the past week, changing market participants’ ideas on contract settlements from €100/tonne ($147/tonne) decreases to €20-30/tonne decreases.
Publicly settled styrene contracts have traditionally moved in line with benzene and buyers and sellers alike were concerned at the sudden upturn.
Styrene producers were bullish on their inability to give up any more margin, while buyers said that they were still hoping to see decreases of around €50/tonne in their September settlements.
“Styrene will not give up any margin,” said the styrene producer source. “What happens on benzene, we should be able to pass the same through on styrene. A €20-30/tonne decrease on benzene means the same on styrene should be okay.”
“On the basis of styrene and benzene last week, by my calculations we are down €50/tonne,” countered a styrene buyer, however. “I have spoken to a lot of people in the market, and that is where most people see things.”
Both producers and buyers agreed that they saw no reason for the upturn on benzene other than increased buying interest and a marginal increase in the crude oil price. With downstream demand weak, unrelentingly high prices would do little to help the market.
“We are seeing a downturn on business and there are problems,” said a source at an expandable polystyrene (EPS) producer. “There’s no reason for benzene to be valued like this - there’s plenty of supply out there.”
“Last week we were expecting September styrene to be down by €80-100/tonne, now it could be as little as €30/tonne down," said a major PS producer. “We will run our [PS] plants down further.”
Publicly settled August barge styrene contracts were finalised within a €1,260-1,286/tonne FD (free delivered) NWE (northwest ?xml:namespace>
($1 = €0.68)
For more on styrene. | http://www.icis.com/Articles/2008/08/27/9152005/benzene+hikes+stoke+styrene+players+concerns.html | CC-MAIN-2013-20 | refinedweb | 401 | 61.67 |
1) $(element) .each ()
2) $.each
There are two codes as shown above. 1) can handle multiple elements, and 2) can handle arrays and objects. I don't know what the difference is.
A) For example, it seems that $.ajax is directly connected to Jquery with a dot, so it seems to be using the method of jquery itself.
In other words, does this mean that each method or ajax method exists in the Jquery object or higher-level object to be referenced?
B) On the other hand, in the case of 1), there is a each method while wrapping a DOM element with a Jquery object. ? Or in the case of 1) in the case of the method chain concept, do you refer to each body JQuery object each?
Then, since it is strange that the processing I mentioned at the beginning is different, I feel that each method in $(element) object is different from each method in Jquery object body. .
I would be happy if someone could give me some advice.
Thanks for your consideration.
- Answer # 1
- Answer # 2
The behavior is the same, some samples
<script> $(function () { $('. hoge'). each (function () { console.log ($('. hoge'). index ($(this)) + ":" + $(this) .text ()); }); $.each ($('. hoge'), function () { console.log ($('. hoge'). index ($(this)) + ":" + $(this) .text ()); }); </script> <div>1</div> <div>2</div> <div>3</div>
The only way to rotate an array or object with jQuery is to specify it in $.each
// array $.each ([1,2,3], function (x, y) { console.log (x + ":" + y); }); //object $.each ({"a": 1, "b": 2, "c": 3}, function (x, y) { console.log (x + ":" + y); });
If this is a forEach that rotates an ordinary array, the index and data are reversed
[4,5,6] .forEach (function (x, y) { console.log (x + ":" + y); }); [] .forEach.call ([4,5,6], function (x, y) { console.log (x + ":" + y); }); // ForEach does not mix objects [] .forEach.call ({"a": 4, "b": 5, "c": 6}, function (x, y) { console.log (x + ":" + y); });
By the way, NodeList and HTMLCollection work differently
<script> window.addEventListener ('DOMContentLoaded', function (e) { / * NodeList * / [] .forEach.call (document.querySelectorAll ('. hoge'), function (x, y) { console.log (y + ":" + x.textContent); }); document.querySelectorAll ('. hoge'). forEach (function (x, y) { console.log (y + ":" + x.textContent); }); / * HTMLCollection * / [] .forEach.call (document.getElementsByClassName ('hoge'), function (x, y) { console.log (y + ":" + x.textContent); }); // You may not pick up below document.getElementsByClassName ('hoge'). forEach (function (x, y) { console.log (y + ":" + x.textContent); }); }); </script> <div>1</div> <div>2</div> <div>3</div>
Related articles
- difference between jquery $("#id") and $("div#id) acquisition results
- c # - what is the difference between readblockasync and readasync in streamreader?
- i want to know why there is a difference between the appearance of checkboxes for activex controls and form controls on excel
- Explain the difference between the two assignments of class variables in Python
- Explain the difference between return and yield in python
- Simple understanding of the difference between JAVA public class and class
- The difference between key and index in MySQL
- php - [laravel] difference between {{('string')}} and {{__ ('string')}}, should form's action be html escaped?
- Analysis of the difference between struct and class in Swift (assembly analysis low-level analysis)
- Analysis of the difference between Spring BeanFactory and FactoryBean
- In-depth analysis of the difference between static and templates in springboot
- Analysis of the difference between Python range and enumerate function
- What is the difference between CDN, SCDN and DCDN for website acceleration? how to choose?
- The difference between java String, StringBuilder and StringBuffer
- google sheets - how to take the difference between two numbers with arrayfomura and sum (or minus)
- The difference between echo and print in PHP
- Learn the difference between Python str () and repr () through examples
- Explain the difference between atan and atan2 under the math module in python
- Simple understanding of the difference between JAVA SimpleDateFormat yyyy and YYYY
- The difference between make (chan int, 1) and make (chan int) in Go
-
jquery-3.3.1.js
I just called
jQuery.each.
By passing
thisas an argument, you are executing
eachlimited to
$(element).
Although it is different on the implementation, in fact, it is just doing the same process.
You can read the jQuery source at the top link.
I think that reading it all will be a good study. | https://www.tutorialfor.com/questions-100541.htm | CC-MAIN-2020-45 | refinedweb | 701 | 57.47 |
I wanted to implement a notification message in one of my projects, similar to what you’d see in Google Docs while a document is saving. In other words, a message shows up indicating that the document is saving every time a change is made. Then, once the changes are saved, the message becomes: “All changes saved in Drive.”
Let’s take a look at how we might do that using a boolean value, but actually covering three possible states. This definitely isn’t the only way to do this, and frankly, I’m not even sure if it’s the best way. Either way, it worked for me.
Here’s an example of that “Saving…” state:
…and the “Saved” state:
Using a
Boolean value for to define the state was my immediate reaction. I could have a variable called
isSaving and use it to render a conditional string in my template, like so:
let isSaving;
…and in the template:
<span>{{ isSaving ? ‘Saving...’ : ‘All changes saved’ }}</span>
Now, whenever we start saving, we set the value to
true and then set it to
false whenever no save is in progress. Simple, right?
There is a problem here, though, and it’s a bit of a UX issue. The default message is rendered as “All changes saved.” When the user initially lands on the page, there is no saving taking place and we get the “Saved” message even though no save ever happened. I would prefer showing nothing until the first change triggers the first “Saving” message.
This calls for a third state in our variable:
isSaving. Now the question becomes: do we change the value to a string variable as one of the three states? We could do that, but what if we could get the third state in our current boolean variable itself?
isSaving can take two values:
true or
false. But what is the value directly after we have declared it in the statement:
let isSaving;? ItR17;s
undefined because the value of any variable is
undefined when it’s declared, unless something is assigned to it. Great! We can use that initial
undefined value to our advantage… but, this will require a slight change in how we write our condition in the template.
The ternary operator we are using evaluates to the second expression for anything that can’t be converted to
true. The values
undefined and
false both are not
true and, hence, resolve as
false for the ternary operator. Even an if/else statement would work a similar way because
else is evaluated for anything that isn’t
true. But we want to differentiate between
undefined and
false . This is fixable by explicitly checking for
false value, too, like so:
<span> {{ isSaving === true ? ‘Saving...’ : (isSaving === false ? ‘All changes saved’: ‘’) }} </span>
We are now strictly checking for
true and
false values. This made our ternary operator a little nested and difficult to read. If our template supports if/else statements, then we can refactor the template like this:
<span> {% if isSaving === true %} Saving... {% elseif isSaving === false %} All changes saved {% endif %} </span>
Aha! Nothing renders when the variable is neither
true nor
false — exactly what we want!
>. | https://linksoftvn.com/undefined-the-third-boolean-value/ | CC-MAIN-2019-18 | refinedweb | 530 | 70.94 |
Crash in mozilla::MediaStreamGraph::NotifyOutputData since Firefox 49
VERIFIED FIXED in Firefox 49
Status
Priority: P1
Severity: critical
Rank: 5
(Reporter: philipp, Assigned: jesup)
Tracking
({crash, regression, sec-critical})
Firefox Tracking Flags
(firefox48 unaffected, firefox49+ verified, firefox-esr45 unaffected, firefox50+ verified, firefox51+ verified)
Details
(crash signature)
Attachments
(1 attachment, 1 obsolete attachment)
This bug was filed from the Socorro interface and is report bp-c3d75891-a31e-49da-ab1c-a8ebe2160809.
=============================================================
Crashing Thread (98)

Frame  Module        Signature                                                                                       Source
0      xul.dll       mozilla::MediaStreamGraph::NotifyOutputData(float*, unsigned int, int, unsigned int)            dom/media/MediaStreamGraph.cpp:1256
1      xul.dll       mozilla::AudioCallbackDriver::DataCallback(float const*, float*, long)                          dom/media/GraphDriver.cpp:911
2      xul.dll       mozilla::AudioCallbackDriver::DataCallback_s(cubeb_stream*, void*, void const*, void*, long)    dom/media/GraphDriver.cpp:772
3      xul.dll       noop_resampler::fill(void*, long*, void*, long)                                                 media/libcubeb/src/cubeb_resampler.cpp:54
4      xul.dll       `anonymous namespace'::refill                                                                   media/libcubeb/src/cubeb_wasapi.cpp:529
5      xul.dll       `anonymous namespace'::refill_callback_duplex                                                   media/libcubeb/src/cubeb_wasapi.cpp:737
6      xul.dll       `anonymous namespace'::wasapi_stream_render_loop                                                media/libcubeb/src/cubeb_wasapi.cpp:906
7      ucrtbase.dll  _crt_at_quick_exit
8      kernel32.dll  BaseThreadInitThunk
9      ntdll.dll     __RtlUserThreadStart
10     ntdll.dll     _RtlUserThreadStart

This crash signature on Windows and OS X seems to be regressing since Firefox 49 builds - on 49.0b1 it's currently making up 0.25% of all browser crashes.
Rank: 15
Priority: -- → P1
Rank: 15 → 10
Rank: 10 → 5
e5e5 signature. Most likely mListeners needs to be locked.
Assignee: nobody → rjesup
Group: media-core-security
Component: WebRTC: Audio/Video → Audio/Video: MediaStreamGraph
Keywords: sec-critical
Which mListeners? mAudioInputs seems to always be dealt with on the MSG thread.
(In reply to Randell Jesup [:jesup] from comment #3) It's async, but we keep a ref to the listener through the whole asyncness, see [1]. Unless there's a listener implemented without the proper addref forwarding I can't see how this is the problem. [1]
Comment on attachment 8780388 [details] [diff] [review] make mAudioInputs use RefPtrs Review of attachment 8780388 [details] [diff] [review]: ----------------------------------------------------------------- I'd be willing to r+ this, but first I think we should have a grip on what's causing this so we don't just mask it into something else.
The actual hole is the DispatchToMainThread without holding a ref... We should still hold a ref in the array, to guard against accidental missing the call to CloseAudioInput
Comment on attachment 8780525 [details] [diff] [review] make mAudioInputs use RefPtrs Approval Request Comment [Feature/regressing bug #]: full_duplex landings; Bug 1221587. full_duplex is off in 48 except for Linux. It's on for windows and mac in 49, but will be turned off on mac in 49 shortly. [Security approval request comment] How easily could an exploit be constructed based on the patch? Not easily - we're clearly adding a ref to a Dispatch, but it's a very tough hole to provoke. One might be able to flood MainThread with events to slow processing of the Dispatch, and give time for the calling thread to release the underlying object. Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem? No. Which older supported branches are affected by this flaw? 46 and later, but only if full_duplex is on. full_duplex is off for all but linux in 48; 49 will go out with windows added; 50 will go out with mac as well; and android will likely be 51. If not all supported branches, which bug introduced the flaw? 48 is the earliest that has the flaw, but we have 0 hits in crash-stats for 48. I do not recommend holding 48.0.1 for this or shipping a new point release unless we believe someone has discovered a way to exploit it on Linux. If we had to update linux, we could force the pref to false instead of changing the code. Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be? Trivial to apply. How likely is this patch to cause regressions; how much testing does it need? Extremely unlikely to cause regressions; at most it would leak memory and mochitests are green.
Attachment #8780525 - Flags: sec-approval?
Attachment #8780525 - Flags: approval-mozilla-beta?
Attachment #8780525 - Flags: approval-mozilla-aurora?
Comment on attachment 8780525 [details] [diff] [review] make mAudioInputs use RefPtrs Approvals given.
Attachment #8780525 - Flags: sec-approval?
Attachment #8780525 - Flags: sec-approval+
Attachment #8780525 - Flags: approval-mozilla-beta?
Attachment #8780525 - Flags: approval-mozilla-beta+
Attachment #8780525 - Flags: approval-mozilla-aurora?
Attachment #8780525 - Flags: approval-mozilla-aurora+
status-firefox-esr45: --- → unaffected
tracking-firefox49: --- → +
tracking-firefox50: --- → +
tracking-firefox51: --- → +
Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla51
Group: media-core-security → core-security-release
25 crashes with this signature in the last week, with 4 of them on 49 RC, 2 on Aurora and 2 on Nightly. This seems very low volume, so I assume it is safe to close it. If someone considers these results worrying, feel free to reopen.
Status: RESOLVED → VERIFIED
Still happening with UAF signature in FF 51 and 50 and 49.0b99, such as a 50.0a2 from build 20160908004007 or a 49.0b99 (buildid 20160907153016) The frequency is lower, but it's not (completely) fixed. There might be a secondary cause. We did close a clear hole, and the frequency dropped sharply. When I expand the search to start July 1, and then look at the Graphs using 'version', I see the crash rate drops to almost but not quite 0 after the fix here landed. Padenot, pehrsons - any thoughts how this could still be happening?
Status: VERIFIED → REOPENED
Flags: needinfo?(pehrson)
Flags: needinfo?(padenot)
Resolution: FIXED → ---
re-closing and remarking verified - I spun off a bug to handle the remaining crashes which likely have a slightly different source. The fix here clearly closed the primary hole.
Status: REOPENED → RESOLVED
Closed: 3 years ago → 3 years ago
Flags: needinfo?(pehrson)
Flags: needinfo?(padenot)
Resolution: --- → FIXED
Crash volume for signature 'mozilla::MediaStreamGraph::NotifyOutputData':
- nightly (version 52): 1 crash from 2016-09-19.
- aurora (version 51): 2 crashes from 2016-09-19.
- beta (version 50): 46 crashes from 2016-09-20.
- release (version 49): 366 crashes from 2016-09-05.
- esr (version 45): 0 crashes from 2016-07-25.

Crash volume on the last weeks (Week N is from 10-17 to 10-23):
            W. N-1  W. N-2  W. N-3  W. N-4
- nightly        0       1       0       0
- aurora         0       1       1       0
- beta          12      15      10       3
- release       91      63     128      37
- esr            0       0       0       0

Affected platform: Windows

Crash rank on the last 7 days:
            Browser  Content  Plugin
- nightly
- aurora
- beta      #1315    #581
- release   #807     #236
- esr
status-firefox52: --- → affected
Group: core-security-release
Updating overall status based on Comment 16.
Status: RESOLVED → VERIFIED | https://bugzilla.mozilla.org/show_bug.cgi?id=1293976 | CC-MAIN-2019-30 | refinedweb | 1,150 | 50.73 |
A simple downloader written in Python with an awesome progressbar.
Project description
downloader-cli
A simple downloader written in Python with an awesome progressbar.
Installation
The package is available on PyPI here
Install it using
pip install downloader-cli
If you want to manually install it, clone the repo and run the following command
sudo python setup.py install
The packages available in PyPI and AUR contain the latest release; if you want all the latest changes, clone the repo and install manually, or wait for the next release.
Requirements
downloader-cli requires just one external module.
Usage
The script also accepts some other values from the command line.
usage: dw [-h] [-f | -c] [-e] [-q] SOURCE [TARGET] positional arguments: SOURCE URL of the file TARGET target filepath (existing directories will be treated as the target location) optional arguments: -h, --help show this help message and exit -f, -o, --force overwrite if the file already exists -c, --resume resume failed or cancelled download (partial sanity check) -e, --echo print the filepath to stdout after downloading (other output will be redirected to stderr) -q, --quiet suppress filesize and progress info
Use It
Want to use it in your project?
Import the Download class using the following.
from downloader_cli.download import Download Download(url).download()
Above is the simplest way to use it in your app. The other arguments are optional.
Arguments
The module takes 8 arguments. Only one is required though.
NOTE For details regarding the arguments, check Usage
NOTE In case the file size is not available, the bar is shown as indefinite; in that case icon_left is a space (" ") by default.
Other examples
In case you want to experiment with the progress bar's icons, here are some examples.
This is when I passed icon_done as "#" and icon_left as a space.
In case a file's size is not available from the server, the progressbar is indefinite.
NAME
mq_overview - overview of POSIX message queues
Library interfaces and system calls
In most cases the mq_*() library interfaces listed above are implemented on top of underlying system calls of the same name. Deviations from this scheme are indicated in the following list.
Resource limit
The RLIMIT_MSGQUEUE resource limit, which places a limit on the amount of space that can be consumed by all of the message queues belonging to a process's real user ID, is described in getrlimit(2).
Mounting the message queue filesystem
These fields are as follows:
- QSIZE
- Number of bytes of data in all messages in the queue (but see BUGS).
- NOTIFY_PID
- If this is nonzero, then the process with this PID has used mq_notify(3) to register for asynchronous message notification, and the remaining fields describe the notification registration.
Linux implementation of message queue descriptors
For a discussion of the interaction of POSIX message queue objects and IPC namespaces, see ipc_namespaces(7).
Linux does not currently (2.6.26) support the use of access control lists (ACLs) for POSIX message queues.
BUGS
Deprecated. For new scripts, please prefer the XmlService instead.
A name in an xml document.
Names are composed of two components: a (possibly-empty) namespace along with a local name.
Names are unique and will be represented with singleton objects. This means that equality can be computed very quickly with a simple reference equality check.
In the case of <foo> the namespace will be empty and the local name will be "foo".
In the case of <ns0:foo xmlns: the namespace will be "" and the local name will be "foo".
XML names also have an 'expanded' form that is equivalent to the non-standard QName format used by java. Specifically, it has the form: "{" + getNamespace() + "}" + getLocalName(). This can be used when interoperating with other java XML libraries.
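As a side note (not part of the Apps Script API): this expanded `{namespace}localName` form is the same "Clark notation" used by other XML tooling, which is what makes it useful for interop. For instance, Python's ElementTree reports qualified tag names in exactly this shape:

```python
# Illustration only: ElementTree exposes qualified names in the same
# "{" + namespace + "}" + localName expanded form described above.
import xml.etree.ElementTree as ET

root = ET.fromstring('<ns0:foo xmlns:ns0="http://example.com"/>')
print(root.tag)  # {http://example.com}foo
```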
Methods
Detailed documentation
getLocalName()
Returns the local portion of this XML name.
Return
String — the local portion of this Xml name
getNamespace()
Returns the namespace portion of this XML name.
Return
String — the namespace portion of this XML name | https://developers.google.com/apps-script/reference/xml/xml-name | CC-MAIN-2014-10 | refinedweb | 166 | 68.16 |
Lock a mutex
#include <pthread.h> int pthread_mutex_lock( pthread_mutex_t* mutex );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The pthread_mutex_lock() function locks the mutex referenced by mutex. If the mutex is already locked, the calling thread blocks until the mutex becomes available. This function's behavior when you try to lock a mutex that you already own depends on the type of the mutex; for more information, see the entry for pthread_mutexattr_settype(). Call pthread_mutex_unlock() once for each corresponding call that locked the mutex.
If a signal is delivered to a thread that's waiting for a mutex, the thread resumes waiting for the mutex on returning from the signal handler.
If, before initializing the mutex, you've called pthread_mutexattr_setwakeup_np() to enable wake-ups, you can later call pthread_mutex_wakeup_np() to wake up any threads that are blocked on the mutex. The "np" in these functions' names stands for "non-POSIX."
Introduction
This is the branch article about Deep Reinforcement Learning.
Main contribution is here.
In this article, I would like to summarise a famous research paper, Deep Q-Network (DQN).
Also, I would like to share the implementation as well!
Hope you will like it!
DQN in nutshell
Mnih et al. (2015) discovered a novel approach that combines reinforcement learning with a deep neural network for function approximation. Before explaining the algorithm, let me briefly cover the three points, summarised below, that made their achievement great.
- Stabilisation/Convergence of training with experience replay
- Domain knowledge free model(no need to put any prior information or domain knowdge)
- Generalisation (DQN performs very well in most Atari games)
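To make the first of these pillars concrete, here is a minimal experience-replay sketch (illustrative only — not the paper's exact implementation): transitions are stored in a bounded buffer, and training draws uniform random minibatches from it, which breaks the correlation between consecutive frames and stabilises learning.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # bounded buffer: the oldest transitions drop out automatically
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform random sampling decorrelates the training minibatch
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=2)
for i in range(3):
    buf.push(i, 0, 1.0, i + 1, False)
print(len(buf))  # 2 — the oldest transition was evicted
```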
Model architecture
Input (84x84x4 image) -> conv -> ReLU -> conv -> ReLU -> conv -> ReLU -> FC -> ReLU -> FC (linear) -> output (the action-value function: one Q-value per valid action; note the paper's output layer is linear, not a softmax)
Important Techniques
Reward format: +1 for any positive reward, -1 for any negative reward, and 0 for neutral/unchanged (the paper's reward clipping)
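This reward scheme — collapsing every reward to +1, -1, or 0 — is a one-liner (sketch):

```python
def clip_reward(r):
    # +1 for any positive reward, -1 for any negative, 0 otherwise
    return (r > 0) - (r < 0)

print(clip_reward(100), clip_reward(-3), clip_reward(0))  # 1 -1 0
```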
Frame Skip: denoted by $k$, this parameter sets how frequently the agent looks at a new frame and decides on an action.
Since I got stuck at this point in the paper, let me elaborate a bit more.
With the frame-skip parameter, the agent only chooses a new action on every $k$-th frame.
During the skipped frames, it just repeats the same action over and over again!
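In other words (a sketch — the `env_step` callable here is a stand-in for whatever advances the emulator by one frame and returns that frame's reward):

```python
def step_with_frame_skip(env_step, action, k=4):
    # repeat one action for k frames and sum the per-frame rewards,
    # so no reward signal is lost during the skipped frames
    total_reward = 0.0
    for _ in range(k):
        total_reward += env_step(action)
    return total_reward

rewards = iter([1.0, 0.0, 2.0, 0.0])
print(step_with_frame_skip(lambda a: next(rewards), action=3, k=4))  # 3.0
```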
But how are the frames in the game separated and processed?
Then we have a good example here, quoted from Seita's research write-up.
Firstly, please take a look at the chunk of images below.
These are the bare screenshots of the Atari-game.
And with frame skipping, we can technically skip and ignore three consecutive frames.
However, you might wonder whether it's okay for the agent to lose such a significant amount of information about the state of the game. Indeed, I had the same question as well! And now I can tell you that it is okay, because the implementation takes the rewards earned during the skipped frames into account by summing them. So I would recommend referring to the implementation here!
As for the implementation of the reward handling, it is done in the method named _step in ale_experiment.py, from the code Daniel mentioned in the article!
def _step(self, action): """ Repeat one action the appopriate number of times and return the summed reward. """ reward = 0 for _ in range(self.frame_skip): reward += self._act(action) return reward
and this outcome, which reward is used in the method run_episode in the same script!
def run_episode(self, max_steps, testing): """Run a single training episode. The boolean terminal value returned indicates whether the episode ended because the game ended or the agent died (True) or because the maximum number of steps was reached (False). Currently this value will be ignored. Return: (terminal, num_steps) """ self._init_episode() start_lives = self.ale.lives() action = self.agent.start_episode(self.get_observation()) num_steps = 0 while True: reward = self._step(self.min_action_set[action]) self.terminal_lol = (self.death_ends_episode and not testing and self.ale.lives() < start_lives) terminal = self.ale.game_over() or self.terminal_lol num_steps += 1 if terminal or num_steps >= max_steps: self.agent.end_episode(reward, terminal) break action = self.agent.step(reward, self.get_observation()) return terminal, num_steps
Anyhow with skipping frames, we can ensure the more efficient learning.
FYI
if you didn't implement the skip frame, then it would be like this.
In contrast, if you did, it would be like this!
So the movement looks a bit choppy!
Implementation | https://qiita.com/Rowing0914/items/d1edf7df1df559792f62 | CC-MAIN-2018-47 | refinedweb | 575 | 66.74 |
Java Exploits
Last Updated: 2010-11-11 00:05:00 UTC
by Daniel Wesemann (Version: 1)
The recent Java JRE patch bundle released by Oracle contained a long list of security fixes, several of which for vulnerabilities that allow drive-by exploits. And since Java is present on pretty much every Windows PC, and people don't seem to do their Java updates quite as diligently as their Windows patches, there are A LOT of vulnerable PCs out there. Microsoft reported on this a month ago, and called it an "unprecedented wave of Java exploiting".
It doesn't look like the situation has improved since, and the bad guys are taking advantage. Not surprisingly, the FAQ document on "Virus found in my Java Cache Directory" is ranked third most popular of all the issues listed on. The two issues ranked ahead of it are also security concerns.. not a pretty picture for Oracle or Java, I'd say.
Let's take a look at one of the popular exploits that are making the rounds, the "bpac" family. The exploit used is for CVE-2010-0840 (Hashmap), already covered by the Java patch bundle in July, but apparently still successful enough to be used. I guess the bad guys won't start "burning" their newest Java exploits while the old set is still going strong.
The infection usually happens as follows:
(1) User surfs to website that has been injected with the exploit
(2) Exploit pack triggers - it comes as an obfuscated JavaScript that downloads an Applet and a PDF
(3) The applet contains an exploit, here for CVE-2010-0840
(4) The applet is invoked with a parameter that tells it where to find the EXE
(5) If the exploit is successful, the EXE is downloaded and run
The EXEs pack quite a punch - one recent sample submitted contained no less than 66 individual other malicious EXEs. Yes, a user would be bound to notice this deluge of badness, but he still wouldn't stand a chance to ever clean ALL of this crud off the system again.
Looking at the malware in more detail
-rw-r--r-- 1 daniel users 3738 2010-11-08 09:14 euinirascndmiub.jar
-rw-r--r-- 1 daniel users 21009 2010-11-08 09:13 fuiqaubuk7.php
-rw-r--r-- 1 daniel users 6095 2010-11-08 09:14 jmkohwbrbtgsboj.pdf
The PHP file invokes the applet with parameter
daniel@debian:~/malware$ head fuiqaubuk7.php
<body id='jmery7' name='jmery7'><applet code='bpac.a.class' archive="euinirascndmiub.jar"><param value='RSS=,TT$XINOIAX$IOJTG@HTRMDAI=R=' ame="a"/></applet></body><textarea>function goyla(hrcsyoe6){r .....
The JAR file .. is basically a ZIP, so we can unzip it:
daniel@debian:~/malware$ unzip euinirascndmiub.jar
Archive: euinirascndmiub.jar
inflating: META-INF/MANIFEST.MF
inflating: bpac/a$1.class
inflating: bpac/a.class
inflating: bpac/b.class
inflating: bpac/KAVS.class
From the PHP, we know that "a.class" is the code that gets executed. A Java Decompiler like "jad" can be used to convert the java class files back into something readable akin to Java source code:
daniel@debian:~/malware/bpac$ jad *.class
Generating a.jad
Generating b.jad
Generating KAVS.jad
Generating a$1.jad
On inspection, a.jad indeed contains the CVE-2010-0840 exploit, pretty much a carbon copy of the Metasploit original. More interesting is b.jad, because it contains
String s1 = (new StringBuilder()).append(s.replace("F", "a").replace("#", "b").replace("V", "c").replace("D","d").replace("@", "e").replace("Y", "f").replace("C", "g").replace("R", "h").replace(";", etc
which sure looks like a decoding function. It doesn't take much programming to turn this into a Java file of its own with a "print" statement at the end. When we then add the variable that was set when the applet was invoked, we get
public class x
{
public static void main(String[] args)
{
String s = "RSS=,TT$XINOIAX$IOJTG@HTRMDAI=R=";
String s1 = (new StringBuilder()).append(s.replace("F", "a").replace("#", "b").replace("V","c").replace("D", "d").replace("@", "e").replace("Y", "f").replace("C", "g").replace("R", "h").replace(";","i").replace("L", "j").replace("K", "-").replace("U", "k").replace("^", "l").replace("Z", "m").replace("B","n").replace("Q", "o").replace("=", "p").replace("&", "q").replace("M", "r").replace("G", "s").replace("S","t").replace("!", "u").replace("W", "v").replace("%", "w").replace("H", "x").replace("P", "y").replace("?","z").replace("T", "/").replace("I", ".").replace("K", "_").replace("(", "_").replace(",", ":").replace("A","1").replace("N", "2").replace("*", "3").replace("J", "4").replace(")", "5").replace("O", "6").replace("$","7").replace("X", "8").replace("+", "9").replace("E", "0")).append("?i=1").toString();
System.out.println(s1);
}
}
Compile with javac, run with java, and lookie, the system prints:
daniel@debian:~/malware/bpac$ java x
http://78. 26.187. 64/sex/hrd1.php?i=1 (spaces added to keep you from clicking, careful, still live!)
which is where the EXE resides. Virustotal currently has it with 14/43.
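For quick triage, the substitution table can also be applied outside the JVM. Below is a Python transcription of the decompiled decoder (the table is copied from b.jad above; a character-by-character mapping is equivalent to the chained replaces here, because no replacement output collides with a later search character):

```python
# Substitution table transcribed from the decompiled b.jad
TABLE = {
    'F': 'a', '#': 'b', 'V': 'c', 'D': 'd', '@': 'e', 'Y': 'f', 'C': 'g',
    'R': 'h', ';': 'i', 'L': 'j', 'U': 'k', '^': 'l', 'Z': 'm', 'B': 'n',
    'Q': 'o', '=': 'p', '&': 'q', 'M': 'r', 'G': 's', 'S': 't', '!': 'u',
    'W': 'v', '%': 'w', 'H': 'x', 'P': 'y', '?': 'z', 'K': '-', 'T': '/',
    'I': '.', '(': '_', ',': ':', 'A': '1', 'N': '2', '*': '3', 'J': '4',
    ')': '5', 'O': '6', '$': '7', 'X': '8', '+': '9', 'E': '0',
}

def decode(s):
    # the applet appends "?i=1" after decoding, as seen in b.jad
    return ''.join(TABLE.get(ch, ch) for ch in s) + '?i=1'

# decodes the applet parameter to the EXE URL shown (defanged) above
print(decode('RSS=,TT$XINOIAX$IOJTG@HTRMDAI=R='))
```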
Bottom line: If you haven't done so yet, hunt down and patch every incarnation of Java on the PCs that you are responsible for. | https://dshield.org/diary/Java+Exploits/9916 | CC-MAIN-2022-33 | refinedweb | 855 | 56.76 |
To read a string in a file we can use fgets () whose prototype is:
char * fgets (char * str, int size, FILE * fp);
The function takes three arguments: the string to be read, the maximum limit of characters to read and the pointer to FILE, which is associated with the file where the string is read. The function reads the string until a newline character is read or size-1 characters have been read. If the newline character ('\n') is read, it will be part of the string, which did not happen with gets. The resulting string will always end with '\0' (for this only size-1 characters maximum, will be read).
The fgets function is similar to gets (), however, beyond power to read from a data file and include the newline character in the string, it also specifies the maximum size of the input string. As we have seen, the gets function had this control, which could lead to errors of "buffer overflow". Therefore, taking into account that the fp pointer can be replaced by stdin, as we saw above, an alternative to the use of gets is to use the following construction:
fgets (str, size, stdin);
where str is the string you are reading into, and size is the size allocated for the string; fgets reads at most size-1 characters, leaving room for the '\0'.
fputs
prototype:
char * fputs (char * str, FILE * fp);
Write a string to a file.
Ferror and perror
Prototype ferror:
int ferror (FILE * fp);
The function returns zero if no error occurred and a number other than zero if an error occurred while accessing the file.
ferror () becomes very useful when we want to ensure that each access to a file succeeded, so that we can ensure the integrity of our data. In most cases, if a file can be opened, it can be read or written. However, there are situations where this does not occur. For example, you end up disk space while recording, or the disc may be faulty and can not read, etc.
A function that can be used in conjunction with ferror() is the perror() (print error) function, whose argument is a string that typically indicates where in the program the problem occurred.
In the following example, we use ferror, perror and fputs
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <string.h> /* for strlen */
void main ()
{
FILE * pf;
char string [20];
clrscr();
if ((pf = fopen ("Hello.txt", "w")) == NULL)
{
printf ("\nProgram cannot open the file!");
exit (1);
}
do
{
printf ("\nPlease enter a new string (press <enter> alone to finish): ");
gets (string);
fputs (string, pf);
putc ('\n', pf);
if (ferror (pf))
{
perror ("Error writing");
fclose (pf);
exit (1);
}
} while (strlen (string)> 0);
fclose (pf);
}
writing MJPG file by VideoWriter in Visual Studio x64/Release platform
I am trying the following code in Windows 7 pro; Visual Studio 2012 Express with x64/Release configuration using the pre-built OpenCV 2.4.8 and 2.4.5 (x64/vc11). In my environment, EXPECT_EQ fails.
#include "opencv2/opencv.hpp" void EXPECT_EQ(double a, double b) { if (a != b) { std::cout << "failed" << std::endl; } } int main() { double current_time = 0.0; cv::Mat blank = cv::Mat::zeros(480, 640, CV_8UC3); cv::VideoWriter writer("output.avi", CV_FOURCC('M','J','P','G'), 50.0, blank.size(), true); current_time += 20.0; writer << blank; EXPECT_EQ(20.0, current_time); return 0; }
This happens only when I write MJPG format files as far as I tried. The problem does not reproduce if I choose Win32 (x86) or Debug configurations.
Any help or comments are appreciated. Thank you very much in advance. | https://answers.opencv.org/question/31357/writing-mjpg-file-by-videowriter-in-visual-studio-x64release-platform/ | CC-MAIN-2021-17 | refinedweb | 146 | 53.98 |
* article elaborated in partnership with Andreia Camila da Silva
In the previous post [But why Go?] we’ve talked about the benefits of the Go language. In this continuation, we’re going to discuss the Go language structure.
To understand the structure of a Go program, we need to analyze its code, so let’s take the hello world from our previous post.
```go
package main

import "fmt"

func main() {
	fmt.Printf("Hello World\n")
}
```
The first line of code is the declaration of a package that is similar to modules and libs in other languages
The packages facilitate the process of division of responsibilities. A package consists of one or more .go files.
Every Go program must belong to a package and that specific program belongs to the “main” package. In this case, the combination of the main package declaration and the main function makes a Go program executable and independent.
The main package as well as the main function are special. The main package defines an executable program rather than a modular one. In this case, the main package and the main function are the beginning of the program’s execution and are similar to the Java language and the C language itself.
The Go standard library has over 100 packages for common input (data input) and output (data output) style tasks, sorting and text manipulation. The fmt package, for example, has both input and output functions, Println being one of the basic output functions.
As said earlier, Go is a compiled language. This means that it converts a program and all its dependencies into a computer’s machine language.
The run command executes our initial code, as follows:
```shell
$ go run hello.go
```
It should display the following line
/code
$ Hello World
/code
Go handles Unicode natively and this makes it easy for it to process text in any language.
To produce a compiled binary, just run go build:
```shell
$ go build hello.go
```
This will generate a binary executable file named hello, which can be run at any time without any additional steps, as follows:
```shell
$ ./hello
Hello World
```
Golang in 20 minutes by Wesley Willians
The Go Programming Language by Alan A. A. Donovan and Brian W. Kernighan
support option persisting of enum values, not just the keys
Here's some sample code:
import enum import sqlalchemy as sa engine = sa.create_engine('postgresql+psycopg2://{}'.format('your database goes here')) class Status(enum.Enum): new = 'N' active = 'A' inactive = 'I' suspended = 'S' engine.execute(sa.text('CREATE TEMPORARY TABLE fnord (name text, status text);')) fnord = sa.Table( 'fnord', sa.MetaData(), sa.Column('name', sa.Text), sa.Column('status', sa.Enum(Status, native_enum=False)), ) engine.execute(sa.insert(fnord), [ {'name': 'new', 'status': 'N'}, {'name': 'new2', 'status': Status.new}, {'name': 'active', 'status': 'A'}, {'name': 'active', 'status': Status.active}, {'name': 'inactive', 'status': 'I'}, {'name': 'inactive', 'status': Status.inactive}, {'name': 'suspended', 'status': 'S'}, {'name': 'suspended', 'status': Status.suspended}, ]) for result in engine.execute(sa.text('select * from fnord;')): print('{0.name} - {0.status}'.format(result)) query = sa.select([ fnord.c.name, fnord.c.status.in_((Status.new, Status.active)).label('is_active'), fnord.c.status, ]).select_from(fnord) print(Status('N')) for result in engine.execute(query): print('{0.name} - {0.status} - {0.is_active}'.format(result))
My expectation is that SQLAlchemy would be inserting the values of the Enum. But it doesn't - it inserts the keys instead. The documentation unfortunately uses this ambiguous example:
class MyEnum(enum.Enum): one = "one" two = "two" three = "three"
So I had no idea that it would perform exactly backwards from how I expected. In my example you'll get this output:
new - N new2 - new active - A active - active inactive - I inactive - inactive suspended - S suspended - suspended Status.new
Followed by an exception as SQLAlchemy tries to find the Enum
Status.N, rather than
Status('N'). You can work around this if you have enum values that are valid Python identifiers by aliasing your Enum values like so:
class Status(enum.Enum): new = 'N' N = 'N'
The order is important here.
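A quick plain-Python illustration of why that ordering trick works (independent of SQLAlchemy): when two members share a value, the member defined first is canonical and the later name becomes an alias for it, so lookups by attribute, by key, and by value all land on the same object.

```python
import enum

class Status(enum.Enum):
    new = 'N'   # canonical member: defined first
    N = 'N'     # alias: resolves to Status.new

print(Status.N)          # Status.new — the alias is the same object
print(Status['N'])       # Status.new — lookup by *key* now succeeds
print(Status('N'))       # Status.new — lookup by value is unchanged
```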
An alternative workaround, rather than using
sa.Enum:
class Enum(sa.types.TypeDecorator): impl = sa.Text def __init__(self, enumtype, *args, **kwargs): super().__init__(*args, **kwargs) self._enumtype = enumtype def process_bind_param(self, value, dialect): return value.value def process_result_value(self, value, dialect): return self._enumtype(value)
Though you can't insert to the table using
'N', you have to use the enum, which is totally fine.
I think at the very least the documentation needs to be updated to something less ambiguous. Does SQLAlchemy use the value of the enum at all? Something like
class Numbers(enum.Enum):
    one = "one"
    two = "two"
    five = "three"

connection.execute(t.insert(), {"value": Numbers.five})
assert connection.scalar('SELECT value FROM data') == 'five'
Would at least make it possible to gather from the documentation what to expect.
For my own edification - why does SQLAlchemy insert the key of the enum instead of the value? I haven't been able to find any discussion/rationale - the only thing I can come up with is that it's sort of consistent with the DeclEnum example. Obviously it's not what I expected, though.
well we go straight from pep435 and use __members__ as generically as possible. Because note, the "values" of the Enum can be any Python object, not just a string.
if you want the values to be persisted, I'd favor a new flag added to Enum to support this. Would need new docs, tests, etc.
tentative.
for the docs, these do need to have a little bit of clarification, something along the lines of "note that only the enum keys are actually persisted; the values of each enum object are ignored". PRs welcome for those in the interim.
to get what you want right now you'd need to make a new object that has some alternate style of
nevermind the doc part I added a line | https://bitbucket.org/zzzeek/sqlalchemy/issues/3906/support-option-persisting-of-enum-values | CC-MAIN-2017-43 | refinedweb | 615 | 51.95 |
Remarks by Jim Allchin, Group Vice President, Platforms, Microsoft Corporation"The Next Step for the Windows Platform"Microsoft Professional Developers Conference 2005Los Angeles, CaliforniaSeptember 13, 2005
ANNOUNCER: Ladies and gentlemen, please welcome Microsoft's Group Vice President, Platforms, Jim Allchin. (Applause.)
JIM ALLCHIN: We're going to have a lot of fun in the next two hours. First, we're going to have a little trip down memory lane. How many of you remember Windows 1.0? How many of you? (Applause.) Well, it's hard to keep it in perspective, it's been so long ago. So I went to the company museum and had one shipped down here. Come on over here. I want to make sure you get a good look at this.
Here's an IBM XT. Can you guys see this? So I have Windows 1.0 on it here. I'm just going to start it up. Look at this thing.
See this hourglass; they didn't make hardware fast enough back then either.
Now I want to bring up one of the games to give you an idea of the graphics power we had.
So why do I bring this up? Because I want to drive home how far the industry has come, and the second is it's because of you that Windows has been so successful, and I want to thank you for all the work that you've done to make the platform successful. (Applause.) It's you!
So this is the beginning of the third decade, that's right, this year is the 20th anniversary for Windows. It's awe-inspiring growth. During the first six years of Windows 1 through Windows 3, the industry shipped give or take about 30 million PCs. Fast forward to 1995, that year we shipped give or take about 60 million. Think about this: During the first 90 to 120 days after [Windows] Vista ships, we'll surpass that number of PCs shipped. It's mind-blowing.
Think about this: Each year about 500 million people go and buy Windows-related devices. That's massive opportunity for us.
In terms of growth, in terms of just raw numbers, Windows has expanded into all nooks and crannies, if you will. It's in embedded devices, it's in all types of laptops, all types of standard PCs, it's in your living room with the Media Center Edition, it's in portable media devices, it's in the phone. It's amazing the type of progress that we've had in that space.
The apps from you, incredible line of business apps, incredible games, incredible communications and incredible entertainment.
Now, whether it was back in the early days of 1985 or whether it's today, the first movers are the ones who have had the biggest advantage. I want to show you a first mover for Windows Vista. There's a company, an innovative game company called Crytek, that's going to have a game that's going to ship simultaneously with Windows Vista. Now, actually they won't let me show you the game today, but they will let us show you a video showing some of the power of the platform.
You have to understand that the PC will always be ahead of the game consoles in terms of power, just because it's an extensible platform and can keep evolving.
So we're going to roll the video in just a minute, and I want you to look at the realism that is possible. I want you to look at the shadows, the light, the way the leaves move in the wind. Can we roll that video?
(Video segment.)
(Applause.)
JIM ALLCHIN: Now, you could think of this as the culmination of the last 20 years or you could be like me that thinks about it as this is just the beginning of the next 20 years.
And that's what we're going to talk about today. You've seen flashy demos, but for the next two hours we're going to get a little bit dirtier in bits and bytes. I'm going to talk about the Windows platform and give you an overview, I'm going to talk about the four pillars that I talked about at the last PDC, the same four pillars. We're going to do a programming lab; that's right, we're bringing it back, popular demand by some of the architects to come out and show you code samples. We're also going to show you real apps, both a sample app and one from an actual company that's using the [Windows] Vista platform. Lastly, I'm going to talk about the opportunity for you and for Microsoft together.
Windows Vista Timeline
I stood here two years ago in this convention center as we were starting the journey for Windows Vista. Since then, we've released two versions of the Media Center, two versions of Tablet, we've released the 64-bit client, 64-bit servers, new versions of the server, a massive update of Windows XP SP 2 for our customers, and we've shipped Windows Vista Beta 1.
Now at that PDC, I talked about more transparency and we have been following through on that. We gave you the bits in the early alpha stages of "Longhorn" and that feedback was incredibly valuable to us. We've also been providing Community Technology Previews; we've provided four of them so far. We started back in November, and we're trying to get the bits out to you so you can give us the feedback.
The PDC is an important milestone for us. We've got two more after this before we're going to be shipping.
Beta 2 will be coming as soon as we feel like we've got it in the bag in terms of the basic set of features, and we will go broad with that to a wide variety of customers, and for all the audiences.
We feel very confident about broad availability by the end of 2006. Now, why do we feel that confidence? We feel the confidence because we redid the way we were building Windows. During the last two years, we completely re-engineered engineering, and maybe in some of the talks around here, and the birds-of-a-feather sessions, we can spend time explaining how we did that, because it's given us more confidence to give out Community Technology Previews, and to know that the builds are going to be of quality. And every day we're cranking out a new build. It gives us the confidence to know that we're going to be able to hit the end of the year.
Now, everybody talks about the Internet. Most people don't talk about the things happening around the Internet. Both the Internet and all the things happening around the edge of the network are the backdrop for Windows Vista. The Internet is the thing we all live in, float in; it's both a destination, but it's also a pipe, and I don't think people understand how incredible that pipe is, and what that's going to mean.
Now, I want to talk about why I think the edge is so exciting. The proof points: first, hardware abundance; just think of the power of the CPU. Most smart phones have more power than that machine over there today. Think about the graphics processing unit, incredible power, massive storage, incredible communications bandwidth. How about all the form factors, everything from the living room, nice A/V rack-level systems, to game players, and phones that are basically cosmetic accessories, these tablets where we've made handwriting the next step in computing; amazing form factors.
Peer to peer is another change that's happening in the industry. And I'm not talking about just music, I'm talking about what's happening in terms of being able to share photos, or doing video calls, or even high performance computing, that's all about peer-to-peer technology, and that's happening because the Internet is just a pipe, and the happening area is on the edge.
The user model is also changing. It used to be on the Internet you would just browse around. Then search became the thing. And today, I think it's more subscriptions, where you're actually going and getting the information once that you need, and having the information come to you so that you can act on that information, and do the things that you want to do locally. And you will be acting on that information locally, because there's so much power there. You're not going to be editing some video, home video, across the Internet, for a very long time.
I also want to point out that even if you're a destination on the Internet, if you look at what's happening, those services are spending their time trying to differentiate on the edge, because they want that user interface differentiation. Finally, because there's so much happening on the edge, it's also a malicious attack point, and it's something that we, as the operating system company and the platform provider, along with you, have to be very knowledgeable of. Now, the Windows platform was designed specifically for this environment. Here's a picture of the Windows platform, covering these four pillars. Do you remember this? This is the same general picture I showed two years ago.
Now, why are we unique? We're unique because we have an end-to-end approach to this. All the pillars work together. Other companies may have great solutions in one pillar, but you're forced to glue together another pillar perhaps from another company. You might have a flashy presentation system from one company, but you have to get a data layer from another. In [Windows] Vista, we've got one model, one common architecture, with components to manage presentations, data, and communications all built on a safe and reliable foundation.
Architectural Principles
What I'm going to do is go the next level down and just talk about the architectural principles for each of these. First, presentation: the goal of this pillar is that we want to provide every piece of technology you need for going from HTML to smart clients. We include a deep compositing engine that handles 2-D vector graphics, 3-D, images, text, media, all composited together in real time. And if there's hardware, we use the hardware assist. Another important thing that we're doing in this pillar: you will find that data is usually trapped in fairly ugly tables or grids. We're trying to unleash that and let you do rich visualizations, to do the binding from data, and make apps like never before.
In the data space, the goal of this pillar is to store, manage, access and find data, whether it be local, on the network, or on the Internet. Now, this particular pillar has had major breakthroughs since the last PDC, and I'm going to talk about that a little bit later. But the gist of it is, we know that it's important to be able to access objects, XML, and relational data in a common way across multiple stores. And we think that we've found an innovation to help you do that.
In the communications space, the goal is to allow apps to communicate securely and reliably, whether it's to devices or to other people. We've made this system, this pillar, protocol agnostic, a big change from the last PDC, and based on Internet standards. Now, all the other layers wouldn't be very interesting if it wasn't for the base system, and in this system we're trying to do something very, very focused in Windows Vista. This pillar probably has more energy in it than any of the others. We're trying to write better code, and have it better tested; I sort of mentioned that earlier. We're also focused on isolation, isolation at all levels, whether it be at the network level or in terms of the way processes actually operate on the system. Great device management, and an incredible focus on deployment and servicing. So, that's at the first level.
Windows Vista Architecture
Now, let's go deep. You can read this, right? Good, because there's going to be a test on it at the end of this conference. This slide is available on the MSDN site. It's also going to be covered in lots of the sessions. I bring it up because I think it will be helpful, as you look at this diagram (I also think it's on comnet), for you to learn the different terms that we're using here so that you can map them to the sessions. [Windows] Vista, as you can see from this diagram, is a very broad and deep system. We've got substantial innovation in each of the pillars and in making them work together. Hundreds of engineers, as Bill said, are here from Microsoft to go deep. And there's something like 150 sessions on this. There's one other thing I wanted to mention: when you look at this slide, you'll see that I've highlighted the changes since the last PDC, to give you an idea of the things that we've changed and learned from your feedback.
Now, I want to take the next step down again, and I want to talk about the base services. As I said, this is a prime focus area for Windows Vista, hundreds of features, and I have time to only cover a few. Let me talk about deployment and servicing for just a minute. Deployment is a big problem for corporate accounts. They have lots of images that are huge, and they have to have individual images for the different computer environments. What we've done is create an imaging model so that we can pack images together and share what's common, so the size is much smaller. We also have offline servicing. Another aspect of deployment is something we call NAP, or Network Access Protection. That's the idea that you can take a machine that's been out in the wild and bring it into an environment that's protected, and the system goes into quarantine until it passes a certain set of deployment scripts. We've also worked on customer feedback. Customer feedback is the way we have to understand what's going on. As systems get in trouble out in the field, we get that information back at Microsoft, and we also share it with you.
For example, some of the information that we have gotten back has shown that some of the problems aren't our software, aren't your software. In fact, they're in the hardware. One of the things we're doing in the reliability space is adding hardware diagnostics, along with hardware monitoring, to the system. When we start to smell something going on in memory, which may not be ECC'd, we can do something about it. Or when we start to smell that the disk is starting to have faults, we can do something about it.
We're also reducing reboots, we believe, by at least 50 percent in terms of configuration, and that's also something that you should work on. Let me explain why. We've added something called a Restart Manager, or Reboot Manager, in [Windows] Vista, and if you call it, it can shut down certain parts of the system and keep the rest of the system running, so you can replace certain DLLs without the system having to reboot.
We've also added transacted storage, transacted storage across the registry and across the file system; in fact, we put basically a distributed transaction monitor in the system, and you can enlist your own data store into it, so that you can get one transacted update across all of them.
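The all-or-nothing behavior he's describing can be illustrated with a toy sketch. This is not the Windows kernel transaction manager or the TxF/TxR APIs; it is a hypothetical Python illustration of transactional update semantics, using a staged copy and a directory swap so readers never observe a half-applied change:

```python
import os
import shutil
import tempfile

def transacted_update(updates, root):
    """Apply several file updates all-or-nothing.

    `updates` maps file names under `root` to new contents. A staging
    copy of the directory is built first; only if every write succeeds
    is the staged tree swapped into place, otherwise it is discarded
    and `root` is left untouched.
    """
    parent = os.path.dirname(os.path.abspath(root))
    stage = tempfile.mkdtemp(dir=parent)
    try:
        # Start the staging copy from the current committed state.
        for name in os.listdir(root):
            shutil.copy2(os.path.join(root, name), stage)
        # Apply every change to the staging copy, not the live tree.
        for rel, data in updates.items():
            with open(os.path.join(stage, rel), "w") as f:
                f.write(data)
        # Commit: swap the fully updated tree into place.
        old = root + ".old"
        os.rename(root, old)
        os.rename(stage, root)
        shutil.rmtree(old)
    except Exception:
        shutil.rmtree(stage, ignore_errors=True)  # roll back
        raise
```

A real transaction manager coordinates multiple resource managers and survives crashes through logging; the sketch only captures the commit-or-roll-back shape of the feature being described.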
Let me talk about security for just a minute. There's a lot of information, and I've just got a few features on the slide. One I need to talk about here is called User Account Protection. I think about it as just running as a standard user. That is going to be the default in Windows Vista. We corrected the Windows Vista system so that it can operate in a limited way, running as a standard user. (Applause.) We need to make sure that your applications also support that; it's a very key ask that we have of you.
On the performance side, let me just give you a demo of some performance stuff. So, what I have here is a Windows Vista machine, and have you ever had a machine that just seems to get slower over time? (Applause.) Well, we're trying to do something about it. The first thing is, we're trying to automatically optimize this system so that you don't have to. Why are users having to think about defragging the disk, and reoptimizing the way the blocks are on the disk? You don't have to with Windows Vista. It's just done automatically. That's the first thing we're doing.
The second thing we're doing is some great innovation, something that we call Superfetch. Now, what Superfetch does is enhance the virtual memory system, which typically looks over things like maybe seconds and minutes to decide the best usage of memory. What Superfetch does is look over minutes, hours, days, months, years, and it optimizes the system based on how that system has been used.
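That long-horizon idea can be sketched with a toy predictor. This is a hypothetical illustration, not Microsoft's Superfetch algorithm; the hour-of-day bucketing and the program names are invented for the example:

```python
from collections import defaultdict

class UsagePredictor:
    """Toy long-horizon prefetch ranker (illustrative only).

    Records which programs were launched at each hour of the day over
    many days, then ranks prefetch candidates for the current hour by
    how often they were historically used at that time.
    """
    def __init__(self):
        # hour of day -> program name -> launch count
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_launch(self, program, hour):
        self.counts[hour][program] += 1

    def prefetch_candidates(self, hour, limit):
        # Most frequently used programs for this hour come first.
        ranked = sorted(self.counts[hour].items(),
                        key=lambda kv: kv[1], reverse=True)
        return [prog for prog, _ in ranked[:limit]]
```

A real implementation tracks memory pages rather than whole programs, weights recency, and handles events like boot and resume from hibernate, but the ranking shape is the same: history decides what to pull into free memory before it is asked for.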
The best way to show this is I'm going to come here and I'm going to start a script and this is without Superfetch running. This is just going to go through a set of applications and start them up. This is a 512-megabyte system, it's probably got half of the memory available for applications and data, and as you can see, this would be like a Windows XP machine that was cold booted, this is what you would expect. It takes a very long time for Outlook to start, because it's also bringing in DLLs that are also used in some of the other Office applications, and it just sits and reads these in.
Now, if you were to immediately restart this in XP you'd get pretty good performance, because the virtual memory system will immediately take advantage of the fact that those pages were there. But what if you rebooted? No matter how long you waited, it would be the same cost of waiting through this whole event sequence again. Or what if you went away for lunch and came back, or what if you hibernated? In all those cases the system would actually be slower.
So what I'm going to do now is I'm going to make the system cold again, it's just going to clear memory, now I'm going to start Superfetch, and I'm going to bring up a monitor that I've got here and I want you to just look at the green line here. What's happening is this system is now pre-fetching all the programs and data that over a long period of time have been used on this machine, and it's just bringing it into memory. We'll remember this coming back from hibernate, we'll remember it on boot and the like. So if you don't touch your machine it's sitting there optimizing and getting it ready to do the things that you typically do, only do them faster.
So now if I run that same script, let's just see how performance goes. I didn't even let Superfetch complete, but you can get an idea. Is this cool or what? (Applause.)
So Superfetch works great if you have a reasonable amount of memory, and it works fantastic if you have boatloads of memory. But, what if you don't have boatloads of memory? I wonder what you would do then? Well, we thought about that. We said, you know, a lot of people have these USB memory sticks. I wonder if we could take advantage of those, to make them part of the virtual memory system? I just plugged in this USB memory stick, any USB memory stick, and as soon as it recognized it, we just got another 500 megs of memory on this machine. (Applause.)
And as you can see, Superfetch is just taking advantage of all of it. Now you say, what if you pull it out, will we lose data? No, you won't, because we do write-through. What if I pull it out, can I use it in some other machine and somebody could steal my data? No, you can't, because we encrypt it. And we actually do 2x encryption, so even on fairly small keys we can take advantage of it. We think this is a great innovation that will make Windows Vista get faster over time, and make it more applicable even on a laptop where you may not be able to add the memory.
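The write-through behavior he describes is easy to sketch. This is a hypothetical toy, not the actual Windows Vista feature (and it omits the encryption he mentions); it just shows why yanking the cache device loses no data, because every write lands on the backing store first:

```python
class WriteThroughCache:
    """Toy write-through cache (illustrative only).

    Every write goes to the authoritative backing store immediately,
    so the removable cache device can disappear at any moment without
    losing data; it only ever holds redundant copies.
    """
    def __init__(self, backing):
        self.backing = backing   # authoritative store (the disk)
        self.cache = {}          # fast copy (the USB memory stick)

    def write(self, key, value):
        self.backing[key] = value  # write-through: disk first...
        self.cache[key] = value    # ...then the removable cache

    def read(self, key):
        if key in self.cache:      # fast path: hit the cache
            return self.cache[key]
        return self.backing[key]   # miss: fall back to the disk

    def eject(self):
        self.cache.clear()         # pulling the stick loses nothing
```

The contrast is with write-back caching, where the cache may hold the only copy of recent writes and removal would lose data.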
I want to move on. (Applause.) I'm going to show you another demo. This time it's about how we've taken applications and been able to run them in low integrity mode. I talked about standard user. And if you run as a standard user you know that you can't hurt the operating system and you can't hurt another user, but you sure could hurt yourself; you could wander around, get some bad information, and it could damage the data on your machine. What we really wanted to do is run certain processes in low integrity mode, so they're actually sandboxed and protected from the rest of your data. And that's exactly what we did in Windows Vista.
What I've got on this machine, it's a Windows Vista machine running a VPC, and I've got IE here, and what I'm going to do is I'm going to just start IE. And what I just did is I visited an awful, evil Web site, and as you can see it just downloaded a batch file and put it in my start up group. Now, if, in fact, there was a vulnerability in IE, this is a possibility. How about [Windows] Vista? I'm going to bring up the startup group, so you can also see it. I'm just going to go to that same Web page, and if you notice down here it says, create file failed, and in fact, the folder for the startup group is empty. Very simple demo, this is incredibly powerful, because it's not just IE.
You can create sandboxes around processes so that they can be protected from the rest of your data. We think this is an incredible innovation, it means that even if we don't, as we all can't write perfect code, even if we don't capture all the vulnerabilities, IE is still protected. Your data is still protected. We think this is an incredible innovation. (Applause.)
I would love to spend time, in fact, we had all these other demos. But, because this talk is going to go so long, I'm not going to be able to show them to you. But, if you go to some of the sessions they will be able to show you many of the demos that I wish I had time to show you here. I want to move to the next pillar, which is the presentation pillar.
Presentation Challenges
There are many challenges at this layer of the system. Our goal is for you to be able to write faster applications, and to make them easier for you to write. We want to improve usability for the user through an incredible new look and feel. We want you to be able to stand out in the way you do your applications. We want to help you target different form factor devices. We want to give you facilities to build applications that are rich as well as smart, and we want to integrate designers into the process.
There are an amazing number of expectations that customers have today. Now, Microsoft pioneered some of the best techniques behind the current, state-of-the-art Web systems. You can look at Outlook Web Access and see how much better that is; Hotmail is going to be changed to use the same technology. But you can get a contrast here between doing incredible things in terms of DHTML and just doing the standard things.
The next level up, people have tried to create rich islands by plopping down ActiveX controls, and within that island it's pretty cool, but it doesn't interface with the rest of your app, or the rest of your page. In the smart client space we have lots of innovations that we've been working on through the years, whether they be in WinForms, DirectX, Win32, and all the great stuff in Office. That's the world the way it is today. We know that we can do better, and we are going to do that in [Windows] Vista.
Introducing "Atlas"
One of the things that I wanted to talk to you about today is something called "Atlas." Today most browsers support DHTML, JavaScript, and XMLHTTP. And many cool Web sites are now using those. Generally people talk about this as "Ajax" technology, Asynchronous JavaScript and XML. What it shows to me is a demand for richer experiences. If you have done any of this code today you know it's pretty hard. You have to be an expert in DHTML and JavaScript. You'd better understand all the differences in all the different browsers, and there are subtle differences. You'd better be good with some tools, because the tools aren't as great as they could be, and debugging is, frankly, pretty hard.
So what we're trying to do is create a framework, an extension of ASP.NET, that will let you build rich Web experiences without the pain that you're currently going through doing this client-side scripting. So we're providing this client framework that runs in any DHTML-compatible browser, and it's deeply integrated with the familiar tools that you're already used to. And you'll see a demonstration of that in just a minute.
Now, the Windows Presentation Foundation is a piece of software that's composed of two pieces, an engine and a framework. Now, we think we are light years ahead of everybody in this space. In the engine, we have a single runtime that can handle browser-based and forms-based content, graphics, video, audio, documents, all composited together in real time, as I said before. Now, what does this mean to you? And I'm not talking about just the desktop, I'm talking about your app. Have you ever tried your apps on a high DPI monitor? Do they look good? If you've written to the engine using this technology, you don't have to think about high DPI monitors anymore, because we do the work for you. There are no special APIs you have to use; it just happens automatically.
The other piece of the Windows Presentation Foundation, and as Bill said, this used to be called "Avalon," is the framework. It's an extensible environment where you can subclass the controls that we provide, or you can write brand new controls. You can use a declarative markup language called XAML, or you can use a procedural language, and we do the rest for you.
Now, wouldn't it be nice to have a subset of this on devices so you could target them, or how about other systems? Well, we've been asked, and we're introducing something called WPF/E, the Windows Presentation Foundation/Everywhere. Now, this is very early work, but the concept is that we're going to create an interactive experience on devices and the PC. It's a strict subset of the Windows Presentation Foundation. It's a lightweight download. The programming model is XAML, but instead of a procedural language like C#, we're going to use JavaScript. You have access to all the WPF tooling that you currently have. And this is something where you're going to be able to start providing feedback to us basically now. I think the best way to wrap together this whole pillar is to see a demo.
Before I show you the demo, I'm going to show you how this all looks when you bring pieces together. We think that we have taken each of the areas that we were a little bit weak in before and expanded beyond anything anyone else in the industry has done. Let's look at all the pieces together with a demo. I'd like to invite out Darryn Dieken to walk through a demo.
Hey, Darryn.
DARRYN DIEKEN: Thanks, Jim. (Applause.)
Jim, I know you love movies.
JIM ALLCHIN: I do.
DARRYN DIEKEN: And I know you use the Netflix online service to order them. It's a great service that many of the people in the audience use. We've been working with them over the last few weeks, along with their Web site design partner Resonate, to build a [Windows Vista]-exploitive application using some of the technologies that you've just talked about. There were really three things we were trying to accomplish with the application. First, we had to create a much more compelling user experience, one that's more usable and more interactive than what you could do today with the existing technologies on the Web. The second thing was to use software as a way to differentiate the experience they create from their competition. And the third thing is, being a media company, they want to run on a wide variety of devices.
So what we've done is we've created a Netflix application using Windows Presentation Foundation that runs on a PC, a Tablet, on a Media Center, as well as on a cell phone. So let's take a look.
The first thing you'll see here is that this is a WPF application. Netflix exposes all of their data using RSS feeds, so what we've done is use WPF to create rich data visualizations of those services within this application. You'll notice across the top here we have a list of recommended videos, a Top 10 list, a Top 100. I can scroll through this list and see different information. There's the recommended list, things like that.
Let's take a look at one of these. As I hover over I can get more information on these, and you'll see some nice little animations there. I have a synopsis that tells me a little bit about the movie. I can get more information on the cast. I can even go and see a review on that, using the community of Netflix users that are out there. Again, all this data is coming in over an RSS feed. So a highly rated movie, let's add this to our queue.
Here's another one, "Mad Hot Ballroom," it looks sort of like a good one. What do you think?
JIM ALLCHIN: That sort of fits with the music you started off with.
DARRYN DIEKEN: That's right. Let's take a look, here's a new menu item called preview. I click on that, directly within the application I'm able to view high quality video. You'll notice here I can still navigate to the other menu items, see different information, go to the preview. I can control the video directly inline here. Let's add that one to our queue, as well.
I'm a huge "Sopranos" fan, and this weekend I was thinking about adding some of these. So I'm just going to go here and add a couple of these videos. Here's another one, I see, one more. I love those animations, don't you? I could just keep adding them all day long.
JIM ALLCHIN: What's this at the top?
DARRYN DIEKEN: So I add another one here just for fun. The thing across the top is called the accordion view. When I have movies in my queue I'll generally have 80 to 100 items in there, which makes it hard to visualize all those on one page. So what we've done is we've created this accordion view, which allows me to expand and contract all these things using the 3-D capabilities of "Avalon," and I can see everything within my queue. You'll also notice I have an expand tab so that if I want to see a bigger picture of what my queue looks like and reorder some of those things I'm able to do that.
We all just saw "Napoleon Dynamite" in Vail, so why don't we move that one to the back. So we can move that one back here and I'll scroll through here a little bit. Those "Sopranos" movies that we just stuck into our queue, why don't we click on those and move those to the front of the queue. So just by simply clicking a couple of buttons here, I'm able to simply manage the movies in my queue. And you'll notice here up in the accordion view the view has been updated.
One last thing on this, on this application, you'll notice here, based on the items in my queue, Netflix is making recommendations for me. So right here it's using additional screen real estate to give me suggestions on other movies that are available. A pretty cool application.
JIM ALLCHIN: Super cool. (Applause.)
DARRYN DIEKEN: So why don't we take a look at it now. Thank you. Why don't we go to a Tablet and take a look at what this application will look like on a Tablet. You'll notice here... is it on the screen? Not yet.
So what you'll notice here is it's the same application that we had before. This is running on a new Gateway Tablet using a widescreen display. And the nice thing about Windows Presentation Foundation is it's able to use the screen real estate that's available with very little developer coding. So in this case, WPF has automatically relaid out the images of the videos to take advantage of that screen real estate, and you'll notice here I can use a stylus, a very simple stylus as my input mechanism.
So here I can click on some of these and get more information, just like I did in the other application. I can see the synopsis, I can see the rating, and I can even add that to my queue.
And you notice the other thing I want to be able to do is to use this stylus on this cool accordion view up here, and so as I hover over this thing I'm able to just with a simple stylus navigate that particular accordion view.
So it's pretty cool, huh?
JIM ALLCHIN: That's great. How about on devices?
DARRYN DIEKEN: Well, there's one more thing.
JIM ALLCHIN: Oh, how about the Media Center?
DARRYN DIEKEN: Exactly. How do you like my new big-screen Media Center back here?
JIM ALLCHIN: Is this yours?
DARRYN DIEKEN: Why don't we project this up here? What do you think?
JIM ALLCHIN: OK.
DARRYN DIEKEN: Yeah, how would you like to have one of these in your home? I would.
JIM ALLCHIN: That's why the power went down yesterday.
DARRYN DIEKEN: That's right. (Laughter.)
So what you're going to see in here is we're running that same application running on the Windows Media Center Edition. So let me navigate down to more programs and you'll see the Netflix application on the menu there. So I'll click on that and launch the application.
You'll notice it's a very similar user experience. Across the top I have my recommended list, I could scroll over to the top 100 again, I can scroll through all the list of videos down here. I'm going to go back to my recommended list.
There are a couple things that developers need to think about when they're targeting the Media Center. First of all, they need to be able to use a remote control to navigate, which means all the menu functions need to be simply available using up, down, left, right, and the okay button. The input device is automatically handled by the Windows Presentation Foundation, so developers just need to write a little bit of code to make those menu items and functions easily accessible.
The second thing is everything needs to be very big and bold, because you'll be sitting in your recliner watching a video, and you're ten feet away or so, you want those icons and the menus and everything to be big and bold. With Windows Presentation Foundation you're able to scale that user interface up very easily with just a small amount of code to optimize it for the Media Center.
And finally, you want the focus to be in the middle so the user can easily see what they need to do in order to work with that application.
So using Windows Presentation Foundation it's very easy to create this. The other thing, Netflix is a media application, so obviously you want to watch a video on here. So let's watch a trailer of this particular video here. You'll see inline in the application I'm able to simply get that video, that live trailer within the context of my application and see it, so Netflix can control the branding and everything of their apps.
So let's close this out and we have one last thing to show, Jim, and that is I talked about devices a little bit.
And so here we have a very cool new i-mate JASJAR device. It's a 500-megahertz processor, it has two cameras, it has the integrated keyboard, the screen swivels around, I can use the stylus to navigate it; a very cool device.
And so we've created a version of Netflix, in this case using the .NET Compact Framework, which is available today. In the future you'll be able to use the new WPFE technology to make it even richer to run on this particular device.
So let's take a quick look here, let's pull out the stylus. So in this case I have the same application, I'm able to navigate back and forth with it, and I can click on this details page and see the same details that were available in all these other experiences. And I'm even able to view a rich preview directly inline in that particular application, it's being streamed down over the Internet.
JIM ALLCHIN: Is this cool or what? (Applause.)
DARRYN DIEKEN: Not bad, huh?
There is one last thing before I leave that I think is the most cool thing about this entire demo. The folks I talked about from Resonate, about a month ago we approached them and said, hey, we have this opportunity, we want to partner with Netflix, how would you like to help us out. With three developers and one graphic designer, they were able to create the Netflix application that runs on the PC, that runs on the Tablet, that runs on the Media Center, and that runs on this device; in one month with three developers. That's pretty incredible. (Applause.)
So with that, I'll turn it back to you.
JIM ALLCHIN: All right, thanks.
You know, when we were preparing for this conference, the team came to me and said, hey, we've got this really cool device. And they showed it to me and they said, I think a lot of people at the PDC might like this device. And they said, but it's greater than a thousand dollars retail. And they said, can you help me out.
So together with partners we're making this available for the people here at this conference in a limited supply for the low price of $149. And you can go to the CommNet and get in line for the limited supply.
It's hard to do this with a straight face: But wait, there's more! (Laughter.) For the first 250 people, you can get a Plantronics 320 Bluetooth headset to go with it, for only -- guess how much -- $9.95. So we are trying to help you out and, of course, everything about this device is being covered in some of the sessions here, so you can go to the breakouts and learn how to program this device.
So don't rush out right now and try to get this, just connect onto the CommNet and they'll explain how you can go ahead and get this.
Data Challenges
I want to move to the data pillar. I said earlier this is the biggest area of innovation since the last PDC. I want to read the challenges here, because we've updated them some. How do I find and update information across different stores? How can I subscribe to the information I want? How can you use and expose the metadata scheme easily? And how can I share information with others?
Our conclusion, with your feedback and some of the work that we've done, is that a single store isn't enough, and that many stores are going to be the reality. We know that WinFS has innovations that no other store has or will have, but we also know that you need the ability both to access and to update multiple stores. We are approaching this by addressing the impedance mismatch between the programming languages and the data. We also conclude that subscriptions are fundamental, along with sharing.
So what are we doing about it? First, we've created a uniform programming model called LINQ, the Language Integrated Query, that we're going to demonstrate in just a few minutes.
What we do is we're integrating data operators directly into the language and these operators work across whether it be objects, XML or relational. You can get early drops of this at this conference.
The second thing we're doing is we're continuing to invest in search and organization technology. In Windows Vista you can search locally, you can search across your intranet or you can search on the Internet. We do this across multiple data types, whether it be mail or whether it be documents of another form. And we've made search such that you can produce the metadata or so that you can use the search functionality directly in your application.
As Chris demonstrated earlier, we've built a rich RSS platform for subscriptions directly in Windows Vista. It's a single store for the applications, so it's a real platform, it's not some simple app, so you can do your own RSS work, and you get all the benefits of the indexing technology that's built in.
Now, WinFS is still under construction, you get bits on the DVDs that we're going to give out today. We would like feedback on WinFS to know if we're still on the right track. We think we are, we want that feedback.
Communications Challenges
So I want to move on to the communications pillar. How can I write one program that talks to everything? How can I manage identity and access between applications? How do I make it work across a peer-to-peer network? How do I create applications that cross trust boundaries, and how do I share reliable and secure connections? Net-net how can I write one app that's going to talk to everything in a polymorphic way? And that's what we have been doing in the Windows Communication Foundation. It's a unified communications platform for writing distributed apps.
Now, we've made great innovation based on your feedback in this particular pillar. We've greatly simplified, we've unified the concepts of transport and channels, we've integrated ports into the concept of channels, we've made the system protocol agnostic. Last PDC we were talking about WS-Star only, now we've integrated REST/POX, and MSMQ.
We've also integrated peer-to-peer directly in the system, and because identification is such a big problem, we've created a new federated ID model that we call InfoCard.
Now, what are InfoCards? Now, we've been this path before, but it's done totally different this time, we've learned an awful lot. We've learned that you want open standards, you want anyone to be able to develop their own implementations so that there can be lots of different ID authorities.
So basically what we have is an abstraction layer that covers different ID providers with a common dialogue that hides those differences to users. So all you have to do is write a couple of APIs, write to a couple of APIs and you get identity independence easily.
Now, I've been talking about communications, so I want to show you a quick peer-to-peer app before we start the lab.
Now, if you've ever been in a coffee shop where you wanted to share with another person there, perhaps in a different company so there's no common infrastructure between you, or been in a conference room where you have a vendor in and you want to share, say, a PowerPoint; well, we have a new technology called People Near Me that's the basis for the communications of PeerNet that we have in Windows Vista.
So what I think I'd like to do is have Darryn Dieken come back out and help me demonstrate an experience, which we're including in Windows Vista. The main reason to show this experience isn't specifically about the experience, but to show you the type of apps that you could build yourself.
So I'm going to bring up this Windows collaboration experience, I'm going to start a new session here, I'll call it PDC.
DARRYN DIEKEN: I've already logged on, on my machine here, Jim.
JIM ALLCHIN: OK, great.
And I'm going to type a password and I'm going to start this session.
So what's happening now is it's searching, it's creating that session, and you should be getting a popup momentarily.
DARRYN DIEKEN: So what this is doing is over the peer-to-peer network sending an invitation to me, and allowing me to subscribe to that session that you just created, right?
JIM ALLCHIN: So I've got Darryn here, just found him.
DARRYN DIEKEN: Yeah, there I go.
JIM ALLCHIN: I'm going to send him an invitation.
DARRYN DIEKEN: I'm going to accept it here, I'm joining the session.
JIM ALLCHIN: So now he's accepted, I can see it on my screen.
What I'm going to do is I want to project this PowerPoint presentation, so I'm just going to drop it into the session. And as you can see, he can see it on his screen. And if I go in to project mode, he can see it. And, in fact, any of the people that are created in this PeerNet environment could see that same presentation. And I could walk through the presentation here, and he could look at it.
Do you think this presentation is right?
DARRYN DIEKEN: I don't think so. I think we need to make one quick change to this, based on some data.
JIM ALLCHIN: OK, let me send you over the PowerPoint.
DARRYN DIEKEN: OK, so peer-to-peer it's just sending that file over. I see it over here in my handout section, so I'm going to double-click on that, and it's going to open up PowerPoint in here.
We've gotten a little over budget here, so I think we need to change the title of this presentation real quick to the lab over budget plan. I'm going to save this and using the file replication services in Windows Vista it will take that file and replicate those changes back to you, so you should be able to open it back on your desktop now.
JIM ALLCHIN: And there's the over budget.
CHARLES FITZGERALD: You'll be able to see it.
JIM ALLCHIN: That's cool. Thank you very much. (Applause.)
The important thing on this particular demonstration is not the particular experience, which we haven't completed yet for Windows Vista, but it's to think about how you and your application could take advantage of it. This is very, very cool underpinnings. It uses IPv6, so it makes it through lots of NATs and firewalls. It uses the file replication capability and it creates a PeerNet. Even if you don't have an infrastructure node in wireless, it will automatically put the system in an ad hoc mode, and you can just use it through some simple APIs.
OK, how about some code? I've asked four architects to come out, and during the next 40 minutes walk through each of the pillars. So I've got Don, Anders, Scott and Chris. They're going to touch on all these different areas that I've been talking about in Vista.
Don? (Applause.)
DON BOX: Thank you, Jim! Thank you, Jim. I've got an itch, I've got to write some code, I've got ants in my pants and I need to code.
So just to tell you quickly what we're going to do, on the data front we're going to look at having a single programming model for query and transformation over any kind of information source; we call that LINQ. On the communications front we're going to see that we've got a single programming model for hooking software and applications, be they Web applications, enterprise applications, any kind of application; we call that Indigo. And on the presentation side we're going to see a programming model that spans all kinds of devices and embraces, in fact celebrates the distinction between the programmer and the designer/artiste.
So without further ado, I'd like to bring out my good friend Anders Hejlsberg to tell you about LINQ. (Cheers, applause.)
ANDERS HEJLSBERG: Thank you, Don. Thank you.
OK, before I get started, I want to ask you one question. In your application, how many of you access data? (Laughter.) I know, it's sort of a stupid question, but what I take away from that, seeing your hands and that laughter is that when you're writing your data applications in C# or VB, you're not just having to master C# and VB, you're having to also probably master SQL and master the APIs that bind these two world together.
And this isn't always the easiest of programming experiences. In fact, I often have, all too often have programmers tell me that they feel more like plumbers than programmers when writing data intensive apps.
Now, what we're doing here with Project LINQ is that we are taking the concepts of query, set operations, and transformation, and we're making them first class in the .NET platform and in the .NET programming languages. Rich query that you could previously only write in SQL or in XQuery, you can now write in C# or VB, going against any kind of data source, be it objects, relational or XML. (Applause.)
Thank you.
Don has on his box here installed a tech preview of the LINQ project that we are handing out on DVDs after this keynote session, so you can go install and experiment with it yourself, and we're going to start some code here and write a little project.
We're going to try first to query some in-memory data. The ground rule with LINQ really is that you can query any .NET array or collection. If it implements IEnumerable<T>, you can query it. In fact, if you can "for each" it, you can query it.
So we're going to try and just query some in-memory array. So we're going to include System.Diagnostics, and then we're going to call Process.GetProcesses to get ourselves an array of processes. And then we're going to write a query, so we're going to say "from p in Process.GetProcesses()", and let's say we were only interested in those processes that have a working set of more than 4 megabytes, so we're going to say "where p.WorkingSet is greater than 4 megabytes" and from that -- well, Don is not very good with hex here. And from that we're going to select out the process name and the working set.
DON BOX: So we don't have to get all of that information, we just pull out the piece we want.
ANDERS HEJLSBERG: Just the pieces that we're interested in.
Now, in this query you're seeing a whole host of new language features that I'm not going to get into right now, but that we will cover in the talks that come later.
Now, the result of this query is itself something that can be enumerated, so let's write a "For Each" loop and see what the result of the query is. So for each item in the query, let's Console.WriteLine, and Don will do a bit of formatting code here.
DON BOX: Would you like to help me with this, since you are the only person on the planet who knows all the formatting codes?
ANDERS HEJLSBERG: I'll do it. Zero, minus 30.
DON BOX: Yes, very intuitive.
ANDERS HEJLSBERG: It's very intuitive. (Laughter.)
DON BOX: Do you put this on your resume, Anders?
ANDERS HEJLSBERG: And then one comma 10.
DON BOX: But not minus 10.
ANDERS HEJLSBERG: Colon.
DON BOX: Colon? NN0?
ANDERS HEJLSBERG: N0, let's do N0. How many of you know what that is?
DON BOX: It will be on the test, please remember it.
ANDERS HEJLSBERG: And then we're going to print out then process name and the working set.
So let's try and run, and here we see the result of our query. (Applause.) Thank you. Processes, working set.
Now, one thing we notice here is that this stuff isn't ordered by anything, so why don't we add an order by clause to our query. There we go. No, no, order by. And let's order it by P.workingset descending, so let's get all of the memory hogs first, and let's try and run the query now.
DON BOX: And, you know, I'm curious, Anders, what if I scroll to the top of the list will I find here? (Laughter.) Emacs would have been even higher. (Laughter.)
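The query built up in this exchange can be sketched in shipped C# 3.0 syntax (the 2005 preview bits differed in detail); the `WorkingSet64` property and the exact format strings are stand-ins for what was typed on stage:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class ProcessQueryDemo
{
    static void Main()
    {
        // Query the in-memory array returned by Process.GetProcesses():
        // filter to working sets over 4 MB, largest first, and project
        // just the two fields the demo prints.
        var query =
            from p in Process.GetProcesses()
            where p.WorkingSet64 > 4 * 1024 * 1024
            orderby p.WorkingSet64 descending
            select new { p.ProcessName, WorkingSet = p.WorkingSet64 };

        foreach (var item in query)
        {
            // "{0,-30}" left-aligns the name in 30 characters; "{1,10:N0}"
            // right-aligns the working set in 10 characters with thousands
            // separators -- the format codes Anders dictates in the demo.
            Console.WriteLine("{0,-30}{1,10:N0}", item.ProcessName, item.WorkingSet);
        }
    }
}
```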
ANDERS HEJLSBERG: Moving right along, LINQ isn't just about querying in-memory data, it is also about querying relational and XML data. In fact, as part of LINQ, we're introducing a new API called DLINQ that gives you Language Integrated Query for relational data and allows you to map tables and databases onto classes and objects in C# and VB.
The first thing we're going to do is just go to SQL analyzer and look at a little table that we have in a database here. We have a table called process description that contains two columns, process name and description. So basically we have a bunch of descriptions, keyed by their process name.
Now, in order to access this data in our program, we're going to declare two classes that map the data into the object world. The first class we're going to write is a process description class that has two fields, process name and description. These could have been properties. And then we have some CLR metadata that maps these fields onto columns in the database and finally we have a piece of metadata that maps the class itself onto a table name in the database.
Now, the second class we write is a class that represents the database itself. It's a strongly typed representation of the database, and in here we simply declare one instance variable representing the process descriptions table and then an attribute that maps it onto the database.
And that's all we need to do. Now we can access the database as objects. So we're going to create a new instance of the process description database, and this really think of it as a strongly typed connection to the database.
And let's just see that we actually have the data, so let's write a "For Each" loop where we just say "for each item in db.ProcessDescriptions" and you see we get statement completion on our database here, everything, and now we can just Console.WriteLine the contents of the table, and "item.Description", so write out process name and description.
And let's try and run that. Let's scroll back. So at the top here -- well, why don't you put a read line in there just so we can see what's what here? There we go, just pause right there.
So here we see this very same table, but now we're accessing it directly in C#.
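The two mapping classes and the strongly typed database can be sketched with the attribute names that later shipped as LINQ to SQL (`System.Data.Linq`); the 2005 DLINQ preview used slightly different names, and the connection string is a placeholder:

```csharp
using System.Data.Linq;           // the shipped form of "DLINQ"
using System.Data.Linq.Mapping;

// Maps rows of the ProcessDescription table onto CLR objects. The
// attributes carry the CLR metadata that ties fields to columns and
// the class to a table name.
[Table(Name = "ProcessDescription")]
public class ProcessDescription
{
    [Column] public string ProcessName;
    [Column] public string Description;
}

// A strongly typed representation of the database itself: think of an
// instance as a strongly typed connection.
public class ProcessDescriptionDb : DataContext
{
    public Table<ProcessDescription> ProcessDescriptions;

    public ProcessDescriptionDb(string connection) : base(connection) { }
}

// Usage sketch: enumerate the table directly as objects.
// var db = new ProcessDescriptionDb("...connection string...");
// foreach (var item in db.ProcessDescriptions)
//     Console.WriteLine("{0,-30}{1}", item.ProcessName, item.Description);
```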
Now, the next thing we're going to do now is we're going to actually join these two pieces of information, the in-memory array, and the data from the database. So we're going to modify our original query, and we're going to select out one additional column called description, and the value of that column or that field rather we want to be a nested query. So we're going to say "from d in db.ProcessDescriptions", our database table, "where d.ProcessName == p.ProcessName," so join on the process name, "select out the description". And then finally, we're going to call FirstOrDefault on the result of this query to just get the first item that we get back, or null if there wasn't any entry.
And let's run it and see what happens. Oops, one thing, of course, we need to do is we need to go modify our "console.writeline" and also get rid of the read line. I got you a little worried there.
DON BOX: My heart rate was up a little bit, thank you, Anders, yes.
ANDERS HEJLSBERG: There we go. Let's try and run it. And here we now see a join of these two, of in-memory data with data in the database. (Applause.) Thank you.
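A runnable sketch of the join, with an in-memory list standing in for the database table so the snippet is self-contained; in the demo the descriptions come from the DLINQ table reached through the strongly typed context:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class JoinDemo
{
    static void Main()
    {
        // Stand-in for the ProcessDescription database table; the rows
        // here are invented sample data for illustration.
        var processDescriptions = new List<(string ProcessName, string Description)>
        {
            ("devenv",  "Visual Studio"),
            ("outlook", "Microsoft Office Outlook"),
        };

        var query =
            from p in Process.GetProcesses()
            where p.WorkingSet64 > 4 * 1024 * 1024
            orderby p.WorkingSet64 descending
            select new
            {
                p.ProcessName,
                WorkingSet = p.WorkingSet64,
                // Nested query: join on the process name, take the first
                // description, or null when there is no matching row.
                Description =
                    (from d in processDescriptions
                     where d.ProcessName == p.ProcessName
                     select d.Description).FirstOrDefault()
            };

        foreach (var item in query)
            Console.WriteLine("{0,-30}{1,10:N0}  {2}",
                item.ProcessName, item.WorkingSet, item.Description);
    }
}
```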
DON BOX: So to paraphrase our group vice president, now what would you pay? (Laughter.)
ANDERS HEJLSBERG: Well, actually let's just pause a little bit and think about what you would have had to write here previously. You would have had to open a connection to the database, new up a SQL command, new up parameter objects, put all of that stuff together, right, and SQL query and strings without IntelliSense or statement completion, pray that it works, do I have it shipped over, get a data reader back, sit and iterate in a loop and manually cast all the data to the appropriate data type. So there is a lot of stuff that just is plumbing work that we're now doing for you automatically.
Now, the last thing I want to show here is how we can turn the result of this query into XML.
Part of the LINQ project is an API called XLINQ that effectively is a Language Integrated Query enabled object model for XML. Think of it as a modernized DOM.
Now, a key cool feature of XLINQ is the ability to do functional construction of XML, that's what we'll do here. So we're going to new up an XML element, and actually what better format to put this XML into than RSS form. So we're going to new an element with the tag name RSS. And one of the things we can do with XLINQ is that we can specify the contents of the XML element as part of its construction. So here we're adding an attribute to the element and then we're adding a nested element called title with the text process description. And then actually we need channel in here.
DON BOX: Yes, we do. Thank you.
ANDERS HEJLSBERG: Don will add a channel up there.
DON BOX: Good to know I've got schema cop in the room.
ANDERS HEJLSBERG: Channel and then open. There we go.
And then inside channel we want title and process description.
And now what we want now in our document is a list of all of the items in the query, so we're going to write a nested query here from P in query, the query we have just above, select new X element. So for each item in the query create a new X element called item containing sub element title containing the process name and description containing the description.
Now let's close off some parens.
DON BOX: I want that to look beautiful.
ANDERS HEJLSBERG: There you go, one more.
Okay, and now the final thing we're just going to do is console.writeline the XML, why shouldn't we just be able to in the two string method get XML out of an XML object. There we go.
Let's run it and let's see what happens. Boom, now we have the result of our query as XML. (Applause.) Let's go to the top and you can see that we basically get RSS version 2.0 channel, title, and then a bunch of items that were converted into XML.
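Functional construction as it later shipped in LINQ to XML (`XElement`/`XAttribute`); sample rows stand in for the process query, and the channel title is illustrative:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;   // the shipped form of "XLINQ"

class RssDemo
{
    static void Main()
    {
        // Sample data standing in for the joined process query.
        var query = new[]
        {
            new { ProcessName = "devenv",  Description = "Visual Studio" },
            new { ProcessName = "outlook", Description = "Office Outlook" },
        };

        // Functional construction: the whole RSS tree is specified as
        // part of the element's construction, with a nested query
        // producing the <item> elements.
        var rss =
            new XElement("rss",
                new XAttribute("version", "2.0"),
                new XElement("channel",
                    new XElement("title", "Process Descriptions"),
                    from p in query
                    select new XElement("item",
                        new XElement("title", p.ProcessName),
                        new XElement("description", p.Description))));

        // ToString renders the XML, so WriteLine prints the document.
        Console.WriteLine(rss);
    }
}
```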
So that's a very, very quick look at the LINQ project. If this sort of got your curiosity up, go to our track lounge, get a whitepaper on the LINQ project, and also come see my in-depth talk on this tomorrow at 1:45.
Thank you very much. (Applause.)
DON BOX: Thank you, Anders.
What do you think? It's OK? (Applause.) Keep going? All right.
So we've seen LINQ. Next I'm going to do a talk about how we're going to take this program, which just runs inside of a single little app domain, and make it span the Internet, make it span the enterprise, make it accessible to all kinds of other programs.
To help me out on this, I'd like to bring back my partner in crime, Mr. Chris Anderson. Please give it up for Chris. (Applause.) Thank you, Chris.
So "Indigo" at its heart is a messaging system, and so what we're going to do is take the code that we just wrote, and instead of spewing the XML to the console, what we're going to do is take that XML and turn it into a message.
So to prepare for doing this, what I'd like you to do is go up and define a new class at the top level of the namespace, call it public class LabService, and simply have one public method that returns a Message -- Message is a type that's built into Indigo -- call it HandleMessage, and have it take one parameter, which is a Message.
Now, for this to compile, we're going to need to bring in the DLLs that we use in "Indigo." So could you go to add reference and bring in system.servicemodel, system.runtime.serialization, which are the two DLLs that we use to build "Indigo," and we're also going to bring in a couple of glue DLLs, because we're going to show how this stuff integrates with both LINQ and with "Atlas." That stuff obviously isn't in the bits that you've got, it's some experimental stuff that maybe we'll see crash on the screen again.
So let's go ahead and just add ref those things. This should now compile if you add a using for System.ServiceModel to the file, which I'm sure you've already done.
CHRIS ANDERSON: Using the great refactoring support in Visual Studio 2005.
DON BOX: Good product pitch, good for you, Chris. You've got a career in marketing coming, I can feel it, baby.
CHRIS ANDERSON: I'm working on it.
DON BOX: So what I'd like you to do, Chris, is go grab main, take main, remove the entire contents of main from main and keep it in some place -- here's your creative window, paste it wherever you would like, Chris.
CHRIS ANDERSON: Okay.
DON BOX: Good choice. Now, that console.writeline, that's so 1980s. We've got to get rid of that thing.
CHRIS ANDERSON: Gone.
DON BOX: What we're going to do now is we've got some glue code to integrate the XLINQ stuff with Indigo. I'd like you to go grab those glue files out of the snippets directory. We will be shipping these on our blogs, whatever, you can see the code, there's not much there, we just haven't done all the integration in the products yet.
So what I'd like you to do now is cause up a new message. So to do that, I'd like you to say "return xmessage.newmessage". Messages have two things, they have an action, which is just the S identifier, pick a URI, please go nuts. That's a wonderful one.
CHRIS ANDERSON: I'm creative today.
DON BOX: Secondly, put the XML in the message. Lovely. We now have an Indigo service. We're done writing the service code.
CHRIS ANDERSON: Not very exciting yet. It's very dull.
DON BOX: What we need to do is, "Indigo," at its heart, is all about making two pieces of software work together. And the way we reason about how software work together is using these things called contracts. So, we need to write a contract really quickly to describe that thing we just did. So, let's define what we call on the "Indigo" team the universal contract -- you're a very fast typist, Chris. You've got all kinds of skills. So, this contract basically says, I take any kind of message, and I'll give you any kind of message. So, this describes the universe of all possible applications. Why don't you go ahead and implement that sucker, and now we're done. Our code is complete. We've got a piece of CLR code that now can talk any kind of protocol, integrate with any kind of program.
To make it exposed to the world, we need some end points. The way we get end points is by creating a server, so go down into main.
CHRIS ANDERSON: In main.
DON BOX: What I would like to do is bring in the hosting code that allows us to take a CLR class that implements a service contract and spew it out to the world. In this case, what we're going to do is say, make a service host, have it assigned to our CLR type, and we're going to have an end point with a basic HTTP binding -- this is HTTP 1.1, very vanilla -- put an address that you feel comfortable with, and we're going to bind it to the contract. In Indigo, end points are addresses, which are the names of the end points; bindings, which are the details of how the protocol works; and contracts, which describe the interaction. If you've got an end point, why don't you put a little WriteLine/ReadLine action at the bottom. And start your engines, so to speak.
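The service class, universal contract, and hosting code described here can be sketched with the attribute and class names that shipped in WCF; the PDC bits used helpers such as `XMessage` that differed, and the reply body, action URI, and address below are illustrative:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

// The "universal contract": any kind of message in, any kind out.
[ServiceContract]
public interface IUniversalContract
{
    [OperationContract(Action = "*", ReplyAction = "*")]
    Message HandleMessage(Message request);
}

public class LabService : IUniversalContract
{
    public Message HandleMessage(Message request)
    {
        // In the demo the reply body is the XML produced by the LINQ
        // query; a plain string body stands in for it here.
        return Message.CreateMessage(request.Version,
            "urn:pdc05:lab:reply", "hello from the lab service");
    }
}

class Host
{
    static void Main()
    {
        // End point = address + binding + contract.
        var host = new ServiceHost(typeof(LabService));
        host.AddServiceEndpoint(typeof(IUniversalContract),
            new BasicHttpBinding(),               // plain HTTP 1.1
            "http://localhost:8080/lab");         // illustrative address
        host.Open();

        Console.WriteLine("Service up; press Enter to quit.");
        Console.ReadLine();
        host.Close();
    }
}
```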
Now, at the last PDC, we showed you an Indigo that embraced XML, it embraced SOAP, and it used HTTP. And one thing we heard from the community since the last time we were here is, I need more access to the HTTPness of my applications. So, to get the HTTP integrated into my program, we've done some work in the basic HTTP transport channel to allow full access to the richness of HTTP. So, if you're writing HTTP GET, and you want to have URLs, that works. If you want to write a full RESTifarian app, you're in there.
So, what we're going to do now is surf to our page using the browser, and then -- a little bit of a problem. A little bit of a problem if you do a view source. Ah, so we supported HTTP GET out of the box, that worked great. The problem is, the basic assumption of SOAP is, we don't know what you're going to do, so we have this extensibility model with headers. If all you're doing is simple data transfer over HTTP, and you don't want the SOAP, we have the ability to rinse it off prior to leaving the app domain.
So, Chris, would you go back to the program, and where it says "basic HTTP binding" will you change it to "pox binding". Pox binding is a little piece of code written by my good friend Doug Purdy that basically rinses the SOAP as the messages go out, and lathers them up as they come back in. So, a reality test, open the program model.
Let's go ahead and run that thing now, and now if you hit refresh. Good job, it's beautiful. Can you do that well in "Avalon"?
CHRIS ANDERSON: We're working on it.
DON BOX: I thought so. That's great, I love that. Cool.
So, we saw that writing an "Indigo" service is pretty simple. You write a piece of CLR code, you write a contract, you wire it up, and you're done.
Now, I know that we're going to do some integration with my code and this code that we've been writing later on with both you and Mr. Guthrie.
CHRIS ANDERSON: Yes.
DON BOX: That's because we're architects. We did very contract first development, so we sat around and wrote a WSDL file, and got all
CHRIS ANDERSON: Have a few drinks, you write WSDL.
DON BOX: So, why don't you bring in -- we took the WSDL file we wrote and ran the svcutil tool on it, to bring the contract definition into a form we can program with. And there's a contract called LabContract. What I'd like you to do is go up to our service and implement it.
So, we've got that contract in the list. There's a couple of methods which are variations on the ones we've written already. The difference is, instead of producing unbounded XML, these adhere to a stricter schema, and we actually use the data contract system to make the schema integrate with our programming language. Let's go ahead and inject. We have GetProcesses, which just does a query, exceedingly similar to the one Anders and I just worked on. And then we have a second function called MatchProcesses, which does the same thing, except it takes a regular expression as an argument, and narrows the scope of the processes I'm getting to only the ones whose names match the pattern.
So, we've got our contract. Now, I know Scott is just itching to come out here, and show us how to write a client program. But, here's the issue, Scott's programs run in the browser. Scott works on ASP.NET, he works on "Atlas." A Web browser is a great thing, but we want to be able to integrate them with this service-oriented world that we live in with "Indigo." We have a little problem, you know that term you hear "Ajax," Asynchronous JavaScript and XML. Well, it's a little bit of a misnomer. One of the problems we've got is not every Web browser has XML support. Internet Explorer does, other browsers do, other browsers don't. The HTML JavaScript community that were so inspired at PDC '97 when we showed them IE 4 and IE 5, and now have embraced that lifestyle, have coined this new protocol called JSON, which is the JavaScript Object Notation, it's an alternative representation that many of the AJAX apps use to communicate between the relatively impoverished environments of the browser and JavaScript, and the backend which might be anything.
So, it turns out that my good friend Steve Maine wrote this lovely binding for us, which is part of our machine here, which allows us to serve up end points using the JSON protocol. What I would like to do is go to the app.config file. By the way, let me be clear: we programmatically added the end point just to get ourselves bootstrapped. Indigo assumes that you can do whatever you want, but most of the time the default usage is that the contract and the code are in the .DLL, and the end-point addressing and protocol information is in the configuration system. We're going to demonstrate that principle here by bringing in a service configuration element.
What Chris is going to do is bring in a service configuration element, that's not it, Chris.
CHRIS ANDERSON: Uh-oh.
DON BOX: That's the one thing I hate hearing in these, much better. Would you like to change that to the lab service.
CHRIS ANDERSON: It's much better.
DON BOX: We've got LabService, we've got the LabContract -- this is that WSDL-based contract that we talked about before -- the binding is the JSON binding that Steve Maine wrote for us. We've got an address, which is demo.json. That's it. So, we've now got a fully enabled DHTML JavaScript extravaganza. I can't think of anyone more qualified to write the client for this thing than our good friend Scott Guthrie. Come on out. (Applause.)
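The service configuration element Don describes looks roughly like this in shipped WCF configuration; `jsonBinding` was demo glue written for the keynote, not a shipped binding name, and the names mirror the transcript:

```xml
<configuration>
  <system.serviceModel>
    <services>
      <!-- Contract and code live in the DLL; addressing and protocol
           details live here in configuration. -->
      <service name="LabService">
        <endpoint address="demo.json"
                  binding="jsonBinding"
                  contract="LabContract" />
      </service>
    </services>
  </system.serviceModel>
</configuration>
```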
SCOTT GUTHRIE: What we've shown here is how you can go ahead and query for process information using LINQ, and expose it to the Web using the richness of "Indigo" services. What Chris and I are going to go ahead and do over the next two segments is show how you can build a rich UI presentation on top of it. First, using a broad-reach Web client, using new "Atlas" technology, and then, targeting rich Windows clients using Avalon.
So, as Jim mentioned earlier, Atlas is a new framework that we're releasing that enables developers to more easily build richer "Ajax"-enabled Web apps. Specifically, Atlas delivers two core components. One is a rich client-side JavaScript library that you can take advantage of, which provides a rich object-oriented environment for communications, "Ajax," and rich-UI browser tasks. The second piece is a set of server-side technologies that are integrated within ASP.NET and that encapsulate all of this in a way that feels familiar to an ASP.NET developer today.
What we're going to go ahead and do in this project is just import an existing Web project into the solution we've been working on. It's going to be taking advantage of the new "Atlas" project types that we actually just released on the Web an hour and a half ago, which you can download and use on top of the VS 2005 Beta 2 today. Specifically, you'll see within this project we have a number of script libraries, which are the Atlas script libraries, and we also have a page called Default.aspx, which we will be working on as we build up our UI.
Right now, this Default.aspx has no server-side code at all, and it also has no client-side code. It's a pure HTML page.
DON BOX: So, it has no code.
SCOTT GUTHRIE: It has no code. Code free. Basically, you can see here we have a couple of UI elements: we have our text box for searching processes, a button, and then a span. So what we're going to go ahead and do is call Don's Web service, retrieve data, and dynamically populate the page. To do that, the first thing we're going to do is add the Atlas core library into our page, and we're just going to do that using a standard script reference to Atlas Core. This gives us our core base class library, if you will, in JavaScript for "Atlas." And we're going to go ahead and point at the Web service that Don just built using "Indigo." Again, we're just going to point at it using a script reference. This is logically like an Add Web Reference today, except in this case the proxy is going to be in JavaScript.
Once we have that imported, we're going to write some client-side script. Specifically, we're going to write a button click handler that will fire whenever someone on the client hits the button. Then what we want to do inside this button event handler is just get the text box value, using the standard document.getElementById JavaScript API, and then call into the "Indigo" service proxy that we just imported at the top of our page. The way we'll do that is by calling LapService.MatchProcesses, the same method that's on the server, passing in the parameter, and then we'll pass in one extra thing, which is a pointer to a function to call back when the data comes back. This way we're not synchronously blocking the browser thread.
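In outline, the client script described here looks something like the following. The proxy name, method name, and element IDs are assumptions, and the Atlas proxy is stubbed out with canned data so the call pattern — an async method plus a completion callback — can be followed on its own.

```javascript
// "LapService" stands in for the JavaScript proxy that Atlas generates
// from the script reference; here it is stubbed so the sketch is
// self-contained. Names are illustrative, not the demo's exact ones.
var LapService = {
  MatchProcesses: function (filter, onComplete) {
    // A real proxy would issue an asynchronous request to the service
    // and invoke onComplete when the response arrives.
    onComplete([{ Name: "notepad" }, { Name: "explorer" }]);
  }
};

// Build the HTML for the results span from the returned records.
function renderProcessList(results) {
  var html = "";
  for (var i = 0; i < results.length; i++) {
    html += results[i].Name + "<br/>";
  }
  return html;
}

var rendered;
function onSearchComplete(results) {
  // In the browser this would be:
  //   document.getElementById("output").innerHTML = renderProcessList(results);
  rendered = renderProcessList(results);
}

// The button click handler: read the filter text (hard-coded here),
// then call the service without blocking the browser thread.
LapService.MatchProcesses("a", onSearchComplete);
```

The important shape is the last argument: a callback fired on completion, so the browser stays responsive while the request is in flight.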
Within this completion method then we're just going to very simply get access to the span element that we have a little bit lower on our page, where we're going to put the output. Then we're just going to go ahead and use the results that returned from the server, loop over them, and manually build out the process list that we'll go ahead and slide into our page.
One of the things you'll notice as Chris is typing is that he's able to use a nice JavaScript object model that mirrors what Don was using on the server to describe process info. He doesn't need to manually parse anything or pull anything out. Instead he can just go ahead and say results.get_length(), and then results[index].Name.
DON BOX: Someone says we have an error.
CHRIS ANDERSON: I think we have a test position open for you, sir.
SCOTT GUTHRIE: So do the job and tell us what we did.
DON BOX: No. This is direct code. If you use a Web-service proxy today and you hit it, you get back this big document full of WSDL with all these angle brackets and wackiness. Here we passed an argument, we said slash JS, and it's going to return the JavaScript proxy automatically. It is the JavaScript equivalent of question-mark WSDL. It is exactly that.
SCOTT GUTHRIE: Thank you for noticing and scaring us.
DON BOX: This is an interactive session.
SCOTT GUTHRIE: We're doing it on the fly. So the last thing we're going to do is wire up our event handler to the button, and, fingers crossed, we'll go ahead and run this now. Hit it with the browser. Again, we're not doing any post backs, we're not refreshing the entire page; instead what we basically have is a simple process explorer. Type in a little filter query, say all processes that have an A in them, hit search, and we get back a list of processes from Don's "Indigo" service on the client. (Applause.)
Pretty easy stuff, let's go ahead and make this UI a little bit richer. Specifically what I want to go ahead and do is manipulate the UI slightly, so that instead of having just kind of a boring list that we're going to manually populate with processes on the left we're going to make it a little bit prettier and templatize it. And I specifically want to implement it to have a master details relationship, so I can go ahead and drag and drop processes on the left hand side, use drag and drop as a metaphor to pull in details on the right hand side about those things. Do it entirely on the client, do all the data binding on the client, not do any post backs to the server.
If you're doing that today, chances are it's going to take you a long time to get right, and specifically to test and make sure it works in all browsers. The nice thing with "Atlas" is we make it really easy. So to help with that, we're going to go ahead and import our new controls here, which are built on top of ASP.NET 2.0 and encapsulate the "Atlas" runtime. Specifically, we're going to add the script manager, and all this does is keep track of which JavaScript libraries to pull in and page in on the fly; that way you're not downloading all the client libraries, only the ones you need for your particular page. We're going to go ahead and replace this whole span, and the basic span at the bottom of the page, with two new controls that we're going to add right now.
Specifically, these controls are the "Atlas" list view control and the "Atlas" item view control. The list view control has a template, right here, where we configure the basic functionality; it provides a way to do templated, client-side data binding against a list, and the item view lets me pull up details about an individual item. We use CSS for all styling, so we're deeply embracing CSS as the core way that we customize UI, and that integrates nicely with designers.
You can see right here we've also then added a behavior, which is going to be a drag/drop behavior. So we're saying the list view is exposing a data source called process, and the item supports the drag/drop of process data into it. That's all we need to do.
Last, but not least, we'll just go ahead and write some code to populate this, and we're going to do it on the client. All these server controls now expose client-side object models you can take advantage of. For this results control here, we're just going to access its .control property and then set the data on it, passing in the results that we pulled in earlier.
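Roughly, the page now contains markup along these lines. The control names echo the transcript, but the tag and attribute syntax here is a guess at the PDC-era bits, not the shipping "Atlas" schema.

```xml
<!-- Sketch only; tag and attribute names are approximations. -->
<atlas:ScriptManager runat="server" ID="scriptManager" />

<!-- Templated, client-side data-bound list of processes -->
<atlas:ListView runat="server" ID="processList" CssClass="processItem">
  <ItemTemplate>
    <span>process name goes here</span>
  </ItemTemplate>
</atlas:ListView>

<!-- Details pane; a drag/drop behavior lets "process" data be dropped in -->
<atlas:ItemView runat="server" ID="processDetails" />
```

The server controls emit the client-side script and object model automatically, which is why the only remaining work is handing them the data.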
With that, we go ahead and run our page and search again. You'll notice we have a slightly nicer presentation: drag and drop, pull up the details. (Applause.) Obviously this is a very basic UI we're trying to show, typing everything in front of you. Again, because it's based on CSS, it's really easy to go ahead and customize it and make it much richer.
There's one thing missing, I think, from this app: any true "Ajax" app has to have a map integrated into it. When Don and Chris' original idea was, hey, we're going to show working sets and thread counts, frankly, I couldn't come up with any realistic scenario for integrating a map on first thought. On second thought I couldn't come up with a better one either.
So instead, what we decided was, let's be a little creative. We're going to go ahead and add into this item view one extra control that's built into "Atlas" called the Virtual Earth control. So for this scenario we're just going to map the longitude and latitude of the running processes on our server. It's a common thing, why not. So now we go ahead, search for processes, where is that last server running, at the PDC. It's not terribly useful, but it's pretty darned cool.
So we've shown everything so far now with IE 7. So obviously, as Chris covered in Bill's talk, we've had a lot of investments in IE 7, it's a much, much richer browser, I think a really great browser experience. Obviously, if you're building a Web app you want it to be available to any type of browser client.
DON BOX: Including IE 6?
SCOTT GUTHRIE: IE 6 included.
DON BOX: IE 5.5?
SCOTT GUTHRIE: We often say that's cross-browser at Microsoft, and it turns out most people don't really believe that. So we're going to show it on a different machine. We're actually going to pull up a Macintosh. This is going to be running Safari, the built-in browser on the Mac. You'll notice it has the home page that's on the site today, where you can actually download the Atlas project library and run it on top of VS 2005 Beta 2 and ASP.NET 2.0. What we're going to do is actually just browse to the page that we built a little bit earlier. Notice it looks basically the same. Let's search for processes with an A in them. It looks the same. Let's drag and drop; there's the Virtual Earth. Let's move the Virtual Earth around just to show it's real. So basically you're seeing the exact same app, the exact same code, running against our server, and it works the same. (Applause.)
So "Atlas" enables you to build cross-browser apps very easily, as you've just seen. "Atlas" is also going to go ahead and enable you as a developer to take further advantage of some of the features that are in Windows and the new versions of IE. So specifically you can build new Windows sidebar components using "Atlas," you'll also be able to drag and drop "Atlas" controls from within the browser onto the sidebar and dock it persistently within the shell. Last but not least we're looking at exposing some of the built-in Windows data services, things like a local store for offline data, as well as things like My Photos, and My Calendar, contacts, and provide them in a secure way that browser clients can take advantage of using "Atlas," nicely integrate your overall experience and give a really great developer story.
So I hope you enjoy "Atlas."
CHRIS ANDERSON: Excellent, all right. (Applause.) So I love the Web, but I also love "Avalon." I think that is its real name, whatever the slide may show you. "Avalon." So I'd like to build a smart client on top of "Avalon." What we need to do first: unlike the simple Web platform we were just targeting, on the client we understand a richer set of protocols. So what we can do is, on our server side, using that great config system, add in a couple more endpoints with bindings so that we can talk natively with the WS-* protocols, which will let us do better security and some other features like that.
So we've got those plugged in, that looks great. Let's build and run that, and we're done with the server now. So you get to write no more server code, Don. The rest of this stuff is going to be UI and glitzy.
DON BOX: Which is where I excel.
CHRIS ANDERSON: Excellent. So it will be good. A new instance of Visual Studio; we're going to create a project. One of the template types in the LINQ preview is a WinFX application, which defaults all these "Avalon"-"Indigo" bits in, so if you could call that lastclient, and let's create the new project.
Wonderful, it looks beautiful. So what we want to do is play around with various UI apps and open some windows. We want to take that same contract we defined on the server and use it on the client. So I'm going to drag in Contract.cs, which is the exact same contract we had on the server side, and an App.config. Just like on the server, where we can separate out the protocol and everything else about the binding to an "Indigo" service, on the client we can do the exact same thing.
So if you would please import that, excellent. It's a new word. It's OK. Now we want to use this contract. Right. So let's create a connection to the server. Connections to the server start by creating a channel, and that channel is defined by the contract. So you're going to create an ILapContract channel; we're going to say Channel.Create, give it the type of the contract, and then give it the name we specified in the App.config. This is a level of indirection, so that we can change, under the covers, the protocol or the security model or anything else. So we've got that; we can get a result, do a simple query against the channel, and call the GetProcesses method.
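The pattern being described — a typed channel created from the contract plus a named endpoint in config — survives in the shipping API as ChannelFactory&lt;T&gt;. A sketch, with the contract, data type, and endpoint names assumed:

```csharp
// Sketch of the client-side channel creation described above, written
// against the ChannelFactory<T> shape that ultimately shipped in WCF.
// "ILapContract", "ProcessInfo", and the endpoint name "lapEndpoint"
// are placeholders for the demo's actual names.
using System.ServiceModel;

[ServiceContract]
public interface ILapContract
{
    [OperationContract]
    ProcessInfo[] GetProcesses();
}

public class ProcessInfo
{
    public string Name;
}

class Client
{
    static void Main()
    {
        // "lapEndpoint" names an <endpoint> element in App.config, so the
        // address, binding, and security mode are all resolved from there.
        ChannelFactory<ILapContract> factory =
            new ChannelFactory<ILapContract>("lapEndpoint");
        ILapContract channel = factory.CreateChannel();

        // Same contract as the server; the call looks like a local method.
        ProcessInfo[] result = channel.GetProcesses();
    }
}
```

Because the endpoint is looked up by name, rewiring the client to a different binding — the InfoCard swap shown a moment later — is purely a configuration change.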
We've got data.
DON BOX: How do we show it, Chris?
CHRIS ANDERSON: I think we should create some XAML. So if you would please open Window1.xaml, and we'll start by creating a simple list box. Excellent. And set the name equal to something. What we want to do now, in the code behind, is simply set the items source of the list box to be the data we got back from the query. Avalon natively supports data binding; all of the controls have either items-source or content properties, so you can easily inject data from anything.
One of the things we got feedback on was that the "Avalon"-"Indigo" connection wasn't as good as it should have been. So this is an example where the process data coming back is natively bindable now.
So if you would please, run our beautiful application. Rich visualization, I see it. We have process data. Not very differentiated, but before we move on, I'm a little concerned. We're now exposing our process information to anybody out in the world who knows our IP address, and what I'd like to do is have a better security model on here.
DON BOX: This is actually secure already.
CHRIS ANDERSON: That's true, it's the WS HTTP binding, so it's actually doing Kerberos. NTLM and Kerberos aren't always the most Internet-friendly protocols, so what we want to do is use the new info card system that Jim talked about, and just change the binding to the one we're going to use. So without changing any client code, we're going to rewire that endpoint to point to the info card. And this is going to give us one-click security with federated identity. It works over the Internet, it works across different companies' intranets, it goes through firewalls, does all this stuff. You can choose to trust whoever you want.
So we get the UI that pops up, we choose the info card we want, we hit submit. Now this has sent the query back with the additional credentials that were specified in the info card, and those could have come from any provider.
So we still haven't solved the visualization problem. For now, let's go back to NTLM so we don't have to deal with the UI popping up; we can switch those back. What I'd like to do is create a template, create some UI definitions for processes. So if you would, please create a ListBox.ItemTemplate. Inside there we're going to create a DataTemplate, which allows us to specify how we want to visualize something. We're going to put a simple TextBlock in it, bind it to the Name property, and then run this.
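In XAML, the template just described comes out something like this; only the binding to Name comes from the transcript, and the rest is illustrative:

```xml
<!-- Sketch of the templated list box; structure is illustrative. -->
<ListBox Name="processListBox">
  <ListBox.ItemTemplate>
    <DataTemplate>
      <!-- Each item renders as its process name; the binding resolves
           the Name property on the bound data object. -->
      <TextBlock Text="{Binding Path=Name}" />
    </DataTemplate>
  </ListBox.ItemTemplate>
</ListBox>
```

With the code-behind setting processListBox.ItemsSource to the query result, each process record flows through the template automatically.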
So again, still not quite exciting enough. We're not quite there, we'll get there. You have to appreciate what this is doing, right. We know on the server it's taking CLR objects, metadata from a database, remoting it over Indigo, visualizing here, and doing data binding natively in Avalon, that's a pretty good experience so far. But, we need that rich visualization, the developer-designer split that we talked about earlier.
So if you would, please, go to myapp.xaml, and we're going to replace this empty file with a pre-canned one; a graphic designer and a PM on the team spent a good amount of time creating a nice look for this app. The last thing we do is change the item template to be bound to the resource that he gave us, which he said is going to be called ProcessView. This really allows a designer to work continually on that file without affecting your application at all, so if we run, we're going to see a very different look for the app.
DON BOX: You didn't need to change the code?
CHRIS ANDERSON: Correct, no code changes at all. (Applause.) So we can see a couple of things here. First off, we have dials and toolbars, we're able to do nice effects like blurs, images, gradients, all the stuff you'd expect out of a nice 2-D graphics platform. But, this is really all kind of UI based. It looks like a list box still, all these things. What we want to do is we want to go to next level, we want to do something that actually integrates some documents in. Before we do that, however, we need a little space on our screen.
One of the pieces of feedback we got last PDC, very loudly, very repeatedly by many people, was that we didn't have enough of the basic controls in the box. So I'm happy to say that in the PDC build that you're getting there is going to be list view control, tree view control, month-calendar control, all those high value controls are in the box.
In addition, we've added a grid splitter, which will allow us to dynamically resize the UI, another feature that people really requested a lot. So if you would please put a content control in here; this is going to host the documents that we're going to create. Give it a name and set the grid column. Cool.
DON BOX: Now, what do I do, Chris?
CHRIS ANDERSON: So instead of creating the data binding in markup, what we want to do is dynamically generate the content in code. It's a little bit of code, so if you would please paste the snippet. If you look, all this is really doing is taking the result of that query, that process data list, and doing some LINQ queries against it, creating a set of paragraphs and runs and bolds; it's programmatic manipulation of a document tree. It's fairly straightforward.
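A hypothetical reconstruction of the pasted snippet: walk the process records and project each one into FlowDocument elements, bolding the name. The property names (Name, WorkingSet) are assumptions.

```csharp
// Hypothetical reconstruction of the document-building code described
// above; property names are assumed, not quoted from the demo.
using System.Collections.Generic;
using System.Linq;
using System.Windows.Documents;

public class ProcessInfo
{
    public string Name;
    public long WorkingSet;
}

public static class DocumentBuilder
{
    public static FlowDocument BuildDocument(IEnumerable<ProcessInfo> processes)
    {
        FlowDocument doc = new FlowDocument();

        // One paragraph per process: a bold name followed by a detail run.
        foreach (ProcessInfo p in processes.OrderByDescending(q => q.WorkingSet))
        {
            Paragraph para = new Paragraph();
            para.Inlines.Add(new Bold(new Run(p.Name)));
            para.Inlines.Add(new Run("  working set: " + p.WorkingSet));
            doc.Blocks.Add(para);
        }
        return doc;
    }
}
```

Because the document is just an object tree, the same data that fed the list box feeds the paginated document view.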
What we want to do, though, is take that document we just created and make it the content of the view. So if you would, call CreateContent, give it the result; it looks very nice. So now if you would run.
DON BOX: It's thinking, and working.
CHRIS ANDERSON: So what we see is a nice document view. And if you would, Don: it's selectable, you can page around in it, you can scroll, and you can see that Avalon automatically figures out the right number of columns to display for the document and provides a paginated view. All this gives you a rich document platform that's directly integrated with the same programming model you had for all the UI elements, bound to the same data, right?
DON BOX: Love it.
CHRIS ANDERSON: It's great. (Applause.) Thank you.
You know, there's a saying, right, all of the "Ajax" apps must have a map and all "Avalon" apps must some how incorporate 3-D. So again, we have a task of how do we integrate 3-D into a great app about mapping your processes. And so we have a little version here, which if you'd launch that please, Don.
We see on the right hand side, we have a third pane, if you could maximize the window to give us real estate. We can see that we can fly around the 3-D view on the right. It's not looking very interesting yet. That's better. If you would please, resize the window on the left you can see that's all live data, you can select it, you can scrub it. You know, that's actually a full 3-D object, so if you would right click on the sign on the right there's actually something under there. So if you'd fly down a little bit closer you can see there's something there. What is it? I don't know. Would you please right click and let it get out of the way. They have Virtual Earth in "Atlas," we have real Earth on "Avalon."
CHRIS ANDERSON: Excellent. And it's wonderful to see all those Win 32 processes enveloping the globe.
DON BOX: Win 32 does encompass the world.
CHRIS ANDERSON: So really what we've seen is on the presentation side a program model that spans all the different areas of execution you need, with "Atlas" and "Avalon." We have a unified programming model for dealing with communication, that lets the software talk with "Indigo." And in the data side we have a great new integrated query model with LINQ.
Two years ago we were on the stage and we announced a bunch of new technologies, "Avalon" and "Indigo "included, and you guys talked to us. You told us what you didn't like, what you liked, and we really listened. We hope that the stuff you see here, the pods, the control sets, show that we have been listening to what you guys have been saying. We've announced some new technologies today with LINQ and with "Atlas," and if you keep talking, we'll keep listening.
Thank you very much.
JIM ALLCHIN: What do you think about the breadth? Incredible, huh? (Applause.)
Well, for the last 40 minutes you've been seeing these guys write some code. How about looking at a real app, a sample app? You know, in order to test out a platform with this much breadth, I really wasn't sure. I asked you guys to dogfood but it's better if we dogfood. So a few months ago, I asked a team inside of Microsoft to take this platform and to write a sample app, and that's what they've been doing, create a sample app, test the platform so you wouldn't have to.
I'd like to bring Hillel Cooperman out, so he can talk to you about the work that they've done in that team with the sample app. Hillel? (Applause.)
HILLEL COOPERMAN: Thanks, Jim.
So when you gave us this job, which we appreciated, we looked at the platform, we said, wow, "Avalon," a lot of cool UI, and we said "Indigo" or Windows Communication Foundation, we can share data between PCs, and we said, well, what would be a great scenario to actually try and make happen in a real application, a real sample app using that platform, and we thought we would let you create visualizations of your photos and share them with other people over the Internet.
JIM ALLCHIN: A good sample app.
HILLEL COOPERMAN: All right, so let's take a look.
So we came up with a little codename for it, we call it "Max." Here it is. So this is again 100-percent managed, all the UI you're seeing is built in WPF, and, in fact, you'll notice since the app is relatively simple, all it does is let you make lists of photos, visualize them and share them over the Internet. We wanted to put something else on our homepage, and so we had a little extra time, and because of the productivity we get from the platform we're able to put a whole kind of newsreader hardwired to our blog, our team blog right there on the homepage. So that's kind of cool and a good way to communicate with people who are trying out this app.
So let's create and share a list of photos. From the user's perspective, you're getting access to the last seven folders that got updated or created in My Pictures. But what's cool about what's happening under the covers is that we're using Avalon's composition features to actually nest views. So you see one, two, three, four, five, six, seven thumbnail views, all embedded in one big thumbnail view, and you can see they're even rendering the images at different sizes; again, that's thanks to "Avalon."
So let's click here on these images, and just so we can see them all, let's make it a little bit smaller, and now let's add some from other folders. And again you'll notice this is two instances of our thumbnail view, one here on the left and then one here on the right, which shows the actual list we're making.
And again the nice thing is we're able to really reuse the controls that we created here, because this is kind of a single purpose view where you're just adding and removing from the list, so we wanted to put those little hints on the cursor about whether when you click you'll get an add or a remove. And again that was super easy to do with the platform.
Let's add just a few more, and we'll add one more. All right.
So now we have a good 65 items in our list, and that's fine. But it's all well and good to have all this data, and it's nice that you can make a nice thumbnail view using Avalon; you really want to be able to make multiple views and keep them distinct from your data. You don't want any mixing there.
So let's actually take another view that's maybe more optimized around photos, and we call this the album view. And let's zoom it up a little bit.
So, in fact, what we've done now is laid out all the same images, without touching any of the original data, in these templates that make a nice presentation for slides. Maybe this picture doesn't really look good in that layout, so let's move it and swap it over here with this one, and now you actually get the whole image. We can rearrange slides, et cetera.
And again, all this functionality is super, super easy for us to do using this great platform.
JIM ALLCHIN: Come on, don't you think this is cool? (Applause.) And it comes from the platform, so you didn't have to do all the work.
HILLEL COOPERMAN: Yeah, well, the funny thing is some people sometimes say, hey, that Whidbey stuff, that seems all well and good, but, hey, was it hard to do, and the answer, like Jim said, is no, but hey, is that something that really you want to have. And you've got to be careful to balance it, and you don't want it to be gratuitous but, in fact, a lot of this stuff really helps you with usability where you're going and dragging exactly where the thing is going to end up, and that's actually something that helps in an app, whether it's for consumers or in the enterprise, and that's a lot of feedback we've been getting from folks.
All right, so let's get a preview of what that's going to look like on the other end of the wire. So when we go to share these images over the Internet, this is what the other person is going to see.
So let's actually go back and let's name this "PDC Pictures."
So we'll hit share, and one interesting thing about this screen is that we want to give people a live view of what they're about to share, so there again, in the upper left-hand corner, is a mini view of what we just saw. And again it's super easy for us to reuse all those components and lay them out in multiple pieces of the UI, exactly the way we want.
So I'm going to type in your e-mail address right here, "Hey, check out these pictures," and I'm going to hit share. Now, this is when we move on from sort of the "Avalon" portion of our demo and give you a sense of how we're using the Windows Communication Foundation.
So if you'll step over to that machine, what's happening is we really benefit from "Indigo" in that it lets us in a really low cost way create a secure connection, a secure channel between two PCs to replicate these images over to the other PC over the Internet.
And let's make sure, Jim, you're up on this screen. I think you are.
JIM ALLCHIN: No, not yet.
HILLEL COOPERMAN: Let's see what you've got here. I think you're over on this machine. There we go.
JIM ALLCHIN: Great.
HILLEL COOPERMAN: Great. So we've gone and created that channel using Indigo with a custom data contract and a custom service contract. And, in fact, you go ahead and accept the album, there it is. I was told that if you're standing all the way over there, it will make it more realistic that the bits are traveling from one machine to another.
And, in fact, you'll notice what's happening right now is they're handshaking and making sure everything is all okay, and here come all the images replicating over.
And again this kind of communication is relatively standard today and going to become more and more standard where clients want to be able to talk to each other in a secure, authenticated way, and in a relatively lightweight way, and Indigo did a huge amount of the work for us so that we didn't have to do it.
JIM ALLCHIN: So it's almost done?
HILLEL COOPERMAN: Yeah, almost done. Nine more, eight, seven; well, you can see.
There you go. So play it back and let's verify that it was exactly what I shared with you. (Applause.)
All right, so one more aspect of the app that we made that I want to share with you. So the "Avalon" guys did a great job adding a set of 3-D APIs for us, and like ChrisAn said, you can't have an app without using 3-D, we wanted to show it in a way that hopefully you'd find super useful, so we added one more view using the 3-D APIs. And the nice thing about this is Avalon makes it so that we address our 3-D objects with the same programming model as the 2-D objects. (Applause.) And that makes it really easy to use.
So you can actually see the pictures are not only rendered in 3-D, but the reflections under them are live 3-D reflections, all plotted on a 2-D background. And those little transport controls that you saw are another 2-D object composited over the 3-D objects, which are composited over the 2-D background; again, all made super easy using "Avalon."
JIM ALLCHIN: Great, thanks. (Applause.)
HILLEL COOPERMAN: So I know I'm getting the hook; two quick things. One is we're going to actually have two sessions this week: one at 1:00 today over in Room 150, a case study in how we did the UI, and then one on Thursday at 1:00 where we delve into the details and show you a bunch of the code. And the litmus test for this was not just to show it to you: if it was going to be real, you should be able to use it, so you can actually go download it right now at.
Thanks very much.
JIM ALLCHIN: Great. Thanks again. (Applause.)
How about a real app? This is a good sample; how about a real app?
Joe Flannery, the vice president of marketing from The North Face is here, along with Mark Belanger, CTO and co-founder of Fluid. And they're going to come out and talk about the problems that they wanted to solve and how they solved them with Windows Vista. Hey, guys?
JOE FLANNERY: Hey, how are you doing?
JIM ALLCHIN: I'm doing good.
MARK BELANGER: Hi, Jim. (Applause.) Good to see you.
JOE FLANNERY: Hi, everybody. This is new for a guy from The North Face to be at the PDC, and I'm very excited to be here to tell you a little bit about The North Face. We're the world's premier supplier of authentic and innovative outdoor products. We have products that inspire and enable our customers to never stop exploring.
The North Face was founded as an expedition brand, and to date we sponsor more expeditions than all of our competitors combined.
We've got amazing athletes like Pete Athens, the American who's summited Everest more than anyone else. Pete is an amazing guy, but when he's up on Mt. Everest, our customers can't experience what he's experiencing.
We've got athletes like Dean Karnazes, the ultra-marathon man, who's running 270 miles at a time, but he's by himself.
We also have thousands of products each season, 2,000 this season alone. Sometimes our customers have a hard time understanding the difference between a $100 tent and a $5,000 expedition tent.
So we've looked at what's been successful for marketing for us, and the Internet has been an astounding tool for us to use.
So we briefed our friends and our Web design agency Fluid to find opportunities for us to figure out how we can bring our customer and our content and bring them closer together and bring it to life. So Mark is going to show you something that's answered our prayers. Mark?
MARK BELANGER: Thanks, Joe.
So we've had the good fortune of being The North Face's agency for the past three years. In that time we've been able to use rich Internet application technologies to double the overall sales volume of The North Face's Web site. We were able to do that by showcasing all that wonderful content that Joe was just talking about via rich interaction technologies.
A little bit about Fluid: We design and engineer rich user experiences. Accordingly, we've partnered with Microsoft to take advantage of the next generation technologies within Windows Vista to realize this project. With Windows Vista we're making the media happen.
So what is this thing? It's a proof-of-concept C# app, written with XAML as well, that's completely data driven. Nothing you're about to see is canned.
So in my earlier demo you saw a pretty cool thing with some video playing in 2-D. How about some video in 3-D? How about three videos in 3-D? (Video segment.) So you can see this is totally dynamic. We find this pretty amazing. This kind of 3-D in video is something that's really not very easy for us to develop, especially in Web context.
And before we jump in further, first a confession. Prior to this project we were Web, not Windows, app developers, and what we found was an incredibly powerful platform, we're really pleased with the results, and we think you will be, too.
So as you can see, rich media is really deeply integrated into the Windows Presentation Foundation. You've got ClearType overlaid on video playing full screen in the background. Integrating imagery, sound, video often involves just a single line of markup. We're able to introduce these great 3-D elements to showcase The North Face's products.
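[To make the "single line of markup" point concrete, here is a rough, hypothetical XAML sketch — not the actual North Face code, which is not shown in this transcript; the file name and text are invented — of a background video with ClearType text composed over it:]

```xml
<!-- Hypothetical sketch only; Source and Text values are invented. -->
<Grid>
  <!-- Playing a video is essentially one element of markup -->
  <MediaElement Source="expedition.wmv"
                LoadedBehavior="Play"
                Stretch="UniformToFill" />
  <!-- Text composes over the video like any other WPF element -->
  <TextBlock Text="Never Stop Exploring"
             FontSize="48" Foreground="White"
             HorizontalAlignment="Center" VerticalAlignment="Bottom" />
</Grid>
```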
And diving into this other product, the Red Point Jacket, you can see that the original videos rotate into the background onto that plane, and we've overlaid yet another video, a testimonial from one of the North Face's athletes, expounding on why he likes the product so much. That was all written in XAML and those are very simple to do.
So during the course of development, we found the Windows Vista platform to be a very flexible development platform. These user interface components right there (Features, Expedition, Technology) that I'm surfing with the mouse are all just list box controls that have been styled; they're the same control. That's incredibly important to us, because we're able to rapidly change the look and feel of our applications without a lot of work. And, yes, these are two video feeds playing within a list box control.
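[As a sketch of what "the same control, restyled" might look like — hypothetical markup, not the demo's actual code; the binding names Products, PreviewClip, and Name are invented — a WPF ListBox takes its look entirely from a template:]

```xml
<!-- Hypothetical sketch: one ListBox control, restyled via a DataTemplate -->
<ListBox ItemsSource="{Binding Products}">
  <ListBox.ItemTemplate>
    <DataTemplate>
      <StackPanel Orientation="Horizontal">
        <!-- List items can host arbitrary content, even a video feed -->
        <MediaElement Source="{Binding PreviewClip}"
                      LoadedBehavior="Play" Width="120" />
        <TextBlock Text="{Binding Name}" VerticalAlignment="Center" />
      </StackPanel>
    </DataTemplate>
  </ListBox.ItemTemplate>
</ListBox>
```

[Swapping in a different template changes the look and feel without touching the application logic, which is why the same control can appear so differently throughout the demo.]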
JOE FLANNERY: And if you think about the challenge that I just mentioned at the beginning, this is going to allow us to have that interaction with the customer, so they get the validation from an athlete, see amazing product detail, but also see a 3-D animation on how the fabric technology works.
MARK BELANGER: So in accordance with what Joe was talking about, this is product rotation. (Applause.) Pretty cool. It's really important for online shoppers to be able to see products at that level of detail. And this is simply an image montage control we wrote. It's the same control that you saw at the very beginning of the demo that showed the images skating around, and if you look really carefully in the background, that same control is still running, subtly reinforcing The North Face's aspirational brand.
So we're going to have to go soon, but there's one last point I want to leave you with, and it's one of the most important for us, and I think it will be for you, too, and that's developer productivity.
Tools that make us highly productive are critical to our success, and doing things like this, this kind of 3-D and video, was not practical in our previous projects. So for this project, remember I told you that we were Web developers, the primary developer went from zero Windows application development experience and zero 3-D experience to this, in fully functioning prototype form, in a mere six weeks. That's simply astounding. (Applause.)
And the productivity gains are only going to increase, because of the new development workflow, where XAML handles the layout separately and the logic sits on the other side. So what you're going to get is that engineers can concentrate on code and let the designers worry about pixel twiddling.
These kinds of deeply immersive customer experiences are what customers demand, and they're critical to success in online retail. Courtesy of Windows Vista, with its deep media integration, the flexible platform we were able to develop on, and the resulting developer productivity gains we were able to eke out, we're only going to be moving faster in this space.
JOE FLANNERY: We see a tremendous opportunity at The North Face, not only online with our Web site design, but also at the point of sale. With this content and this interaction, we're going to blow away our competition.
MARK BELANGER: So we're sure you guys have some questions, we'll be in the exhibit hall if anyone wants to come by and understand how we built this thing. Before we go, we want to thank The North Face and Joe's team, the inspiration, and Jim and the Microsoft team for providing us with the tools to visualize it.
Thank you. (Applause.)
JOE FLANNERY: Thanks.
JIM ALLCHIN: Once people get a view of this, that's what they're going to demand. We're going to post on MSDN a whitepaper explaining how this app was written later on today.
So given this, users are going to expect a certain minimum set of capabilities when they have a Windows app. I can't possibly cover all the details; in fact, that's why we're having this conference. You're going to get specific information, very precise about what you should do in order to enable and take full advantage.
Now, there are a few things that I did want to highlight to just make sure that you did hear me earlier. It's going to be critical that you make your app run in standard user. We are committed to security, and we're going to make that the default.
But there are so many other things, like taking advantage of the IFilter interface so that you can pull out the metadata, or taking advantage of the search controls in your apps, or configuring your app when you happen to be on battery, or taking advantage of the ability for the system to be in portrait or landscape; all those things are covered in the different sessions here.
Windows Vista Opportunity
OK, this is going to be an amazing release. It hasn't been since 1995 and Windows 95 that we've produced an operating system that had features for every audience. In the IT pro space think about all the work that we're doing in servicing and deployment and security. For the information worker think of all the search and organization and visualization, or even the simple meeting app experience that we're going to put in the system; consumers, all the games that are going to be there, the photos, the music, the video, all that. And it hasn't been since Windows 95 that we've had this rich an API for you.
The opportunities listed on this slide are mind-blowing. Analysts forecast that within the first 24 months of Windows Vista shipping, there will be greater than 475 million new PCs shipped. That's almost half a billion; a half a billion PCs! We anticipate, as a bottom-line case, that 200 million existing PCs are going to be upgradeable. That's opportunity for us together.
In addition, it's been a long time since enterprise has really gone through an upgrade cycle, really since Y2K, and they're ripe to go through that transition. In addition, the fact that we're having Vista and Office 12 coming out about the same time will, in fact, incent enterprises to consider the upgrade now.
You saw these demos in terms of the apps. Customers are going to expect that. We're trying to provide the platform for you, but it really is up to you. We think we've got the right platform; now you need to take it and build on it.
Windows Vista Partner Showcase Program
We're going to invest like never before. This is going to be the largest demand generation that we've ever done. We're going to do innovative things that we haven't done before. We've created a $100 million dedicated partner co-marketing program, and the goal is to drive the development of breakthrough apps using features available only on [Windows] Vista. We're going to be gathering feedback at this conference and then we're going to finalize how we're going to run this program before the end of the year.
If you want to participate, send [us] mail; you'll also hear about it more at this conference.
We want to be able to have apps showcased in all the different audiences, whether it be in consumer or enterprise.
Another way we can help is to connect you with customers in a friction-free way. Today, how many people really buy software and download it over the Internet? Some, but not as many as if we really had the infrastructure in place to do it. We released Windows Marketplace last October and it was a good start, but it really didn't have any download capabilities. And how many of you have downloaded a piece of software and purchased it? Wow, aren't you worried about the key, what if you lose it, you don't have the media, ooh. That is actually preventing some of the acceptance of that particular approach to purchasing software.
Today, we're going to preview something that we call the next generation of the Windows Marketplace called "Digital Locker," and the concept is that you can, in fact, try, download and purchase software online, and we will keep the information so that you feel safe that if you're moving from machine to machine or something happens to your machine, that you can get the keys back. We're doing that through partners today, and you can start using those partners as part of the preview by engaging with Digital River and eSellerate. There's more information on this at WindowsMarketplaceLabs.com.
Now, I think the best way to really explain what I'm talking about here is to show it, so I'd like to invite Dee Dee Walsh out to have a demo of the Digital Locker.
DEE DEE WALSH: Hey, Jim.
JIM ALLCHIN: Hi. (Applause.)
DEE DEE WALSH: Thank you. Hi there. I understand I'm the only thing between you and lunch, so I'm going to show you how fast it is to purchase software through the Digital Locker.
So I'm going to go ahead and purchase a piece of software called ACDC and, of course, we have Passport authentication. ACDC by ACD software, and this is a piece of software that will hopefully come up quickly, that is a graphics package. And I am going to go to a different machine actually.
All right, let's try that again. So I'm going to again search for this product called ACDC, and it's a cool graphics package, and there I go again.
This always happens when you're in front of a crowd of thousands of people.
And here it is. And it's a cool graphics package, I know I want to buy it, so I'm going to click Buy. And I already have a product called Rad Combo Box, which is a developer tool, and I'm going to go ahead and keep this going quickly so I'm going to check out. And notice that again this is just trying to make it very simple and easy for customers to feel like this is a trustworthy service. So I'm going to go ahead and keep moving with this purchase and authorize my stores and complete the purchase.
And now what you're seeing happen is the transaction being passed to Digital River and eSellerate, and what's great about this is that it's one transaction for multiple merchants. And the other thing is while we're launching today with Digital River and eSellerate, ultimately we will have many merchants that will be available; in fact, hopefully the merchants that your customers are most comfortable with. And in this case, again what's happening here is we're passing encrypted information to both of these vendors.
Now here is the thing that we've all been waiting for, which is just a one-click download. So all I have to do, this was per Jim, he said one click, that's all we want, and there we go, one click and now I'm in my Digital Locker.
And I just want to point out a few things in my Digital Locker that are key. The first is that it's integrated in with the Windows Vista shell, and what that means is that we have a very reliable download through Windows Update Services.
The other thing is that I'm going to show you a few things. Notice that I can look at all of my products in the Digital Locker. In fact, if you'll see here, it shows the status of each; for example, Synchro Magic Pro from GeloSoft says verified. What that means is that we verify that the bits that you as ISVs give us are actually what the customer will get.
The final thing I want to show you is what a customer can do in a Digital Locker. The first obviously at the bottom is to download the software, and the second is to create a backup CD, which our customers have said got to have the bits in hand. And, of course, the third thing is to be able to install from within the Digital Locker. And the final thing, which is the most important, as Jim mentioned, is getting my license information. So never again do I lose my license information and have to repurchase software; it's all available from here, rather than stuck in some desk drawer or hidden in e-mail somewhere, and, in fact, if I buy a new PC, all I have to do is sign in to the Digital Locker and boom, there's all my software and I can download it to my machine.
So that's the Digital Locker. Again, as Jim pointed out, please go to WindowsMarketplaceLabs.com to get more information to test this service out. And also our team is here at Ask the Experts on Thursday at 6:30, so we'd love to see you, we're here all week, thanks.
JIM ALLCHIN: Great, thank you. (Applause.)
Basically I'm not going to cover all the things on this slide. The bottom line, we're listening, keep talking to us. We're also trying to have fun, as you can see with the Channel 9 guy. If you have better ideas for having fun, just let us know. We're listening, these are the places to go to, to give us feedback.
So today you're going to get more bits than you ever have from us. As you leave, you'll get the goods. I feel like I'm pulling out a wallet here. (Applause.) You're getting everything from Beta 1 Vista to SQL Server to the tools, the WinFS beta. You'll also get that Build 5219, which is the pre-Beta 2 build of Windows Vista. Again, I want to caution you, put it on test machines only. You'll also get some other bits at the end of day three.
So during the last two hours, which is about what it's been, it's a whirlwind tour through the magic of what we think of as Windows Vista and all the associated tools around it. We're going to make it the most successful launch ever.
But there's one thing that we can't do, and that's you. We need your apps. We've created the platform, but we need the apps that are immersive the way customers expect. I want to thank you for your past support of the Windows platform, sincerely thank you, and I'm excited to see the kind of apps like you just saw here that you create.
Have a great PDC and have a great lunch. Thank you very much. (Applause.)
Microsoft Unveils New Platform Advancements at Microsoft Professional Developers Conference - Sept. 13, 2005
Webcast: Bill Gates, Jim Allchin at PDC 2005
Transcript: Bill Gates PDC 2005 Keynote - Sept. 13, 2005
Transcript: Eric Rudder PDC 2005 Keynote - Sept. 14, 2005
Transcript: Steven Sinofsky PDC 2005 Keynote - Sept. 14, 2005
Transcript: Bob Muglia PDC 2005 Keynote - Sept. 15, 2005
PDC 2005 Virtual Pressroom | http://www.microsoft.com/presspass/exec/Jim/09-13PDC2005.mspx | crawl-002 | refinedweb | 21,975 | 70.63 |
Image above (left to right): Morris Abram, Martin Luther King Jr., A. Philip Randolph, John Lewis, William T. Coleman
In the fall of 1965, after the Voting Rights Act passed, the coalition of black, socialist, and progressive leaders who had come together to organize 1963’s March on Washington joined together again to create an ambitious policy document with no less a goal than ending poverty in the United States without cost to taxpayers. First released in 1966, it proposed using strong economic growth to provide a federal jobs guarantee, universal health care, and a basic income. This executive summary of the full report, published in 1967, was endorsed by more than 100 signatories and was distributed in black neighborhoods.
The Atlantic has annotated the budget to show how its goals have been met or—in more cases—missed in the half century since then.
INTRODUCTION
I believe, and profoundly hope, that from this day forth the opponents of social progress can take comfort no longer, for not since the March on Washington [Annotation: The March on Washington, although often remembered for Martin Luther King Jr.’s “I Have a Dream” speech, was organized largely by Asa Philip Randolph and his lieutenant, Bayard Rustin, who had advised King on Gandhian tactics of nonviolence. Twenty years earlier, during World War II, Randolph had developed plans—never realized—for a protest against segregation in the armed forces and discrimination in the defense industry by bringing masses of black Americans to Washington; this was a formative moment in the early civil-rights movement.] has there been such broad sponsorship and enthusiastic support for any undertaking as has been mobilized on behalf of “The Freedom Budget for All Americans.”
These forces have not come together to demand help for the Negro. Rather, we meet on a common ground of determination that in this, the richest and most productive society ever known to man, the scourge of poverty can and must be abolished—not in some distant future, not in this generation, but within the next ten years!
The tragedy is that the workings of our economy so often pit the white poor and the black poor against each other at the bottom of society. The tragedy is that groups only one generation removed from poverty themselves, haunted by the memory of scarcity and fearful of slipping back, step on the fingers of those struggling up the ladder.
And the tragedy is that not only the poor, the nearly poor, and the once poor, but all Americans, are the victims of our failure as a nation to distribute democratically the fruits of our abundance. For, directly or indirectly, not one of us is untouched by the steady spread of slums, the decay of our cities, the segregation and overcrowding of our public schools, the shocking deterioration of our hospitals, the violence and chaos in our streets [Annotation: The Freedom Budget was written in the time between two of the most destructive riots in black ghettos in U.S. history—in the Watts neighborhood of Los Angeles in 1965 and in Detroit two years later.], the idleness of able-bodied men deprived of work, and the anguished demoralization of our youth.
For better or worse, we are one nation and one people. We shall solve our problems together or together we shall enter a new era of social disorder and disintegration.
What we need is an overall plan of attack.
This is what the “Freedom Budget” is. It is not visionary or utopian. It is feasible. It is concrete. It is specific. It is quantitative. It talks dollars and cents. It sets goals and priorities. It tells how these can be achieved. And it places the responsibility for leadership with the Federal Government, which alone has the resources equal to the task.
The “Freedom Budget” is not a call for a handout. It is a challenge to the best traditions and possibilities of America. It is a call to those who have grown weary of slogans and gestures to rededicate themselves to the cause of social reconstruction. It is a plea to men of good will to give tangible substance to long-proclaimed ideals.
A. Philip Randolph [Annotation: A. Philip Randolph, the socialist leader of the Brotherhood of Sleeping Car Porters, was one of the most influential black figures in American history. During the Great Migration, he advocated for the labor rights of the southern black workers who resettled in northern and western cities. He catalyzed the first wave of the civil-rights movement, spearheading the effort to force President Harry Truman to integrate the military in 1948. In 1965, he and Rustin founded the A. Philip Randolph Institute. The collaboration between Randolph and King on the Freedom Budget seemed like a passing of the torch. Randolph, who was 77 years old when the executive summary was issued, was four decades older than King, but he would outlive him by 11 years.]
President, A. Philip Randolph Institute
October 26, 1966
FOREWORD
After many years of intense struggle in the courts, in legislative halls, and on the streets, we have achieved a number of important victories. [Annotation: When King wrote his foreword, the Civil Rights Act of 1964 and the Voting Rights Act of 1965 had become law, along with Medicare and Medicaid.] We have come far in our quest for respect and dignity. But we have far to go. [Annotation: After King’s death, in 1968, President Lyndon B. Johnson and Congress scrambled to pass the Fair Housing Act, which is often touted as the third of the major Great Society civil-rights reforms.] We shall eliminate unemployment for Negroes when we demand full and fair employment for all. We shall produce an educated and skilled Negro mass when we achieve a twentieth century educational system for all. [Annotation: A landmark study by UCLA researchers from 2014 showed mixed results on school desegregation in the 60 years since the Supreme Court’s Brown v. Board of Education decision. While public schools in the South were less racially segregated than during the pre-Brown era, the study found that gains for black students had steadily eroded starting in 1990. In the Northeast, segregation had actually become worse since King’s death.]
This human rights emphasis is an integral part of the Freedom Budget and sets, I believe, a new and creative tone for the great challenge we yet face.
The Southern Christian Leadership Conference fully endorses the Freedom Budget and plans to expend great energy and time in working for its implementation.
It is not enough to project the Freedom Budget …
Martin Luther King
October 26, 1966
A “FREEDOM BUDGET” FOR ALL AMERICANS
The Freedom Budget is a practical, step-by-step plan for wiping out poverty in America during the next 10 years.
It will mean more money in your pocket. It will mean better schools for your children. It will mean better homes for you and your neighbors. It will mean clean air to breathe [Annotation: The environment was never a core topic for Randolph’s or King’s organizations. But concern about clean air presaged the environmental-justice movement that took off in the 1980s, highlighting the disparate impacts of pollution on people of color; currently, asthma rates among black children are almost double those among white children.] and comfortable cities to live in. It will mean adequate medical care when you are sick. [Annotation: The full 84-page Freedom Budget unambiguously calls for a “nationwide, universal system of health insurance,” a goal that civil-rights groups had pushed for. The legislation creating Medicare and Medicaid counted as a victory, but universal coverage would remain elusive. Today, Bernie Sanders and other progressive politicians have picked up many of the Freedom Budget’s recommendations, notably calling for a system of universal health insurance, which the Vermont senator describes as “Medicare for all.”]
So where does the “Freedom” come in?
For the first time, …
This nation has learned that it must provide freedom for all if any of us is to be free. We have learned that half-measures are not enough. We know that continued unfair treatment of part of our people breeds misery and waste that are both morally indefensible and a threat to all who are better off.
As A. Philip Randolph put it: “Here in these United States, where there can be no economic or technical excuse for it, poverty is not only a private tragedy but, in a sense, a public crime. It is above all a challenge to our morality.”
The Freedom Budget would make that challenge the lever we can grasp to wipe out poverty in a decade.
Pie in the sky?
Not on your life. Just simple recognition of the fact that we as a nation never had it so good. That we have the ability and the means to provide adequately for everyone. That simple justice requires us to see that everyone—white or black; in the city or on the farm; fisherman or mountaineer—may have his share in our national wealth.
The moral case for the Freedom Budget is compelling.
In a time of unparalleled prosperity, there are 34 million Americans living in poverty. Another 28 million live just on the edge, with income so low that any unexpected expense or loss of income could thrust them into poverty. [Annotation: According to the Census Bureau, the poverty rate in the United States fluctuated between 11 percent and 15 percent from 1966 to 2012. Overall, the rate of Americans living in near-poverty has been fairly flat over the past half century.]
Almost one-third of our nation lives in poverty or want. They are not getting their just share of our national wealth.
Just as compelling, this massive lump of despair stands as a threat to our future prosperity. Poverty and want breed crime, disease and social unrest. We need the potential purchasing and productive power the poor would achieve, if we are to continue to grow and prosper.
In short, for good times to continue—and get better—we must embark immediately on a program that will fairly and indiscriminately provide a decent living for all Americans …
The Freedom Budget shows how to do all this without a raise in taxes and without a single make-work job—by planning prudently NOW to use the economic growth of the future, and with adequate attention to our international commitments.
The key is jobs.
We can all recognize that the major cause of poverty could be eliminated, if enough decently paying jobs were available for everyone willing and able to work. And we can also recognize that, with enough jobs for all, a basic cause of discrimination among job-seekers [Annotation: The past 25 years have seen “no change in the level of hiring discrimination against African Americans,” according to a recent study published in the Proceedings of the National Academy of Sciences. An earlier study concluded that the gap in the participation of black and white young men in the labor force worsened from 1979 to 2000.] would automatically disappear.
What we must also recognize is that we now have the means of achieving complete employment—at no increased cost, with no radical change in our economic system, and at no cost to our present national goals—if we are willing to commit ourselves totally to this achievement. [Annotation: The idea of a federal jobs guarantee has been carried forward in the work of the economists William Darity Jr. of Duke University and Darrick Hamilton of the New School. They argue that the guarantee would go a long way toward easing racial disparities in wealth and would cost roughly $750 billion for 15 million adults—half of the Freedom Budget’s $1.5 trillion price tag in current dollars.]
That is what the Freedom Budget is all about.
It asks that we unite in insisting that the nation plan now to use part of its expected economic growth to eliminate poverty.
Where will the jobs come from? What will we use for money?
If all our nation’s wealth were divided equally among all us Americans, each share would be worth roughly $3,500. Of this, we grant to the Federal government a slice equal to roughly $500 in the form of taxes, leaving us an average of about $3,000 to spend on our other needs.
If our nation’s productivity continues growing at the same rate as in recent years—and it will if the Freedom Budget is adopted—each share will grow to about $5,000. Thus, the Federal government’s slice will grow to $700, with the present Federal tax structure [Annotation: The Freedom Budget would have been financed not with major tax increases but with what essentially would have been a stimulus, fueled by the program’s own effects on economic growth. In the full report, the authors suggest that if higher taxes were to become necessary, the government should “impose the burden where it can easily be borne.” Perhaps today, the authors would be even more inclined to impose that burden on the rich, given that inequality in wealth and income has exploded since 1963. According to the Urban Institute, families in the lowest 10 percent of wealth-holders in 1963 could expect to have about zero net worth, while their counterparts in the 90th percentile had a net worth of just under $250,000 (in 2016 dollars). In 2016, however, families in the 10th percentile were nearly $1,000 in debt, on average, while families in the 90th percentile boasted a net worth of more than $1 million.], and we will still have $4,300 left for our other needs.
What the Freedom Budget proposes is this: Budget a fraction of the $200 increase in Federal tax revenues to provide jobs for all who can work and adequate income of other types for those who cannot.
No doles. No skimping on national defense. No tampering with private supply and demand.
Just an enlightened self-interest, using what we have in the best possible way.
By giving the poor a chance to become dignified wage earners, we will be generating the money to finance the improvements we all need—rich and poor alike. And we would be doing it by making new jobs with new money, so that no one who is now earning his own living would suffer.
The Freedom Budget recognizes that the Federal government must take the lead in attaining the eradication of poverty.
The Federal government alone represents all 200 million American individuals. It alone has the resources for a comprehensive job [guarantee]. And it has the responsibility for fulfilling the needs which are the basis for the Freedom Budget plan.
First, here’s where the jobs would be coming from:
- Right now, the nation should begin budgeting to replace the 9.3 million “seriously deficient” housing units [Annotation: Even as definitions of poor housing have changed over time, studies have noted slow progress in its improvement, especially for the poor and people of color. The Center on Budget and Policy Priorities reported in 1989 that the nation’s total substandard housing units—7.7 million in 1975—had dropped only to 7.4 million by 1985. Black and Hispanic people made up 17 percent of U.S. households in 1985, the group found, but constituted 42 percent of households living in substandard conditions. A 2015 study in the Journal of Housing Research found that blacks were 31 percent less likely to live in adequate housing than whites were.] that make living in them a misery and form slums that are a blight upon our land.
The housing program contained in the Freedom Budget would have practically all Americans decently housed by 1975—while providing a wide range of jobs for the unemployed in housing construction and urban redevelopment.
- Critical shortages of water and power persist in many highly populated areas. Air and waters remain polluted. Recreation facilities are unavailable for those who need them most.
The Freedom Budget proposes the creation of millions of jobs in a program that will correct these pressing problems.
- We need, at a conservative estimate, 100,000 new public classrooms a year for the next six years, as well as considerable expansion of our institutions of higher learning.
Only the Federal government can meet the largest share of these needs, as well as providing for the hundreds of thousands of new teachers who also will be needed. [Annotation: According to the National Center for Education Statistics, the ratio of pupils to teachers has steadily dropped since the mid-1950s as more teachers have entered the workforce.]
- We must double our rate of hospital construction if we are to keep up with our minimum requirements in this field, and we must expand rehabilitation and outpatient facilities.
As these and other programs swell the number of productive workers, cut down unemployment and increase consumption, the private sector of our national economy will inevitably grow also.
The Freedom Budget recognizes that full employment by itself is not enough to eradicate poverty. Therefore, it also proposes—and budgets for—a $2-an-hour Federal minimum wage covering everyone within Federal jurisdiction; a new farm program to provide adequate income to the 43 per cent of farm families who now live in poverty; and immediate improvements in Social Security, welfare, unemployment compensation, workmen’s compensation and other programs designed to support those who cannot or should not work. [The federal minimum wage was a relatively new concept for most employees in 1967. In 1974, amendments to the Fair Labor Standards Act set a wage floor for federal, state, and local government employees. Over time, further amendments would make the federal minimum wage ubiquitous. (States’ minimum wages, if higher, preempt the federal mandate.) The $2-an-hour wage that the Freedom Budget proposed is almost exactly equal to $15 in today’s dollars. The modern-day “Fight for $15” movement has cited Martin Luther King Jr. as an intellectual forefather; in 2017, it held dozens of events on the anniversary of King’s death.]
Where will the money come from?
The Freedom Budget recognizes that we cannot spend what we do not produce. It also recognizes that we must spend wisely what we do produce.
It proposes that a portion of our future growth—one thirteenth of what can reasonably be expected to be available—be earmarked for the eradication of poverty. The Freedom Budget’s proposed outlay of $185 billion in 10 years sounds like a great deal of money, and it is a great deal of money. [The proposed $185 billion budget would translate to $1.5 trillion in 2017 dollars, an amount that, if spread across federal outlays to poverty programs, would just about quadruple the government’s welfare budget.]
But it will come from the expansion of our economy that will in part be the result of wise use of that very $185 billion. It will build homes and schools, provide recreation areas and hospitals. It will train teachers and nurses.
It will provide adequate incomes to millions who now do not have them. And those millions will in turn buy goods they cannot now buy.
So the wage earner of today will benefit as well. His earnings will go up and his enjoyment of life will be increased. The opportunities for private enterprise will increase.
The breeding grounds of crime and discontent will be diminished in the same way that draining a swamp cuts down the breeding of mosquitoes, and the causes of discrimination will be considerably reduced.
But the Freedom Budget cannot become reality without a national effort. It requires a concentrated commitment by all the people of America, expressed in concrete goals and programs of the Federal Government. These goals and programs must encourage to the utmost the efforts of state and local governments and private enterprise.
It is not lack of good-will that has prevented the achievement of these great goals in the past. All of us, 200 million strong, are united in our willingness to share the abundance of America in equal impartiality with our fellows, and to grant equal opportunities to all.
What we must do—and what the Freedom Budget provides—is to express that will in the most direct, quickest and fairest way.
The Freedom Budget, then, is a new call to arms for a final assault on injustice. It is a rallying cry we cannot fail to heed.
This annotation of the 1967 executive summary of the full 1966 “Freedom Budget” report appears in the special MLK issue print edition with the headline “A ‘Freedom Budget’ for All Americans.” | https://www.theatlantic.com/magazine/archive/2018/02/a-freedom-budget-for-all-americans-annotated/557024/ | CC-MAIN-2019-35 | refinedweb | 3,435 | 60.85 |
Records the inputs and outputs of scripts
Project description
Pirec is a Python package for wrapping scripts so that their inputs and outputs are preserved in a consistent way and results are recorded.
Example
from pirec import call, record, pipeline
Requirements
Pirec is tested with Python v2.7 - 3.

- Issue Tracker: github.com/jstutters/pirec/issues
- Source Code: github.com/jstutters/pirec
Support
If you are having problems, please let me know by submitting an issue in the tracker.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
pirec-0.11.0.linux-x86_64.tar.gz (18.8 kB)
Built Distribution
pirec-0.11.0-py2.py3-none-any.whl (14.5 kB)
Hi,
I wanted to ask about a few things that I didn't see in the tutorials on this page.
I've been working on a very simple program for a few hours, but I still can't figure out a few things; I can't find them either.
This is the code I came up with.
Code:
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    int a, b;
    float result;
    cout << "Please enter first number" << endl;
    cin >> a;
    cout << "Please enter second number" << endl;
    cin >> b;
    cout << "Average is" << endl;
    result = (a + b) / 2.0;
    cout << result << endl;
    system("pause");
    return (0);
}

Is it possible to input an unlimited amount of numbers, not just number 1 + number 2?
If yes, then how? For example: 1 + 45 + 34.
And how can I change the result calculation so it knows how many numbers I have entered and calculates the average value?
If I input a letter instead of a number, how can I make it so the program says that it is an invalid input value, or something like that?
Thank you in advance,
I read topics about posting and homework and I hope that I don't offend anyone with this topic
P.S Sorry for any english mistakes ! | https://cboard.cprogramming.com/cplusplus-programming/94686-i-have-few-questions-about-basic-cplusplus-programing.html | CC-MAIN-2017-22 | refinedweb | 210 | 70.02 |
Will it be possible to use PyCharm as an editor for IronPython?

dhendricks Created February 18, 2010 23:12

Essentially, will there be a way to import namespace/class info for the FCL and get code analysis and refactoring support for IronPython? Thanks
Hello Daryl,
Not sure what you mean by FCL. :-) Selecting IronPython as a target interpreter
will definitely be supported in 1.0. As for importing .NET assemblies and
providing code completion for .NET types - don't know yet how hard this would
be, so no definite plans yet.
--
Dmitry Jemerov
Development Lead
JetBrains, Inc.
"Develop with Pleasure!"
Great to hear it will be supported as an interpreter and I hope you'll pursue IP as a supported target for PyCharm.
FWIW, I've been using Michael Foord's code to inform Wing of .NET types. He writes about it at.
Hello Daryl,
Thanks a lot for the pointer! I think we'll be able to use the same solution
for PyCharm.
--
Dmitry Jemerov
Development Lead
JetBrains, Inc.
"Develop with Pleasure!"
I found this video helpful : | https://intellij-support.jetbrains.com/hc/en-us/community/posts/205798769-Will-it-be-possible-to-use-PyCharm-as-an-editor-for-IronPython-?page=1 | CC-MAIN-2020-16 | refinedweb | 179 | 69.79 |
Summary: Microsoft Scripting Guy, Ed Wilson, shows four ways to create folders with Windows PowerShell, and he discusses the merits of each approach.
Hey, Scripting Guy! I am trying to find the best way to create a new folder while using Windows PowerShell. I have seen many ways of creating folders in scripts that I have run across on the Internet. They all seem to use something different. Is there a best way to create a new folder?
—GW
Hello GW,
Microsoft Scripting Guy, Ed Wilson, is here. This morning on Twitter, I saw a tweet that said “New-TimeSpan tells me that there are 41 days until the 2012 Scripting Games.” Indeed. Even though I have been busily working away on the Scripting Games, it seems awfully soon. I fired up Windows PowerShell and typed the following code:
New-TimeSpan -Start 2/21/12 -End 4/2/12
Sure enough, the returned TimeSpan object tells me that there are indeed 41 days until the 2012 Scripting Games. I decided I would like a cleaner output, so I used one of my Top Ten Favorite Windows PowerShell Tricks: group and dot. The revised code is shown here.
(New-TimeSpan -Start 2/21/12 -End 4/2/12).days
Both of these commands and the associated output are shown in the image that follows.
GW, you are correct, there are lots of ways to create directories, and I will show you four of them…
Method 1
It is possible to use the Directory .NET Framework class from the system.io namespace. To use the Directory class to create a new folder, use the CreateDirectory static method and supply a path that points to the location where the new folder is to reside. This technique is shown here.
[system.io.directory]::CreateDirectory("C:\test")
When the command runs, it returns a DirectoryInfo class. The command and its associated output are shown in the image that follows.
I do not necessarily recommend this approach, but it is available. See the Why Use .NET Framework Classes from Within PowerShell Hey, Scripting Guy! blog for more information about when to use and not to use .NET Framework classes from within Windows PowerShell.
Method 2
Another way to create a folder is to use the Scripting.FileSystemObject object from within Windows PowerShell. This is the same object that VBScript and other scripting languages use to work with the file system. It is extremely fast, and relatively easy to use. After it is created, Scripting.FilesystemObject exposes a CreateFolder method. The CreateFolder method accepts a string that represents the path to create the folder. An object returns, which contains the path and other information about the newly created folder. An example of using this object is shown here.
$fso = new-object -ComObject scripting.filesystemobject
$fso.CreateFolder("C:\test1")
This command and its associated output are shown in the following image.
Method 3
GW, it is also possible to use native Windows PowerShell commands to create directories. There are actually two ways to do this in Windows PowerShell. The first way is to use the New-Item cmdlet. This technique is shown here.
New-Item -Path c:\test3 -ItemType directory
The command and the output from the command are shown here.
Compare the output from this command with the output from the previous .NET command. The output is identical because the New-Item cmdlet and the [system.io.directory]::CreateDirectory command return a DirectoryInfo object. It is possible to shorten the New-Item command a bit by leaving out the Path parameter name, and only supplying the path as a string with the ItemType. This revised command is shown here.
New-Item c:\test4 -ItemType directory
Some might complain that in the old-fashioned command interpreter, cmd, it was easier to create a directory because all they needed to type was md, and typing md is certainly easier than typing New-Item blah blah blah any day.
Method 4
The previous complaint leads to the fourth way to create a directory (folder) by using Windows PowerShell. This is to use the md function. The thing that is a bit confusing, is that when you use Help on the md function, it returns Help from the New-Item cmdlet—and that is not entirely correct because md uses the New-Item cmdlet, but it is not an alias for the New-Item cmdlet. The advantage of using the md function is that it already knows you are going to create a directory; and therefore, you can leave off the ItemType parameter and the argument to that parameter. Here is an example of using the md function.
md c:\test5
The command and its associated output are shown here.
You can see from the image above that the md function also returns a DirectoryInfo object. To me, the md function is absolutely the easiest way to create a new folder in Windows PowerShell. Is it the best way to create a new folder? Well, it all depends on your criteria for best. For a discussion of THAT topic, refer to my Reusing PowerShell Code—What is Best blog. GW, that is all there is to creating folders. Join me tomorrow for more.

Thank you, especially for explaining and giving examples of the different methods one can use within PowerShell. As always you are the best Ed.
@B-K that is a great tip … in fact, I was not aware of that issue. It does go back to the basic idea however … when you are writing a script, you should spell things out … it is always safest. Thank you … I imagine it was a bear of a thing to troubleshoot!
Hi Ed,
I would recommend and in fact I usually do use the "md <dir>" function most of the time. I didn't realize that this is a function btw. … but do we really have to …?
I'm still used to the way cmd.exe works and so I don't even think too much about the wonder of the "md" function in PS
But still … it is great and remarkable that it returns an object to work with later on! That's a crucial difference between the cmd.exe-md and the ps-md version!
Klaus (Schulte)
@K_Schulte no you do not HAVE to know that it is a function and not a cmdlet. But knowing that it is a function you are using, and NOT the intrinsic md command that exists in the cmd shell is important. The reason is that it helps to explain some of the differences you will find. Besides, personally, I like to know what I am doing. The more I know about what is actually going on under the covers of PowerShell the better I am able to understand some of the strangeness of PowerShell. The cool thing about PowerShell is that it is easy to learn, but nearly impossible to master.
Be careful using md/mkdir in scripts where Set-StrictMode is on… More details here:
stackoverflow.com/…/powershell-mkdir-alias-set-strictmode-version-2-strange-bug-why
Nice, this helped me alot- Espen
Method 4 works best. Classic old DOS habit.
$xFolder = {md d:\xtxt}
Invoke-Command $xFolder
Thanks TechNET! Rock ON!!!
Hello Ed,
How would you go about using New-Item -ItemType Directory -Path with a UNC path? I have tried this with full admin rights to the path I am trying to create a directory in.
Cheers,
Just for the info, md is an alias for mkdir, which in turn is a nice advanced function that invokes New-Item. Would be nice to see a native cmdlet for this. Something like New-Folder, Set-Folder, Remove-Folder.
Hi Ed,
When I tried:
PS C:\Users\srini_000> md D:\S.K\2014 Beyond\My Courses\Python\mystuff
mkdir : A positional parameter cannot be found that accepts argument ‘BeyondMy’.
At line:1 char:1
+ md D:\S.K\2014 Beyond\My Courses\Python\mystuff
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [mkdir], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,mkdir
But it worked, when
PS C:Userssrini_000> md ‘D:S.K2014 BeyondMy CoursesPythonmystuff’
Path is in quotes.
Any reason ?
Thanks for shedding light on making a directory in Windows PowerShell; how then can I change to the directory I made in PowerShell?
As a SciFi fan, Robots and Artificial Intelligence has always fascinated me. Bots are nothing but software applications which run automated tasks. They can accept commands and perform tasks which are structured.
Microsoft Bot Framework lets you easily build Bots and connect them to various channels like Skype, Slack, Office 365 Email and many more. If you have an existing Bot, you can also connect them to various channels. This means you don’t have to rewrite or build new Bots for every channel (Skype, Slack, Twitter etc) if you want to target a wider audience. You can concentrate on building better bots and let Bot Framework take care of connecting to other 3rd party services. Here is a screenshot of one of my Bot admin page.
As you can see, my bot, Knowledge Guru, is connected to Skype and Web Chat. This means my Bot has a face and any one can interact with it. Any Skype User can send message to this bot and I can also embed this Bot in any website. You can ask Knowledge Guru about any Academic topic and retrieve papers published by various Authors and Affiliations within seconds. The chat control you see in this page is provided by the Bot Framework. This is very useful and saves us some time.
Note: This is just a starting point. There are bugs in the app and I will be eliminating them and making the conversation smarter as I build on it.
Bots are not new and Microsoft is not the first player in the new Bot race! Many enthusiasts including me have created automated services and have used them in our day to day life. In 2008, I created my first Bot, RemindMeAbout, a Twitter Bot which accepts commands as tweets and adds reminders to my calendar. This bot was simple and accepted only SPECIFIC commands. It was not intelligent to understand spelling mistakes or different sentences. This is where Microsoft Cognitive services comes into play.
Microsoft Cognitive services lets you add intelligence to your Apps. They are a set of services which can be used to add different intelligent capabilities like Language Understanding, Vision, Speech etc. with very little code. This means you can interact with your bot with natural language and talk to it like a human.
I encourage everyone to visit the Microsoft Cognitive Services site and read through all the services. Microsoft has provided a good explanation and some helpful videos to understand each service.
If you are aware of Azure Machine Learning Services or the Cortana Analytics Suite, then this service from Microsoft is not entirely new to you. It was possible to set up a Machine Learning Experiment, build a Model, run Experiments, publish it as a Service and use it in your app. But this was always a little challenging for me as I was not a Data Scientist and Machine Learning is not exactly my area of expertise. Cognitive Services makes this process easy for an everyday developer to concentrate more on building better apps with intelligent services and not to worry about setting it up.
I will demonstrate this with a simple fun Bot; let's call it Emoji Bot. Make sure you read the documentation for the Emotion API of Microsoft Cognitive Services. Emoji Bot will accept any photo as a message and return emojis based on the emotion detected by the Emotion API. Here is a screenshot
Make sure your development environment is ready for Bot Framework development. We will be using the free Visual Studio 2015 Community Edition. Visit the Bot Framework getting started page, and download and install the necessary project template as detailed in the “Getting started in .NET” section. Make sure all Visual Studio extensions are updated before starting the project.
1. Create a new Bot Application using Visual Studio 2015
2. Replace the auto generated code for Post method in MessageController.cs with the following code
if (message.Type == "Message")
{
//Get attachment
if (message.Attachments.Count > 0)
{
string imageUri = message.Attachments[0].ContentUrl;
return message.CreateReplyMessage(await Utilities.CheckEmotion(imageUri));
}
else
{
return message.CreateReplyMessage("Send me a photo!");
}
}
else
{
return HandleSystemMessage(message);
}
In the code, we check the number of attachments. Images sent in a message are retrievable through the Attachments property. We then use this image to detect faces and their respective emotions in step 3.
3. Add the following code to MessageController.cs file:
public static class Utilities
{
    public static async Task<string> CheckEmotion(string query)
    {
        var client = new HttpClient();
        var queryString = HttpUtility.ParseQueryString(string.Empty);
        string responseMessage = string.Empty;

        // Request headers
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR SUBSCRIPTION KEY");

        // Request parameters
        var uri = "";
        HttpResponseMessage response = null;
        byte[] byteData = Encoding.UTF8.GetBytes("{ 'url': '" + query + "' }");

        using (var content = new ByteArrayContent(byteData))
        {
            content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
            response = await client.PostAsync(uri, content).ConfigureAwait(false);
        }

        string responseString = await response.Content.ReadAsStringAsync();
        EmotionResult[] faces = JsonConvert.DeserializeObject<EmotionResult[]>(responseString);

        // Sort each face's emotion scores and set responseMessage to an
        // emoji that matches the strongest detected emotion.
        return responseMessage;
    }
}
In this code, we pass the image to the Emotion API to detect faces. If faces are found, then their emotional scores are sorted. We can then use this value to reply with an appropriate emoji. Make sure you get your subscription key from the Microsoft Cognitive Services portal.
4. Add the following classes to MessageController.cs
public class Scores
{
public double anger { get; set; }
public double contempt { get; set; }
public double disgust { get; set; }
public double fear { get; set; }
public double happiness { get; set; }
public double neutral { get; set; }
public double sadness { get; set; }
public double surprise { get; set; }
}
public class EmotionResult
{
public Scores scores { get; set; }
}
Make sure you also update your AppId and Secret in app.config after creating a bot in the Bot Framework developer portal.
Once the above steps are completed, enable the Web channel and test your bot by adding an iframe to the default.html file.
Write sequence of int with FileStorage in Python
Hello,
I want to write a defaultConfig.xml file using the cv2.FileStorage Python bindings.
For now I am using another lib to generate defaultConfig.xml because of a limitation in the Python bindings: they cannot write a proper <camera_resolution> sequence of integers (see the defaultConfig.xml example).
I have a simple fix (below) and I would like to know if this is the best way to go.
The problem is with the <camera_resolution> integers sequence. I can't manage to write it using the current Python bindings, as it is written as an opencv-matrix (Mat). Maybe the problem is that a Python binding must be created specifically for the sequence of integers, like it has been done for the sequence of Strings.
import cv2

fs = cv2.FileStorage('test.xml', cv2.FILE_STORAGE_WRITE)
fs.write("seq_int", [640, 480])
fs.release()
This produces the following problematic test.xml file:
<?xml version="1.0"?>
<opencv_storage>
<seq_int type_id="opencv-matrix">
  <rows>2</rows>
  <cols>1</cols>
  <dt>d</dt>
  <data>
    640. 480.</data></seq_int>
</opencv_storage>
In the latest pre-4.0 release there are bindings for sequences of strings but not for sequences of integers. See line 202 in /modules/core/src/persistence_cpp.cpp:
void FileStorage::write( const String& name, const std::vector<String>& val ) { *this << name << val; }
I think I will propose a PR (here is my current fork) to add sequence-of-integers support:
void FileStorage::write( const String& name, const std::vector<int>& val ) { *this << name << val; }
With this addition, running a test like the one in test_persistence.py:
import cv2

fs = cv2.FileStorage('test.xml', cv2.FILE_STORAGE_WRITE)
fs.write("seq_int", [640, 480])
fs.release()
I get the following correct test.xml file:
<?xml version="1.0"?>
<opencv_storage>
<seq_int>
  640 480</seq_int>
</opencv_storage>
Do you think it is the right way to go?
I have read the PR discussion about vector-of-strings support but could not find a definitive answer.
Thanks a lot if you take the time to read, and even more if you answer! | https://answers.opencv.org/question/199743/write-sequence-of-int-with-filestorage-in-python/ | CC-MAIN-2022-21 | refinedweb | 340 | 67.15 |
This package contains public classes for the Java code API of Facelets.
See:
Description
This package contains public classes for the Java code API of Facelets. The vast majority of Facelets users have no need to access the Java API and can get all their work done using the tag-level API. These classes are provided for users that have a need for a Java API that allows participation in the execution of a Facelets View, which happens as a result of the runtime calling ViewDeclarationLanguage.buildView().
The most common use case for participating in the execution of a Facelets View is to provide a custom tag handler in those cases when the non-Java API methods for doing so are not sufficient. In such cases, Java classes may extend from ComponentHandler, BehaviorHandler, ConverterHandler, or ValidatorHandler, depending upon the kind of JSF Java API artifact they want to represent in the Facelets VDL page.
Copyright © 2009-2011, Oracle Corporation and/or its affiliates. All Rights Reserved. Use is subject to license terms.
Generated on 10-February-2011 12:41 | https://docs.oracle.com/javaee/6/api/javax/faces/view/facelets/package-summary.html | CC-MAIN-2016-26 | refinedweb | 178 | 54.22 |