The C Standard, 7.21.9.3 [ISO/IEC 9899:2011], defines the following behavior for fsetpos():

The fsetpos function sets the mbstate_t object (if any) and file position indicator for the stream pointed to by stream according to the value of the object pointed to by pos, which shall be a value obtained from an earlier successful call to the fgetpos function on a stream associated with the same file.

Invoking the fsetpos() function with any other values for pos is undefined behavior.
Noncompliant Code Example
This noncompliant code example attempts to read three values from a file and then set the file position pointer back to the beginning of the file:
#include <stdio.h>
#include <string.h>

int opener(FILE *file) {
  int rc;
  fpos_t offset;

  memset(&offset, 0, sizeof(offset));

  if (file == NULL) {
    return -1;
  }

  /* Read in data from file */

  rc = fsetpos(file, &offset);
  if (rc != 0) {
    return rc;
  }

  return 0;
}
Only the return value of an fgetpos() call is a valid argument to fsetpos(); passing a value of type fpos_t that was created in any other way is undefined behavior.
Compliant Solution
In this compliant solution, the initial file position indicator is stored by first calling fgetpos(), which is later used to restore the state to the beginning of the file in the call to fsetpos():
#include <stdio.h>
#include <string.h>

int opener(FILE *file) {
  int rc;
  fpos_t offset;

  if (file == NULL) {
    return -1;
  }

  rc = fgetpos(file, &offset);
  if (rc != 0) {
    return rc;
  }

  /* Read in data from file */

  rc = fsetpos(file, &offset);
  if (rc != 0) {
    return rc;
  }

  return 0;
}
Risk Assessment
Misuse of the fsetpos() function can position a file position indicator at an unintended location in the file.
Automated Detection
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
Related Guidelines
Source: https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152071
Thread Life Cycle Example in Java
In this section we will read about the life cycle of a Thread in Java. The life cycle of a Thread explains its various states; the life cycle of a Thread ends when its run() method's execution is completed.
Explain about threads: how do you start a program in threads?
A thread is created by extending the Thread class. Threads have three stages in their life.
Learn Threads
A thread is a path of execution of a program. A program can have more than one thread; every program has at least one thread.
Threads in Java
Threads in Java help with multitasking. They can stop or suspend a specific thread and allow other threads to execute.
Example of Threads in Java:
public class Threads {
  public static void main(String[] args) {
    Thread th = new Thread();
    th.start();
  }
}
disadvantage of threads
What is the disadvantage of threads?
Hello,
Some Java libraries are not thread safe, so you should be very careful while using them. Let's discuss the disadvantages of threads.
Sync Threads
If two threads want to execute a synchronized method in a class, and both threads are using the same instance of the class to invoke it, only one thread can run the method at a time.
Java Threads - Java Beginners
Why do we use the synchronized() method? Hi Friend, the thread that makes a resource available can notify the other threads.
Life cycle of Servlet
This article discusses the life cycle of a Servlet and teaches you the Servlet life cycle methods. As a beginner you should understand them.
threads in java - Java Beginners
What is the difference between preemptive scheduling and time slicing?
Hi friend, in preemptive scheduling, a thread runs until it enters the waiting or dead state or a higher-priority thread comes into existence.
Synchronized Threads
In Java, threads are executed independently of each other. Java's synchronized is used to ensure that only one thread is in a critical section at a time. For example, if several threads were sharing a stack, only one thread at a time should be allowed to modify it.
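The point above can be made concrete with a minimal sketch (the class and variable names here are my own, not from the forum posts): two threads hammer a shared counter, and the synchronized method guarantees that only one thread is inside the critical section at a time, so no increments are lost.

```java
// Sketch: synchronized ensures mutual exclusion on the shared counter.
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() { count++; }  // critical section
    public int getCount() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter counter = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counter.getCount());  // prints 20000
    }
}
```

Without the synchronized keyword, the two threads could interleave their read-modify-write steps and the final count would usually fall short of 20000.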
Thread in java
Overview of Threads
Threading in Java
Thread Creation
A thread is a lightweight process which exists within a program and is executed to perform a special task. Several threads of execution may be associated with a program.
Threads
class Extender extends Thread {
  Extender(Runnable r) { /* ... */ }
  public void run() {
    System.out.println("Extender Thread is Started :");
    // new Thread(new Implementer()).start();
  }
}
Java threads
What are the two basic ways in which classes that can be run as threads may be defined?
multi threads - Java Beginners
Hi, I am writing a multi-threaded program in Java using three threads. I want to declare variables which will be available for all the threads to access. Is there a way to declare the variables as global variables?
java threads - Java Beginners
What is a Thread in Java and why is it used?
Threads
public class P3 extends Thread {
  void waitForSignal() throws InterruptedException {
    Object obj = new Object();
    // ...
  }
}
// Throws: Exception in thread "main" java.lang.IllegalMonitorStateException
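The IllegalMonitorStateException above is what you get when wait() or notify() is called on an object whose monitor the current thread does not own. Here is a minimal sketch of the correct pattern (class and field names are my own): synchronize on the object before calling wait()/notify(), and guard the wait in a loop.

```java
// Sketch: wait()/notify() must be called while holding the object's monitor.
public class WaitDemo {
    static final Object lock = new Object();
    static boolean signaled = false;
    static volatile boolean received = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {            // must own the monitor to wait()
                while (!signaled) {          // guard against spurious wakeups
                    try {
                        lock.wait();         // releases the lock while waiting
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                received = true;
                System.out.println("got signal");
            }
        });
        waiter.start();
        synchronized (lock) {                // must own the monitor to notify()
            signaled = true;
            lock.notify();
        }
        waiter.join();
    }
}
```

Calling lock.wait() outside the synchronized block is exactly what triggers the exception shown in the snippet.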
Execution of Multiple Threads in Java
Can anyone tell me how multiple threads get executed in Java? I mean to say that after having called start(), how do I instantiate more than two instances of the same class which extends Thread?
threads - Java Interview Questions
What is thread safe? Give one example of an implementation of a thread-safe class.
Hi friend, thread-safe code is code that will work even if many threads are executing it simultaneously.
Threads - Java Interview Questions
There are two ways of creating a thread: extending the Thread class or implementing Runnable. Which one is the best way to create a thread? Please help me. Hi
threads - Java Interview Questions
How will one thread know that another thread is being processed?
Daemon Threads
In Java, any thread can be a daemon thread. Daemon threads are like a service thread. Daemon threads are used for background supporting tasks and are only needed while normal threads are executing.
threads in java
I am getting "the local variable is never read" in Eclipse in the main class:
class Synex4 {
  public static void main(String args[]) {
    Test1 ob1 = new Test1(); // local variable never read
Threads - Java Beginners
Hi, how do I execute a threads program in Java?
private volatile int curFrame;
private Thread timerThread;
Threads - Java Beginners
Hi all, can anyone tell me in detail about the following question: when we start a thread by using t.start(), how does it know to execute the run() method?
Thanks in advance.
Vinod
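A short answer to the question above, as a runnable sketch (the class and names are my own): start() asks the JVM to create a new call stack for the thread and then invoke its run() method there; calling run() directly would just execute it on the current thread.

```java
// Sketch: start() schedules the thread; the JVM then calls run() on it.
public class StartDemo extends Thread {
    static volatile String runBy = "nobody";

    @Override
    public void run() {
        runBy = Thread.currentThread().getName();  // records who ran us
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new StartDemo();
        t.setName("worker");
        t.start();   // JVM creates a new call stack and invokes run() on it
        t.join();    // wait for the worker thread to finish
        System.out.println("run() executed by: " + runBy);  // prints "run() executed by: worker"
    }
}
```

Had we called t.run() instead of t.start(), the output would be the main thread's name, because no new thread would have been created.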
Java - Threads in Java
Thread is a feature of most languages, including Java. Throughput can be increased by using threads, because a thread can stop or suspend a specific task and allow other threads to proceed.
java threads - Java Interview Questions
How can you change the priority number of a thread?
Hi, in Java the JVM defines priorities for Java threads in the range of 1 to 10. Following are the constants defined for thread priority.
interfaces, exceptions, threads
Sir, I am a Java beginner; I want to know about Exception Handling in Java and Threads.
A thread is a lightweight process; a program with multiple threads is referred to as a multi-threaded process.
creating multiple threads - Java Beginners
Demonstrate a Java program using multiple threads to create a stack and perform both push and pop operations synchronously.
class MyThread extends Thread {
  MyThread(String s) {
    super(s);
    start();
  }
}
Diff between Runnable Interface and Thread class while using threads
What is the difference between the Runnable interface and the Thread class while using threads?
Hi Friend,
Difference:
1) If you want to extend the Thread class...
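The two approaches can be shown side by side in a minimal sketch (all names here are my own): extend Thread and override run(), or implement Runnable and hand the instance to a Thread. Implementing Runnable is generally preferred when the class needs to extend something else.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: the two basic ways to define a class that runs as a thread.
public class TwoWays {
    static final AtomicInteger runs = new AtomicInteger();

    static class ByExtending extends Thread {
        @Override public void run() {
            runs.incrementAndGet();
            System.out.println("extended Thread");
        }
    }

    static class ByImplementing implements Runnable {
        @Override public void run() {
            runs.incrementAndGet();
            System.out.println("implemented Runnable");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new ByExtending();                 // way 1: subclass Thread
        Thread b = new Thread(new ByImplementing());  // way 2: implement Runnable
        a.start(); b.start();
        a.join();  b.join();
        System.out.println("runs: " + runs.get());    // prints "runs: 2"
    }
}
```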
Jsp life cycle.
What is the JSP life cycle?
After translation, a JSP (Java Server Page) works like a servlet. Its life cycle contains 7 phases; one of these methods is called only one time during the JSP life cycle.
Daemon Threads
This section describes daemon threads in Java. Any thread can be a daemon thread. Daemon threads are service providers for other threads running in the same process; these threads are created by the JVM for background tasks.
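A minimal sketch of the idea (names are my own): a thread is marked as a daemon with setDaemon(true) before start(), and the JVM exits without waiting for it once all user threads have finished.

```java
// Sketch: a daemon thread is killed abruptly when the last user thread ends.
public class DaemonDemo {
    public static void main(String[] args) {
        Thread daemon = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(1000);  // background service work would go here
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        daemon.setDaemon(true);          // must be called before start()
        daemon.start();
        System.out.println("daemon? " + daemon.isDaemon());  // prints "daemon? true"
        // main returns; the JVM does not wait for daemon threads, so the
        // background loop above is terminated when all user threads finish
    }
}
```

Calling setDaemon() after start() would throw IllegalThreadStateException, which is why the flag is set first.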
Creating multiple Threads
In this section you will learn how to create multiple threads in Java. Threads are lightweight processes. You can set or get the priority of a thread; the Java API provides methods for this. There are two ways to create a thread in Java.
Java Thread
A higher-priority thread runs before another thread having a lower priority.
Life cycle of thread:
Diagram - here is a pictorial representation of the thread life cycle.
States of the Thread life cycle -
New : this is the state where a new thread is created.
threads
What are threads? What is their use in programming?
SCJP Module-8 Question-4
Given a sample code:
public class Test3 {
public static void main(String args[]) {
Test1 pm1 = new Test1("Hi");
pm1.run();
Test1 pm2 = new Test1("Hello");
pm2.run();
}
}
class Test1 extends Thread {
private
implementing an algorithm using multi threads - Java Beginners
I need to pass data from one thread to another thread. I am posting my algorithm, which needs to be broken down into two or three threads and implemented:
double d = -6;
double F = 0.0;
double lamsq = 0.25;
Thread t1 = new Thread
Count Active Thread in JAVA
In this tutorial, we are using the activeCount() method of Thread to count the current active threads. Thread.activeCount() returns an estimate of the number of active threads in the current thread's thread group.
Example :
class ThreadCount { ... }
threads - Java Interview Questions
What is the difference between notify() and notifyAll()?
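Returning to the activeCount() tutorial above, here is a minimal runnable sketch (the class body is my own; only the method it demonstrates comes from the text): activeCount() reports an estimate of the active threads in the current thread's group.

```java
// Sketch of Thread.activeCount(): an estimate, not an exact count.
public class ThreadCountDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        });
        worker.start();
        // At least the main thread and the sleeping worker are active here
        System.out.println("active threads: " + Thread.activeCount());
        worker.join();
    }
}
```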
Learn Servlet Life Cycle
A servlet is a Java programming language class. To learn the Servlet life cycle, we would need to cover the entire process, starting from the phase in which the servlet container loads the servlet from web.xml.
threads
Thread
Write a Java program to create three threads. Each thread should produce the sum of 1 to 10, 11 to 20 and 21 to 30 respectively. The main thread should combine the results.
Java Thread Example
class ThreadExample {
  static int
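Since the example above is truncated, here is one possible completion of the exercise (a sketch with my own names, not the original ThreadExample code): each thread sums its own range into its own slot, and the main thread joins them and folds the partial sums together.

```java
// Sketch: three threads sum 1..10, 11..20 and 21..30; main combines them.
public class RangeSum {
    static int total;   // combined by the main thread after joining

    public static void main(String[] args) throws InterruptedException {
        int[] partial = new int[3];
        int[][] ranges = { {1, 10}, {11, 20}, {21, 30} };
        Thread[] workers = new Thread[3];
        for (int i = 0; i < 3; i++) {
            final int idx = i;
            workers[i] = new Thread(() -> {
                for (int n = ranges[idx][0]; n <= ranges[idx][1]; n++) {
                    partial[idx] += n;   // each thread owns one slot: no races
                }
            });
            workers[i].start();
        }
        total = 0;
        for (int i = 0; i < 3; i++) {
            workers[i].join();           // wait, then fold in the partial sum
            total += partial[i];
        }
        System.out.println(total);       // prints 465 (55 + 155 + 255)
    }
}
```

Giving each thread its own array slot avoids the need for synchronization; only the main thread reads the slots, and only after join() guarantees the writers have finished.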
Creation of Multiple Threads
C:\nisha>java MultiThread1
Thread Name : main
Thread Name : MyThread
In this program, two threads are created along with the "main" thread. During execution of the program, both threads are registered with the thread scheduler.
Bean life cycle in spring
This example gives you an idea of how to initialize a bean in the program and how to retrieve the values of the bean using a Java file, as in the file given below.
Java thread
What method must be implemented by all threads?
Java thread
Can we have a run() method directly, without a start() method, in threads?
Examples on threads and multithreading
Are there any good examples on threads and multithreading?
Hi Friend, please visit the following link: Thread Tutorial. Thanks
pls tell me the difference between the run() and start() in threads in java
What is the difference between the run() and start() methods in Java threads?
Thread Priorities
In Java, the thread scheduler can use thread priorities to determine the execution schedule of threads. A thread gets the ready-to-run state according to its priority. When a Java thread is created, it inherits its priority from the thread that created it.
how to create a reminder app using threads in Servlets?
I want to create a reminder app using threads in Servlets (threads will be required!): a pop-up window or web page should automatically get redirected. I have used threads in core Java, but never in Servlets.
Thread - Java Beginners
Can anyone explain the concept of a thread, thread creation and the use of threads in a Java application?
What Is Thread In Java?
In this section we will read about threads in Java. A thread is a path of execution within a program; a thread in Java follows a life cycle from its creation. Before defining a Thread in Java, we give the description of the example.
Stateful and Stateless Session Bean Life Cycle
Understanding the Stateful and Stateless Session Bean life cycle: there are two stages in the life cycle of a Stateless Session Bean, ending when the container puts the bean into the Does Not Exist state. The following diagram shows the life cycle.
Java Thread
Java Thread Tutorials
In this tutorial we will learn about Java threads in detail. The Java Thread class helps the programmer to develop threaded applications in Java. A thread is simply a path of execution of a program, managed by the Java Virtual Machine.
Java thread
Why do threads block on I/O? When a thread blocks on I/O, some other thread which is not waiting for that I/O gets a chance to execute. If input is not available, the thread that requested it is suspended until the I/O completes.
Thread
What is multi-threading? Explain the different states of a thread.
A thread starts its life in the Runnable state.
Java Multithreading
Multithreading is a technique that allows a program to perform several tasks at the same time.
Thread
Thread Life Cycle
Daemon thread
What is a daemon thread? Why do we need daemon threads?
Daemon threads are the service providers for other threads running in the same process. The JVM exits when only daemon threads remain, killing them abruptly. Any thread can be a daemon thread.
Daemon thread - Java Beginners
Hi, what is a daemon thread?
A daemon thread is a thread which runs in the background, like the garbage collection thread. For more information, visit the following link. Thanks
Chapter 4. Session Bean Life Cycle
Identify correct and incorrect statements or examples about the life cycle of a stateful or stateless session bean instance.
Thread scheduling
Java uses fixed-priority scheduling algorithms to decide which thread to run on the basis of its priority relative to other runnable threads. When a thread is started, Java makes a lower-priority thread wait if more than one runnable thread exists.
returning a value from Threads
Hello, I have a worker pattern that uses ExecutorService to schedule thread execution, but I get stuck when returning a value using Future. I have a code snippet that begins with an ExecutorService.
Thread Context
The Thread Context is required by the current thread, from the group of threads, to execute. In Java this is achieved through the ThreadContext class.
Green Thread - Java Beginners
Please explain green threads in Java. Thanks in advance.
Hi friend, green threads are simulated threads within the VM and were used prior to the move to a native OS threading model in 1.2 and beyond. Green threads may have had an advantage in some situations.
System Development Life Cycle (SDLC)
Among all the models, the System Development Life Cycle (SDLC) Model is one of the oldest; it is also called the Linear Sequential Model or Waterfall Method. This model produces requirements and other important documentation; this is the most important phase of the cycle.
Running threads in servlet only once - JSP-Servlet
Hi All, I am developing a project with multiple threads which run to check a database continuously; these separate threads have to run as a background process while the main thread is running.
thread dump
Hi, I wanted to understand the Locked/waiting state below in the Java thread dump. Is it normal to have a number of threads waiting on a locked object where the waiting-on-locked values are the same?
Java Thread : setDaemon() method
In this section we are going to describe the setDaemon() method with an example of a Java thread.
Daemon Thread :
In Java, daemon threads are used to perform services for user threads.
Thread and Process - Java Beginners
Dear Deepak Sir, what is the difference between a thread and a process?
A process has its own address space; a thread doesn't. Threads typically share the heap belonging to their parent process, but threads have their own stack space.
How can I combine threads and buttons?
I would like to start and stop a thread from Swing buttons:
private JButton jbtStop = new JButton("Stop");
Thread print100 = new Thread(new SiThread());
public TestActionEvent() {
  setTitle("TestActionEvent");
}
java thread - Java Beginners
PROJECT WORK: Create an application using threads. AccountManager.java: the AccountManager class demonstrates creation of Thread objects for funds transfer. Create two threads and initiate the execution of both threads. Display the results.
Java : Thread setPriority Example
In this tutorial you will learn how to set thread priority for a Java thread.
Thread setPriority() :
The thread scheduler uses the thread priority concept: a higher-priority thread is favored by the scheduler.
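A minimal sketch of setPriority() (the class name is my own): priorities range from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10), a new thread inherits the priority of its creator, and the priority is only a hint to the scheduler.

```java
// Sketch of Thread.setPriority()/getPriority().
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> System.out.println("running"));
        System.out.println("default: " + t.getPriority());  // inherited, usually 5
        t.setPriority(Thread.MAX_PRIORITY);                  // range: 1..10
        System.out.println("raised: " + t.getPriority());    // prints "raised: 10"
        t.start();
        t.join();
    }
}
```

Passing a value outside the 1..10 range makes setPriority() throw IllegalArgumentException, and the effective priority is also capped by the thread group's maximum.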
thread
Hi, what threads are required to execute a program, other than the main thread?
Thanks,
kalins naik
Source: http://www.roseindia.net/tutorialhelp/comment/94923
The latest version of the book is P (04-Jul-15)
PDF page: 18
Spent a lot of time getting the sample code to run with Ruby. It seems that the require statement has changed since Ruby 1.9 to not include the local directory. Supposed to use require_relative instead. This was fixed quite a while ago in Ruby since 1.9 due to security issue. Not sure why the code is still using the old format? --Tony Dahbura
- Reported in: P1.0 (24-Dec-15)
Paper page: 22
The rationale for the column value in the random_cell method would be useful to explain. It took me a while to realize it was for non-rectangular grids, since you didn't just use
column = rand(@columns)--Matt Ray
- Reported in: B7.0 (27-Jun-15)
PDF page: 24
Code examples with line numbers would read more nicely in the kindle version if you label the first line "1" instead of "Line 1".
As it is currently, the extra characters mess up the formatting of the first line in each of the code examples that have line numbers.--Nick Lavers
- Reported in: P1.0 (11-Mar-16)
Paper page: 30
Parameters use the equal sign to assign default values.
So the method declaration should read:
def to_png( cell_size = 10 )--Holger
- Reported in: P1.0 (16-Aug-15)
PDF page: 43
Quoting from the text:
Each iteration of that loop examines all of the linked
neighbors of the current cell (line 8), looking for one that is closer to the root
(line 9).
This, however, does not seem to be (strictly speaking) the case, since the code includes a break at line 12.
I think the result will be the same, since this is a perfect maze, but I was a bit confused by this.
- Reported in: P1.0 (20-Jul-15)
PDF page: 71
Text "(for example, binary_tree_demo.rb or wilsons.rb)". It should read wilsons_demo.rb.--Carl
- Reported in: P1.0 (14-Feb-17)
Paper page: 200
The vertical bias attributed to the recursive subdivision algorithm is a consequence of the implementation rather than the nature of the algorithm itself.
The code for the recursive subdivision algorithm (page 200 & online) indicates that a section is split horizontally if & only if the section height is greater than its width; otherwise it is split vertically. As a result, in any case in which the section width & height are equal, the section will always be split vertically. The lack of random chance in the case of equality biases the maze toward vertical passages. Changing the code to give square sections a 50/50 chance to split vertically or horizontally will remove the bias.
Subsequently, the comparison of recursive subdivision algorithm to other algorithms in Appendix 2 (pages 260-261), describes & illustrates the bias found in the implementation as given in the text, but would be incorrect if the implementation is fixed as described above.--Jonathan Pikalek
Source: https://pragprog.com/titles/jbmaze/errata
hi,
I have a problem with JPA on ServiceMix.
When my bundle starts, I create a route builder. In the constructor of the class, I query a database to learn the route definition to create. It is only at this point that I know the name of the datasource to use (already declared in OSGi). I have to define a JPA consumer which will use this database. But in the documentation and examples, the JPA persistence is defined in an XML file in the bundle's META-INF.
I don't know how to tell JPA to use the datasource name that I only have at creation of the route builder.
Can I leave the persistence.xml file in the bundle and change the name of the datasource in the route builder? Or should I create a persistenceUnit in the route builder? And in both cases, how?
I have this in my persistence.xml file:
<jta-data-source>osgi:services/javax.sql.DataSource/(name=aDataSource)</jta-data-source>
but I do not know what name is given to the datasource when starting the bundle.
A+ JYT
public class MyRouteBuilder extends RouteBuilder {
    private final String MY_MODULE_NAME = "logistique.livraison";
    protected ModuleConfig moduleConfig = null;

    public MyRouteBuilder() {
        super();
        // Get configuration from external config service
        moduleConfig = ConfigHelpper.getConfig(MY_MODULE_NAME);
        logger.log(moduleConfig.getInDatasource());
    }

    public void configureRoute() throws ExceptionOlympe {
        // how to use the value of moduleConfig.getInDatasource() in
        // jpa://my.domain.model.Livraison?consumer.delay=2000
        from("jpa://my.domain.model.Livraison?consumer.delay=2000")
            .bean(...)
            ...
        ;
    }
}
Edited by: sekaijin on Sep 12, 2012 4:01 PM
Source: https://developer.jboss.org/thread/247465
ric@giccs.georgetown.edu
aixterm
wsh, xwsh and winterm
cmdtool and shelltool
dtterm
This document is now part of the Linux HOWTO Index and can be found at.
The latest version can always be found in several formats at.
This document supersedes the original howto written by Winfried Trümper.
A static title may be set for any of the terminals
xterm,
color-xterm or
rxvt, by using the
-T and
-n switches:
xterm -T "My XTerm's Title" -n "My XTerm's Icon Title"
It may be useful to write a small program to print an argument to
the title using the
xterm escapes. Some examples are provided
below.
#include <stdio.h>
int main (int argc, char *argv[])
{
  printf("%c]0;%s%c", '\033', argv[1], '\007');
  return(0);
}
#!/usr/bin/perl print "\033]0;@ARGV\007";
Thanks.
Source: http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/Xterm-Title.html
How to Deploy Symfony Apps with Capifony
How does Capifony work?
Before we start, it’s important to understand how Capifony works. By running the deploy command, Capifony runs certain commands performing different tasks. For example, it will download composer, install the dependencies and clear the cache.
The directory structure is very important. Capifony needs two directories and one symlink. The first directory it needs is called
releases. Every time you deploy, a new directory is created within this directory. Capifony pulls in your git repository and runs all commands on this newly created directory.
The second directory is named
shared. You can imagine that some directories are shared between releases. For instance, if you allow people to upload images, you want to make sure that these files are shared between releases. These directories and files are typically stored in the
shared directory.
Next to these two directories, we have a symlink called
current. This symlink points to the latest successful release. So, when you deploy a new version, a new directory will be created within the
releases directory. If all tasks succeed on this directory, the
current symlink will point to this new version.
You should point your web server to read from this symlink so it always uses the correct, latest version.
Installing Capifony
Let’s cut the theoretic part and dive into deployment. For that, we need to install Capifony. Make sure Ruby is installed on your system before proceeding. You can install the Capifony gem by running
gem install capifony.
Within your application directory, run the command
capifony .. This command will create a file named
Capfile in your root directory and a
deploy.rb in your configuration directory. You will alter the
deploy.rb file during this article, so make sure you have it open in your favorite editor.
Now you have to decide what your deploy strategy will be. Either you choose to let your production server access your SCM (Source Control Management) or your local computer pulls in your repository from the SCM and copies it to your production server.
Within this article, we will look at the first strategy. If you are interested in the second strategy, have a look at the official Capifony website for instructions.
Configure your project
I am going to use this project and deploy it to a production server. I got the application checked out on my machine, so it’s time to run
capifony ..
$ capifony .
[add] writing './Capfile'
[add] writing './app/config/deploy.rb'
[done] symfony 2 project capifonied!
When you open up the
deploy.rb script, you will see the following content.
set :application, "set your application name here"
set :domain,      "#{application}.com"
set :deploy_to,   "/var/www/#{domain}"
set :app_path,    "app"

set :repository,  "#{domain}:/var/repos/#{application}.git"
set :scm,         :git
# Or: `accurev`, `bzr`, `cvs`, `darcs`, `subversion`, `mercurial`, `perforce`, or `none`

set :model_manager, "doctrine"
# Or: `propel`

role :web,        domain                   # Your HTTP server, Apache/etc
role :app,        domain, :primary => true # This may be the same as your `Web` server

set :keep_releases, 3

# Be more verbose by uncommenting the following line
# logger.level = Logger::MAX_LEVEL
It’s time to change this configuration file. We start off with the top 4 parameters. First off, we define what the name of our application is, what the domain to deploy to is, what the directory will be and where the app path is. If you are using the default Symfony setup, the app path will already be configured correctly. So far my configuration looks like this.
set :application, "Jumph"
set :domain,      "peternijssen.nl"
set :deploy_to,   "/srv/www/jumph"
set :app_path,    "app"
Let’s configure our repository. Since we are using a git repository, we should set the SCM to git and point the repository to our Github repository.
set :repository,  "git@github.com:jumph-io/Jumph.git"
set :scm,         :git
Up next we define our model manager. In my case I am using Doctrine, but if you are using Propel, you should change the configuration value to “propel”.
We can skip the roles. They are just reusing your domain.
The last setting is the
keep_releases setting. With this setting, you can define how many releases you may want to keep, allowing you to rollback to a previous version.
So far, we changed all the default config variables to the correct values. However, Symfony requires more configuration to be deployed. In my case, I am both using Assetic as well as Composer. This means I have to add the following settings to the file.
set :dump_assetic_assets, true
set :use_composer, true
Now we need to configure the shared files. For example, your
parameters.yml should be shared between every release. Next to that, it’s wise to also share your uploads, your logs and your vendor between releases. If you are not sharing your vendor between every release, it means your deploy is installing all vendors every single time. To set these shared paths, we just add the following configuration.
set :shared_files,    ["app/config/parameters.yml"]
set :shared_children, [app_path + "/logs", web_path + "/uploads", "vendor", app_path + "/sessions"]
Note that in my case I also moved the session directory outside the cache directory. This way I am able to share this directory between releases and nobody gets logged out on a new deploy. Do note you need to change the configuration within Symfony also to reflect this change.
session: save_path: "%kernel.root_dir%/sessions/"
Configure your server
So far everything is ready for our Symfony application. Now it’s time to configure everything for our server. We do this within the same config file as above.
When deploying, Capifony runs as root. If you prefer to run it as your own user, you can add the following lines to your configuration.
set :use_sudo, false
set :user, "peter"
It’s also important to make sure your web server user is able to write to certain directories. This can be done by adding the following settings.
set :writable_dirs,       ["app/cache", "app/logs", "app/sessions"]
set :webserver_user,      "www-data"
set :permission_method,   :acl
set :use_set_permissions, true
Note: You might need to install certain packages on your server. For more information regarding permissions, please check this page.
Now we can tell Capifony to prepare the directories on your server. We can do this by running
cap deploy:setup. Do make sure you have SSH access to the server and the directory is writable by your user of course.
Note: In my case I had to add
default_run_options[:pty] = true to my configuration due to a known issue.
After the command has been run, you will notice it created the
releases and
shared directories on your server.
Now you should be able to deploy by running
cap deploy on your command line. If you bump into any problems, you can add the following line to your configuration file, to get more information about the error.
logger.level = Logger::MAX_LEVEL
In my case, I was unable to access the git repository due to an invalid public key. Since my local computer can access the repository, I just had to forward my SSH agent to the server. This can be done by adding the following line to the deploy script.
ssh_options[:forward_agent] = true
Since it’s your first deployment, Capifony will ask you for the credentials of the
parameters.yml file. You only have to fill them in once, since we configured the file to be shared across releases.
Adding additional commands
If you tried to deploy the repository I mentioned earlier, you will notice it fails when Assetic tries to dump its files. This is due to the fact that I am managing my JavaScript and CSS dependencies through Bower. So before Assetic dumps the files, I should first run
bower install.
Capifony by default has no support for bower, so we have to expand the list of tasks that Capifony performs. We add an additional task by adding it to the configuration file.
before 'symfony:assetic:dump', 'bower:install'

namespace :bower do
  desc 'Run bower install'
  task :install do
    capifony_pretty_print "--> Installing bower components"
    invoke_command "sh -c 'cd #{latest_release} && bower install'"
    capifony_puts_ok
  end
end
The task is quite easy to understand. First we define when the task should run. In this case, we want to run it before Assetic dumps its files. Next we define which task it should run.
The last thing we need to do is to define this new task. We do so by creating a task within a namespace and write down which command to run. In this task, we first make sure we are in the correct directory and then run
bower install.
Additionally, I added some output before and after the command. This way, it will show up in the
cap deploy command when running. It gives you some extra feedback as you can see below.
$ cap deploy
--> Updating code base with checkout strategy
--> Creating cache directory................................✔
--> Creating symlinks for shared directories................✔
--> Creating symlinks for shared files......................✔
--> Normalizing asset timestamps............................✔
--> Downloading Composer....................................✔
--> Installing Composer dependencies........................✔
--> Installing bower components.............................✔
--> Dumping all assets to the filesystem....................✔
--> Warming up cache........................................✔
--> Clear controllers.......................................✔
--> Setting permissions.....................................✔
--> Successfully deployed!
Additional commands
In the beginning, we decided to keep at least 3 releases. If something should go wrong in the new release, you can rollback by running the command
cap deploy:rollback.
Additionally, you can activate or deactivate the Symfony maintenance page by running
cap deploy:web:disable or
cap deploy:web:enable.
Capifony offers more commands that might come in handy. For a full list, run
cap -vT.
Complete configuration
As a reference, this is the complete configuration file which we created through this article.
set :application, "Jumph"
set :domain, "peternijssen.nl"
set :deploy_to, "/srv/www/jumph"
set :app_path, "app"

set :repository, "git@github.com:jumph-io/Jumph.git"
set :scm, :git

set :model_manager, "doctrine"

role :web, domain
role :app, domain, :primary => true

set :use_sudo, false
set :user, "peter"
set :keep_releases, 3

set :dump_assetic_assets, true
set :use_composer, true

set :shared_files, ["app/config/parameters.yml"]
set :shared_children, [app_path + "/logs", web_path + "/uploads", "vendor", app_path + "/sessions"]
set :writable_dirs, ["app/cache", "app/logs", "app/sessions"]
set :webserver_user, "www-data"
set :permission_method, :acl
set :use_set_permissions, true

ssh_options[:forward_agent] = true
default_run_options[:pty] = true

before 'symfony:assetic:dump', 'bower:install'

namespace :bower do
  desc 'Run bower install'
  task :install do
    capifony_pretty_print "--> Installing bower components"
    invoke_command "sh -c 'cd #{latest_release} && bower install'"
    capifony_puts_ok
  end
end
Conclusion
Capifony makes your life easier when it comes to deploying your Symfony application. Although we have already seen a lot of the options Capifony offers, you might want to dig deeper. You can check out their website for more information. Are you using Capifony to deploy your Symfony applications? Let us know if you run into any difficulties or have any questions!
Source: https://www.sitepoint.com/deploy-symfony-apps-capifony/
Writing Pipeline Components
This page describes common information needed to write generators, transformers and serializers.
Using the sitemap, you describe how pipelines should be assembled. The resulting pipeline always has the following structure: one generator, zero or more transformers, and one serializer. The only exception to this rule is readers, which implicitly combine the generation and serialization steps and thus also prevent you from declaring transformation steps (within that pipeline). The generator could also be an aggregator, which is just a special type of generator.
These components (generator, transformer(s) and serializer) are put together in what is usually called a "SAX pipeline". In this pipeline, SAX events are propagated. SAX events are events corresponding to what appears in an XML document, "start element", "characters" and "end element" being the most important. A generator will start generating SAX events. The transformer that follows the generator will receive these events and can react to them, producing new SAX events itself. The SAX events generated by the transformer will then be received by the next transformer, which will then again start generating new events, and so on. At the end, the SAX events are consumed by the serializer, which will convert the SAX events to some binary stream. This could be an XML document, but also an HTML document, an image, or a PDF file. The following picture shows this concept.
While Cocoon sets up the pipeline, it will assign a "consumer" to each generator and transformer. The consumer is where the SAX events should be sent to, thus the next component in the pipeline. The consumer of the generator will be the first transformer, the consumer of the first transformer will be the second transformer, and so on. Finally, the consumer of the last transformer will be the serializer. The serializer itself does not have a consumer, but an output stream to which it can write its bytes. All the consumers must implement the SAX "ContentHandler" interface. This interface contains methods for each of the SAX events. The SAX events are then propagated in the pipeline by calling ContentHandler methods on the consumer (= the next component in the pipeline).
To give an idea of the different SAX events, an abridged version of the ContentHandler interface is displayed here (the full interface declares additional methods, such as startDocument and endDocument):

public interface ContentHandler
{
    public void characters (char ch[], int start, int length)
        throws SAXException;
    public void startElement (String namespaceURI, String localName,
                              String qName, Attributes atts)
        throws SAXException;
    public void endElement (String namespaceURI, String localName,
                            String qName)
        throws SAXException;
    // ...
}
Note that there is also another SAX event interface, the LexicalHandler. That interface is used to pass on less important information about XML, such as comments and CDATA sections. It is however not used very often.
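The event flow described above is easy to exercise with the JDK's built-in SAX parser. This minimal consumer (a sketch, not Cocoon code; the class name and event-trace format are my own) records the events it receives, exactly the way a transformer or serializer in the pipeline would:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxEventDemo {

    // A minimal "consumer": it receives the SAX events a generator produces
    // and records them in order.
    static String trace(String xml) throws Exception {
        StringBuilder events = new StringBuilder();
        DefaultHandler consumer = new DefaultHandler() {
            @Override public void startElement(String uri, String local,
                                               String qName, Attributes atts) {
                events.append("start:").append(qName).append(' ');
            }
            @Override public void characters(char[] ch, int start, int len) {
                events.append("chars:").append(new String(ch, start, len)).append(' ');
            }
            @Override public void endElement(String uri, String local, String qName) {
                events.append("end:").append(qName).append(' ');
            }
        };
        SAXParserFactory.newInstance().newSAXParser().parse(
            new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)), consumer);
        return events.toString().trim();
    }

    public static void main(String[] args) throws Exception {
        // prints: start:doc start:a chars:hi end:a end:doc
        System.out.println(trace("<doc><a>hi</a></doc>"));
    }
}
```

A real transformer would additionally forward each event (possibly modified) to its own consumer instead of just recording it.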
Source: http://wiki.apache.org/cocoon/WritingPipelineComponents?highlight=%28%28WritingReaders%29%29
Tasks¶
Tasks are where the execution takes place. Tasks depend on each other and output targets.
An outline of what a task can look like:
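A minimal, stdlib-only sketch of the shape (real tasks subclass luigi.Task and return luigi Target objects; the class and path names below are stand-ins, not from the Luigi docs):

```python
# Stdlib-only stand-ins so the sketch runs without luigi installed;
# real code would subclass luigi.Task and return e.g. luigi.LocalTarget.
class LocalTarget:
    def __init__(self, path):
        self.path = path


class Task:
    def requires(self):           # upstream Task objects this task depends on
        return []

    def output(self):             # Target(s) this task produces
        raise NotImplementedError

    def run(self):                # the actual work
        raise NotImplementedError


class SomeOtherTask(Task):
    def output(self):
        return LocalTarget('/tmp/other-output')

    def run(self):
        pass


class MyTask(Task):
    def requires(self):
        return [SomeOtherTask()]

    def output(self):
        return LocalTarget('/tmp/my-task-output')

    def run(self):
        pass  # do the work here, writing to self.output()
```

The three methods sketched here (requires, output, run) are exactly the ones the following sections describe.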
Task.requires¶
The requires() method is used to specify dependencies on other Task objects.
Requiring another Task¶
Note that requires() cannot return a Target object.
If you have a simple Target object that is created externally
you can wrap it in a Task class like this:
class LogFiles(luigi.ExternalTask):
    def output(self):
        return luigi.contrib.hdfs.HdfsTarget('/log')
This also makes it easier to add parameters:
class LogFiles(luigi.ExternalTask):
    date = luigi.DateParameter()

    def output(self):
        return luigi.contrib.hdfs.HdfsTarget(self.date.strftime('/log/%Y-%m-%d'))
Task.output¶
The
output() method returns one or more
Target objects.
Similarly to requires, you can return them wrapped up in any way that’s convenient for you.
However we recommend that any
Task only return one single
Target in output.
If multiple outputs are returned,
atomicity will be lost unless the
Task itself can ensure that each
Target is atomically created.
(If atomicity is not of concern, then it is safe to return multiple
Target objects.)
class DailyReport(luigi.Task):
    date = luigi.DateParameter()

    def output(self):
        return luigi.contrib.hdfs.HdfsTarget(self.date.strftime('/reports/%Y-%m-%d'))

    # ...
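The usual way a single Target is created atomically is the write-to-temp-then-rename pattern. A stdlib-only sketch of the idea (not Luigi's actual code; Luigi's own targets provide similar behaviour through their open() method):

```python
import os
import tempfile


def atomic_write(path, data):
    # Write to a temporary file in the target directory, then rename it
    # into place. os.replace is atomic on POSIX when source and destination
    # are on the same filesystem, so a reader either sees the old file or
    # the complete new one -- never a partially written target.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(data)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```

This is why returning one Target keeps atomicity for free, while several Targets would each need to be created this way and could still be observed in a half-finished combination.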
Task.run¶
The
run() method now contains the actual code that is run.
When you are using Task.requires and Task.run, Luigi breaks everything down into two stages.
First it figures out all dependencies between tasks,
then it runs everything.
The
input() method is an internal helper method that just replaces all Task objects in requires
with their corresponding output.
An example:
class GenerateWords(luigi.Task):

    def output(self):
        return luigi.LocalTarget('words.txt')

    def run(self):
        # write a dummy list of words to output file
        words = ['apple', 'banana', 'grapefruit']
        with self.output().open('w') as f:
            for word in words:
                f.write('{word}\n'.format(word=word))


class CountLetters(luigi.Task):

    def requires(self):
        return GenerateWords()

    def output(self):
        return luigi.LocalTarget('letter_counts.txt')

    def run(self):
        # read in file as list
        with self.input().open('r') as infile:
            words = infile.read().splitlines()

        # write each word to output file with its corresponding letter count
        with self.output().open('w') as outfile:
            for word in words:
                outfile.write(
                    '{word} | {letter_count}\n'.format(
                        word=word,
                        letter_count=len(word)
                    )
                )
Task.input¶
As seen in the example above,
input() is a wrapper around Task.requires that
returns the corresponding Target objects instead of Task objects.
Anything returned by Task.requires will be transformed, including lists,
nested dicts, etc.
This can be useful if you have many dependencies:
class TaskWithManyInputs(luigi.Task):
    def requires(self):
        return {'a': TaskA(), 'b': [TaskB(i) for i in range(100)]}

    def run(self):
        f = self.input()['a'].open('r')
        g = [y.open('r') for y in self.input()['b']]
Dynamic dependencies¶
Sometimes you might not know exactly what other tasks to depend on until runtime.
In that case, Luigi provides a mechanism to specify dynamic dependencies.
If you yield another
Task in the Task.run method,
the current task will be suspended and the other task will be run.
You can also yield a list of tasks.
class MyTask(luigi.Task):
    def run(self):
        other_target = yield OtherTask()

        # dynamic dependencies resolve into targets
        f = other_target.open('r')
This mechanism is an alternative to Task.requires in case you are not able to build up the full dependency graph before running the task. It does come with some constraints: the Task.run method will be restarted from scratch each time a new task is yielded. In other words, you should make sure your Task.run method is idempotent. (This is good practice for all Tasks in Luigi, but especially so for tasks with dynamic dependencies.)
For an example of a workflow using dynamic dependencies, see examples/dynamic_requirements.py.
Task status tracking¶
For long-running or remote tasks it is convenient to see extended status information not only on the command line or in your logs but also in the GUI of the central scheduler. Luigi implements dynamic status messages, progress bar and tracking urls which may point to an external monitoring system. You can set this information using callbacks within Task.run:
class MyTask(luigi.Task):
    def run(self):
        # set a tracking url
        self.set_tracking_url("http://...")

        # set status messages during the workload
        for i in range(100):
            # do some hard work here
            if i % 10 == 0:
                self.set_status_message("Progress: %d / 100" % i)
                # displays a progress bar in the scheduler UI
                self.set_progress_percentage(i)
Events and callbacks¶
Luigi has a built-in event system that allows you to register callbacks to events and trigger them from your own tasks. You can both hook into some pre-defined events and create your own. Each event handle is tied to a Task class and will be triggered only from that class or a subclass of it. This allows you to effortlessly subscribe to events only from a specific class (e.g. for hadoop jobs).
@luigi.Task.event_handler(luigi.Event.SUCCESS)
def celebrate_success(task):
    """Will be called directly after a successful execution
    of `run` on any Task subclass (i.e. all luigi Tasks)
    """
    ...


@luigi.contrib.hadoop.JobTask.event_handler(luigi.Event.FAILURE)
def mourn_failure(task, exception):
    """Will be called directly after a failed execution
    of `run` on any JobTask subclass
    """
    ...


luigi.run()
But I just want to run a Hadoop job?¶
The Hadoop code is integrated in the rest of the Luigi code because
we really believe almost all Hadoop jobs benefit from being part of some sort of workflow.
However, in theory, nothing stops you from using the
JobTask class (and also
HdfsTarget)
without using the rest of Luigi.
You can simply run it manually using
MyJobTask('abc', 123).run()
You can use the hdfs.target.HdfsTarget class anywhere by just instantiating it:
t = luigi.contrib.hdfs.target.HdfsTarget('/tmp/test.gz', format=format.Gzip)
f = t.open('w')
# ...
f.close()  # needed
Task priority¶
The scheduler decides which task to run next from the set of all tasks that have all their dependencies met. By default, this choice is pretty arbitrary, which is fine for most workflows and situations.
If you want to have some control on the order of execution of available tasks,
you can set the
priority property of a task,
for example as follows:
# A static priority value as a class constant:
class MyTask(luigi.Task):
    priority = 100
    # ...


# A dynamic priority value with a "@property" decorated method:
class OtherTask(luigi.Task):
    @property
    def priority(self):
        if self.date > some_threshold:
            return 80
        else:
            return 40
    # ...
Tasks with a higher priority value will be picked before tasks with a lower priority value. There is no predefined range of priorities, you can choose whatever (int or float) values you want to use. The default value is 0.
Warning: task execution order in Luigi is influenced by both dependencies and priorities, but in Luigi dependencies come first. For example: if there is a task A with priority 1000 but still with unmet dependencies and a task B with priority 1 without any pending dependencies, task B will be picked first.
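The warning above can be made concrete with a tiny, stdlib-only sketch of the selection rule (the values are hypothetical and this is not the scheduler's real code):

```python
import heapq

# Hypothetical snapshot of the scheduler's view:
# (name, priority, number of unmet dependencies)
tasks = [('A', 1000, 1),   # high priority, but one dependency still pending
         ('B', 1, 0)]      # low priority, all dependencies met

# Only tasks with no unmet dependencies are candidates at all;
# among the candidates, the highest priority wins.
ready = [(-priority, name) for name, priority, unmet in tasks if unmet == 0]
heapq.heapify(ready)
picked = ready[0][1]
print(picked)  # B -- A's priority is irrelevant while its dependency is unmet
```

Once A's dependency completes, A would enter the candidate set and its priority of 1000 would put it ahead of everything else.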
Namespaces, families and ids¶
In order to avoid name clashes and to have an identifier for tasks, Luigi introduces the concepts task_namespace, task_family and task_id. The namespace and family operate at the class level, while the task id only exists at the instance level. The concepts are best illustrated using code.
import luigi

class MyTask(luigi.Task):
    my_param = luigi.Parameter()
    task_namespace = 'my_namespace'

my_task = MyTask(my_param='hello')
print(my_task)                       # --> my_namespace.MyTask(my_param=hello)

print(my_task.get_task_namespace())  # --> my_namespace
print(my_task.get_task_family())     # --> my_namespace.MyTask
print(my_task.task_id)               # --> my_namespace.MyTask_hello_890907e7ce

print(MyTask.get_task_namespace())   # --> my_namespace
print(MyTask.get_task_family())      # --> my_namespace.MyTask
print(MyTask.task_id)                # --> Error!
The full documentation for this machinery exists in the
task module.
Instance caching¶
In addition to the stuff mentioned above,
Luigi also does some metaclass logic so that
if e.g.
DailyReport(datetime.date(2012, 5, 10)) is instantiated twice in the code,
it will in fact result in the same object.
See Instance caching for more info.
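A stdlib-only sketch of how such metaclass-based instance caching can work (Luigi's real implementation also normalizes keyword arguments and parameter defaults; the class and method names here are illustrative):

```python
import datetime


class InstanceCache(type):
    # Metaclass sketch: constructing a class twice with identical positional
    # arguments returns the same object instead of a new one.
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        cls._instances = {}

    def __call__(cls, *args):
        if args not in cls._instances:
            cls._instances[args] = super().__call__(*args)
        return cls._instances[args]


class DailyReport(metaclass=InstanceCache):
    def __init__(self, date):
        self.date = date


a = DailyReport(datetime.date(2012, 5, 10))
b = DailyReport(datetime.date(2012, 5, 10))
print(a is b)  # True -- the second call returned the cached instance
```

Caching on constructor arguments is what lets a dependency graph mention the same task many times while only one object (and one execution) exists for it.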
Source: https://luigi.readthedocs.io/en/stable/tasks.html
Solution for: Error in using swap() STL while implementing Queue using two Stacks
I was trying to implement a Queue class using two stacks and I noticed an error. If I swap both stacks (after the pop and front operations) using the STL swap() operation, I get a wrong answer, but when I swap both stacks manually with my own loop, I get the correct answer.
Please let me know what exactly is happening.
#include <bits/stdc++.h>
using namespace std;

template<typename T>
class Queue {
    stack<T> A; // non-empty stack
    stack<T> B;
public:
    void push(T x) { // O(1)
        A.push(x);
    }

    void pop() { // O(n)
        if (A.empty())
            return;
        while (A.size() > 1) {
            T element = A.top();
            A.pop();
            B.push(element);
        }
        // 1 element remaining in A
        A.pop();
        // swap(A, B); // so that A is the non-empty stack
        // using swap() in this case was giving wrong answer
        while (!B.empty()) {
            T element = B.top();
            B.pop();
            A.push(element);
        }
    }

    int front() { // O(n)
        while (A.size() > 1) {
            T element = A.top();
            A.pop();
            B.push(element);
        }
        T element = A.top();
        A.pop();
        B.push(element);
        while (!B.empty()) {
            T element = B.top();
            B.pop();
            A.push(element);
        }
        return element;
    }

    int size() {
        return A.size() + B.size();
    }

    bool empty() {
        return size() == 0;
    }
};

int main()
{
    Queue<int> q;
    q.push(2);
    q.push(3);
    q.push(4);
    q.pop();
    q.push(15);
    while (!q.empty()) {
        cout << q.front() << " ";
        q.pop();
    }
    return 0;
}
One more thing, I implemented Stack using two Queues before this, and there swap() was giving correct answer.
When you swap your stacks manually, you do it by taking one element off of the top of one stack and putting it on top of the other. This will reverse the order of elements in the stack.
When you use
std::swap to swap the two stacks, the containers are simply exchanged as-is, so each keeps its internal order. Because the elements were pushed onto B in reverse, A ends up holding them in reversed order after the swap. By using another manual "swap", you re-reverse all the elements and the original order is restored (but with the bottom element removed, since you don't add it to the
B stack during the swap).
Source: https://codeutility.org/error-in-using-swap-stl-while-implementing-queue-using-two-stacks/
Who doesn't know the Outlook Navigation Pane on the left side of the Outlook screen? Do you want to use the Navigation Pane in your project for free? Read on.
There is a lot of junk on the .NET market: small study room projects being sold as large toolkits, providing a bunch of crappy controls for a lot of money. To extend my portfolio and to give the open source community something back, I decided to make this control available free of charge. This control is also a note to control vendors: take your customers seriously and provide valuable controls and support for their money. So enough preaching, let's get cracking.
I decided to create this control for Windows Forms to show the potential of Windows Forms. Windows Forms is not dead: it performs better and is responsive. WPF has a way to go. Besides that, you need a designer to design your controls, otherwise you'll end up with a Christmas tree.
In this section, I'll briefly explain the basic objects of the Navigation Pane.
The Navigation Pane consists of bands, buttons and groups. The bands (2) are the basic objects the Pane consists of. Each band is linked one to one to a button (1). The band is the container that comes to the front when the user clicks on one of the buttons. The band contains the other controls, like the group (3). The bands can be extended with groups, which can be compared to the groups in Outlook. The group is a control container which can be collapsed and expanded.
A button has three states: normal, when it's hovered by the mouse, and when it's the active button or when the button is down.
A button is linked one to one to a band. If a band is added to the control, a new button appears as well. It's not possible to add buttons without adding a band.
A group has two states: either it's expanded and all child controls are visible, or it's collapsed and only the header is visible.
The main class of the control (NaviBar) contains barely more than two collections holding the buttons and the bands. The most important intelligence is in the layout classes. The only thing the main class does is decide when the layout process should be invoked. This is the layout engine principle, explained in more depth here on MSDN.
NaviBar
The layout process decides which control should be visible, active, at what position, and so on. The advantage of this principle is that it's really easy to implement a different layout without too much hassle, because the main control only knows that there is a layout; what it exactly does is not the main control's concern. As long as it looks nice, like the toolbox in Visual Studio or like the Navigation Pane in Outlook, or some other fancy view.
Separate classes are responsible for drawing the controls. The main class delegates like a lazy manager. The renderers are only responsible for drawing the objects, the colours are coming from separate classes, the colortables. If you are in a good mood, you can use flashy bright colors with a few lines of coding:
NaviButtonRenderer.ColorTable = FlashyColorTable
This elegant principle is borrowed from the default .NET menus which come with Visual Studio.
Here is a brief list of the features. To see the whole package, see the demo project.
It’s possible to present the Navigation Pane in compact mode. Click on one of the small buttons on the top right of the header of the control. You can also collapse the Navigation Pane by setting the NaviBar.Collapsed property to true. If the end-user clicks on the collapsed bar, the band will popup on top of the other controls.
NaviBar.Collapsed
true
The Navigation Pane comes in six different flavors. Office 2007 is the main layout, and for those who are stuck in the past, there is the Office 2003 look and feel. If you want to create a new color style, you need to create an override of the NaviColorTable class.
NaviColorTable
This Navigation Pane is among the few that support right-to-left layout. Not much work, when the layout has been defined in separate classes. It's possible that not all features behave as they should. Perhaps an Arabic-speaking user would like to test it and make me happy with the results?
When there is not enough space for the small overflow buttons, the buttons will appear as menuitems:
It’s possible for the end user to customize the layout, such as the ordering or the visibility of the buttons. It’s possible to save the layout settings in an external data file. The details which are saved are the ordering of the buttons, the visibility of the buttons, the amount of visible buttons, the position of the splitter and whether the Pane is collapsed or not.
The settings are stored in the Settings property in the NaviBar class. To save these settings to a file, add a reference to Guifreaks.NavigationBar.XmlSerializers.dll. This DLL uses XML Serialization and is generated with sgen.exe for performance considerations. You as a developer are responsible for saving the settings to a data file of your preference. Here is a sample of saving the settings to an XML file.
Settings
if (saveFileDialogSettings.ShowDialog() == DialogResult.OK)
{
   try
   {
      string fileName = saveFileDialogSettings.FileName;
      NaviBarSettingsSerializer serial = new NaviBarSettingsSerializer();
      using (TextWriter w = new StreamWriter(fileName))
      {
         serial.Serialize(w, naviBar1.Settings);
      }
   }
   catch (Exception ex)
   {
      MessageBox.Show(ex.Message);
   }
}
You can use the same DLL to load the settings. After loading the settings, use the ApplySettings method in the NaviBar class to activate the settings.
ApplySettings
NaviBar
if (openFileDialogSettings.ShowDialog() == DialogResult.OK)
{
   try
   {
      string fileName = openFileDialogSettings.FileName;
      NaviBarSettingsSerializer serial = new NaviBarSettingsSerializer();
      using (StreamReader reader = new StreamReader(fileName))
      {
         NaviBarSettings settings = serial.Deserialize(reader) as NaviBarSettings;
         if (settings != null)
         {
            naviBar1.Settings = settings;
            naviBar1.ApplySettings();
         }
      }
   }
   catch (Exception ex)
   {
      MessageBox.Show(ex.Message);
   }
}
The integration with Visual Studio is usually the part that comes at the end of the list, the part I'd rather forget. But it's very important: integration with the document outline window, for example. Another design-time feature is that clicking on a button activates the band and brings it to the front.
The Control comes with an installer which automatically adds a tab and toolboxitems in Visual Studio. If you want to have an installable package, go to guifreaks.net.
Here are a few basic code samples that should get you up and running fast.
While you are in the designer, you can select the group from the toolbox and drag it into the band. Here is a code sample for when you want to add a group from source:
NaviGroup group = new NaviGroup();
group.Text = "test";
group.Expanded = true;
group.Dock = DockStyle.Top;
naviBand3.Controls.Add(group);
You can add a new band to the Navigation Pane using the NaviBar.Bands property. The small image appears when the button is presented in a horizontal way on the bottom. The large image appears when the buttons are presented horizontally as large buttons.
NaviBar.Bands
NaviBand band = new NaviBand();
band.Text = "NaviBand" + (naviBar1.Bands.Count + 1).ToString();
band.SmallImage = Properties.Resources.bookmark1;
band.LargeImage = Properties.Resources.bookmark;
naviBar1.Bands.Add(band);
The layout can be set using the NaviBar.LayoutStyle property.
NaviBar.LayoutStyle
naviBar1.LayoutStyle = NaviLayoutStyle.Office2003Silver;
Making a particular band the active band is fairly simple:
naviBar1.ActiveBand = naviBar1.Bands[0];
I recommend using the latest release and the installer. The latest release's installer and source code can be found at.
This article, along with any associated source code and files, is licensed under The Creative Commons Attribution-ShareAlike 2.5 License
namespace Guifreaks.Navisuite
{
   [ToolboxItem(false)]
   public class NaviCheckedListBox : CheckedListBox
   {
      int checkBoxWidth = 18;
      //bool checkHandled = false;
      bool itemChecked;

      protected override void OnItemCheck(ItemCheckEventArgs ice)
      {
         base.OnItemCheck(ice);
         //if (checkHandled)
         //{
         itemChecked = !GetItemChecked(ice.Index);
         if (itemChecked)
            ice.NewValue = CheckState.Checked;
         else
            ice.NewValue = CheckState.Unchecked;
         //}
      }

      protected override void OnMouseDown(MouseEventArgs e)
      {
         base.OnMouseDown(e);
         //checkHandled = false;
         if (e.Button == MouseButtons.Left)
         {
            int index = -1;
            for (int i = 0; i < Items.Count; i++)
            {
               if (GetItemRectangle(i).Contains(e.X, e.Y))
               {
                  index = i;
                  break;
               }
            }
            if (index != -1 && e.X <= checkBoxWidth)
            {
               itemChecked = !GetItemChecked(index);
               //checkHandled = true;
               SetItemChecked(index, itemChecked);
            }
         }
      }
   }
}
foreach (NaviBand band in naviBar1.Bands)
   band.VisibleChanged += new EventHandler(band_VisibleChanged);

void band_VisibleChanged(object sender, EventArgs e)
{
   if (!((NaviBand)sender).Visible)
      naviBar1.ActiveBand = naviBar1.Bands[0];
}
public int IndexOf(NaviBand value)
{
   return ((IList)innerList).IndexOf(value);
}
if (naviBar1.NaviLayoutEngine != null && naviBar1.NaviLayoutEngine is NaviLayoutEngineOffice)
{
   ((NaviLayoutEngineOffice)naviBar1.NaviLayoutEngine).ClosePopup();
}
int i = 2;
foreach (Category categoria in categorias)
{
   NaviBand band = new NaviBand();
   band.Text = categoria.nome;
   band.TabIndex = i;
   i++;
   nav.Controls.Add(band);
}
nav.Bands.Sort(new NaviBandOrderComparer());
Source: http://www.codeproject.com/Articles/43181/A-Serious-Outlook-Style-Navigation-Pane-Control?msg=4498536
SPOPS::Manual::Object - Shows how you interact with SPOPS objects.
This section of the SPOPS manual should be of interest to users and developers, since it describes how SPOPS objects are used. Note that all examples here assume the SPOPS class has already been created -- for more on this see SPOPS::Manual::Configuration and SPOPS::Manual::CodeGeneration for more information about that process.
How better to start off than with a simple example? Here we get values from CGI.pm, set the values into a new SPOPS object and save it:
my $q = new CGI;
my $obj = MyUserClass->new();
foreach my $field ( qw( f_name l_name birthdate ) ) {
    $obj->{ $field } = $q->param( $field );
}
my $object_id = eval { $obj->save };
if ( $@ ) {
    ... report error information ...
}
else {
    warn " Object saved with ID: $obj->{object_id}\n";
}
You can then display this object's information from a later request:
my $q = new CGI;
my $object_id = $q->param( 'object_id' );
my $obj = MyUserClass->fetch( $object_id );
print "First Name: $obj->{f_name}\n",
      "Last Name: $obj->{l_name}\n",
      "Birthday: $obj->{birthdate}\n";
To display other information from the same object, like related objects:
my $user_group = $obj->group;
print "Group Name: $user_group->{name}\n";
And you can fetch batches of objects at once based on arbitrary criteria:
my $q = new CGI;
my $last_name = $q->param( 'last_name' );
my $user_list = MyUserClass->fetch_group({ where => 'l_name LIKE ?',
                                           value => [ "%$last_name%" ],
                                           order => 'birthdate' });
print "Users with last name having: $last_name\n";
foreach my $user ( @{ $user_list } ) {
    print "  $user->{f_name} $user->{l_name} -- $user->{birthdate}\n";
}
This version of SPOPS uses a tie interface to get and set the individual data values. You can also use the more traditional OO
get and
set operators, but most people will likely find the hashref interface easier to deal with. It also means you can interpolate data into strings: bonus!
The tie interface allows the most common operations -- fetch data and put it into a data structure for later use -- to be done very easily. It also hides much of the complexity behind the object for you so that most of the time you are dealing with a simple hashref.
However, the tie interface also allows us to give behaviors to the SPOPS object that are executed transparently with every get or set of a value. For instance, if you use strict field checking (example below), we can catch any property name misspellings or wrong names being used for properties. We can also track property state as necessary so we can know whether an object has changed or not since it was created or fetched. Property values can also be lazy-loaded.
In addition to getting the data for an object through the hashref method, you can also get to the data with accessors named after the fields.
For example, given the fields:
$user->{f_name}
$user->{l_name}
$user->{birthday}
You can call the following to retrieve the data:
$user->f_name();
$user->l_name();
$user->birthday();
And to the following to modify the data:
$user->f_name( 'Ferris' );
$user->l_name( 'Bueller' );
$user->birthday( '1970-02-14' );
Since the accessor and mutator share a method the mutator needs to know whether to do its job. It does this by testing the first parameter passed in for definedness. Most of the time this is fine, but what happens when you want to clear out a value like this?
$user->m_name( undef );
This won't do what you think -- since the first parameter is undefined it will simply act as an accessor.
To clear a value, call instead the '_clear' method associated with a fieldname:
$user->m_name_clear;
This explicitly sets the value to undef.
SPOPS accomplishes this using AUTOLOAD, and after the first call it automatically creates a subroutine in the namespace of your class to catch successive calls. If you need to, you can modify how these two methods get created by overriding
_internal_create_field_methods(). This takes three arguments: the object being modified, the class to install the routines into, and the fieldname used to create the methods. Don't implement this unless you know what you're doing -- check out the implementation in SPOPS before proceeding down this path, since doing it wrong could create some nasty side effects.
The object tracks whether any changes have been made since it was instantiated and keeps an internal toggle switch. You can query the toggle or set it manually.
$obj->changed();
Returns 1 if there has been change, undef if not.
$obj->has_change();
Sets the toggle to true.
$obj->clear_change();
Sets the toggle to false.
Example:
if ( $obj->changed() ) { my $rv = $obj->save(); }
Note that this can (and should) be implemented within the subclass, so you as a user can simply call:
$obj->save();
And not worry about whether it has been changed or not. If there has been any modification, the system will save it, otherwise it will not.
As of SPOPS 0.53, SPOPS::DBI supports multi-field primary keys. To use it, you just use an arrayref to represent the ID field in the
id() method rather than a string. (Wisenheimers who use an arrayref with one element may be shocked that SPOPS finds this attempt to trick it and sets the value to the single element.)
When using
fetch(), you need to represent the ID as a comma-separated string similar to that returned by
id() in scalar context (see below). For example:
# Configuration
myclass => {
    class => 'My::Customer',
    id    => [ 'entno', 'custno' ],
    ...
},

# Fetch object
my $cust = My::Customer->fetch( "$entno,$custno" );
On finding multiple ID fields, SPOPS::ClassFactory::DBI creates new methods for id(), id_field() and id_clause(). Both id() and id_field() are context-sensitive, and id_clause() returns a clause with multiple atoms.
One at a time:
id( [ $id_value ] )
In list context, returns the values for the ID fields in order. In scalar context, returns the ID values joined by a comma. (This may be configurable in the future.)
my ( $id_val1, $id_val2 ) = $object->id();
my $id_string = $object->id();
$object->id( [ 'value1', 'value2' ] );
id_field()
In list context, returns an n-element list with the ID fieldnames. In scalar context, returns the fieldnames joined by a comma. (This may be configurable in the future.)
my ( $field1, $field2 ) = $object->id_field();
my $field_string = $object->id_field();
id_clause()
Returns a full WHERE clause to find this particular record -- used in UPDATE and DELETE statements. If you're using it as a class method, you need to pass in the ID values as an arrayref or as a comma-separated string as returned by id() in scalar context.
my $where = $obj->id_clause();
my $sql   = "SELECT * FROM foo WHERE $where";

my $where = $obj_class->id_clause( [ $id_val1, $id_val2 ] );
my $sql   = "SELECT * FROM foo WHERE $where";

my $where = $obj_class->id_clause( "$id_val1,$id_val2" );
my $sql   = "SELECT * FROM foo WHERE $where";
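What id_clause() produces for a multi-field key can be sketched in a few lines: one "field = value" atom per key column, joined with AND. This Python sketch is illustrative only (real code must quote values through the database driver), and the table and field names are made up:

```python
# Build a WHERE clause from parallel lists of key fields and values.
# Naive quoting -- for illustration of the "multiple atoms" idea only.

def id_clause(table, id_fields, id_values):
    atoms = [
        f"{table}.{field} = '{value}'"
        for field, value in zip(id_fields, id_values)
    ]
    return " AND ".join(atoms)

where = id_clause("customer", ["entno", "custno"], ["12", "1055"])
print(where)
# customer.entno = '12' AND customer.custno = '1055'
```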
As of version 0.40, SPOPS supports lazy loading of objects. This means you do not have to load the entire object at once.
To use lazy loading, you need to specify one or more 'column groups', each of which is a logical grouping of properties to fetch. Further, you need to specify which group of properties to fetch when you run a 'fetch' or 'fetch_group' command. SPOPS will fetch only those fields and, as long as your implementing class has a subroutine for performing lazy loads, will load the other fields only on demand.
For example, say we have an object representing an HTML page. One of the most frequent uses of the object is to participate in a listing -- search results, navigation, etc. When we fetch the object for listing, we do not want to retrieve the entire page -- it is hard on the database and takes up quite a bit of memory.
So when we define our object, we define a column group called 'listing' which contains the fields we display when listing the objects:
$spops = {
    html_page => {
        class        => 'My::HTMLPage',
        isa          => [ qw/ SPOPS::DBI::Pg SPOPS::DBI / ],
        field        => [ qw/ page_id location title author content / ],
        column_group => { listing => [ qw/ location title author / ] },
        ...
    },
};
And when we retrieve the objects for listing, we pass the column group name we want to use:
my $page_list = My::HTMLPage->fetch_group({ order        => 'location',
                                            column_group => 'listing' });
Now each object in @page_list has the fields 'page_id', 'location', 'title' and 'author' filled in, but not 'content', even though 'content' is defined as a field in the object. The first time we try to retrieve the 'content' field, SPOPS will load the value for that field into the object behind the scenes.
foreach my $page ( @{ $page_list } ) {

    # These properties are in the fetched object and are not
    # lazy-loaded

    print "Title:  $page->{title}\n",
          "Author: $page->{author}\n";

    # When we access lazy-loaded properties like 'content', SPOPS goes
    # and retrieves the value for each object property as it's
    # requested.

    if ( $page->{title} =~ /^OpenInteract/ ) {
        print "Content\n\n$page->{content}\n";
    }
}
Obviously, you want to make sure you use this wisely, otherwise you will put more strain on your database than if you were not using lazy loading. The example above, for instance, is a good use since we might be using the 'content' property for only a few objects. But it would be a poor use if we did not have the if statement, or if every 'title' began with 'OpenInteract', since the 'content' property would be retrieved anyway.
See SPOPS::Manual::Serialization for how to implement lazy loading for your objects.
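The mechanics of a lazy load can be sketched as: serve a field from the fetched data if present, otherwise call back into the storage layer the first time the field is requested. A minimal Python analogue, with a hypothetical loader callback standing in for the database call:

```python
# Fields outside the fetched column group are retrieved only on
# first access, then cached so later accesses cost nothing extra.

class LazyRecord:
    def __init__(self, fetched, loader):
        self._data = dict(fetched)   # columns from the initial SELECT
        self._loader = loader        # callback that hits the database
        self.loads = 0               # count of lazy fetches (for demo)

    def __getitem__(self, field):
        if field not in self._data:          # not in the column group
            self._data[field] = self._loader(field)
            self.loads += 1
        return self._data[field]

page = LazyRecord(
    {"title": "Intro", "author": "cw"},
    loader=lambda f: f"<{f} from db>",
)
print(page["title"])    # already fetched, no extra query
print(page["content"])  # triggers exactly one lazy load
print(page.loads)       # 1
```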
As of version 0.50, SPOPS has the ability to make an object look like another object, or to put a prettier face on existing data.
In your configuration, just specify:
field_map => { new_name => 'existing_name', ... }
For example, you might need to make your user objects stored in an LDAP directory look like user objects stored in a DBI database. You could say:
field_map => { 'last_name'  => 'sn',
               'first_name' => 'givenname',
               'password'   => 'userpassword',
               'login_name' => 'uid',
               'email'      => 'mail',
               'user_id'    => 'cn' }
So, despite having entirely different schemas, the following would print out equivalent information:
sub display_user_data {
    my ( $user ) = @_;
    return <<INFO;
ID:    $user->{user_id}
Name:  $user->{first_name} $user->{last_name}
Login: $user->{login_name}
Email: $user->{email}
INFO
}

print display_user_data( $my_ldap_user );
print display_user_data( $my_dbi_user );
Another use might be to represent properties in a different language.
Note that you can have more than one new field pointing to the same old field.
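The field_map idea reduces to a one-way name lookup before each field access. A tiny Python sketch (illustrative names only) -- note that two "new" names can target the same stored field:

```python
# Redirect reads of a "new" name to the underlying stored name;
# unmapped names pass through untouched.

FIELD_MAP = {
    "last_name": "sn",
    "surname": "sn",       # two new names -> one stored field
    "login_name": "uid",
}

def read_field(record, name):
    return record[FIELD_MAP.get(name, name)]

ldap_record = {"sn": "Winters", "uid": "cwinters"}
print(read_field(ldap_record, "last_name"))  # Winters
print(read_field(ldap_record, "surname"))    # Winters
print(read_field(ldap_record, "uid"))        # cwinters (unmapped)
```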
In some implementations (notably SPOPS::DBI), you can alter the value of a field before it gets set in the object. This can be a useful (if sometimes non-portable) way of doing transparent data formatting for all objects. And this method is usually faster than just using Perl, which is an added bonus.
For instance, maybe you're using MySQL and you want to take advantage of its date-formatting capabilities. You can tell SPOPS to use them in one of two ways.
First, you can specify the information in your object configuration:
my $config = {
    myobject => {
        class       => 'My::SPOPS',
        field       => [ qw/ my_id my_name my_date / ],
        field_alter => { my_date => "DATE_FORMAT( my_date, '%Y/%m/%d %I:%i %p' )" },
        ...,
    },
};
Second, you can pass the information in on a per-object basis:
my $alter  = { my_date => "DATE_FORMAT( my_date, '%Y/%m/%d %I:%i %p' )" };
my $object = My::SPOPS->fetch( $object_id, { field_alter => $alter } );
Both will have exactly the same effect.
But how would you do the same thing in plain Perl and SPOPS, without leaning on the database? You would likely create a post_fetch rule that did whatever data manipulation you wanted:
use Class::Date;

sub ruleset_add {
    my ( $class, $rs_table ) = @_;
    push @{ $rs_table->{post_fetch_action} }, \&manipulate_date;
    return ref $class || $class;
}

sub manipulate_date {
    my ( $self, $p ) = @_;
    return 1 unless ( $self->{start_date} );
    my $start_date_object = Class::Date->new( $self->{start_date} );
    local $Class::Date::DATE_FORMAT = '%Y/%m/%d %I:%M %p';
    $self->{start_date} = "$start_date_object";
}
See SPOPS::Manual::ObjectRules for more info on creating rulesets and what you can do with them.
Some data storage backends -- like LDAP -- can store multiple values for a single field. As of version 0.50, SPOPS can do the same.
All you need to do is specify in your configuration which fields should be multivalued:
multivalue => [ 'field1', 'field2' ]
Thereafter you can access them as below (more examples in SPOPS::Tie):
my $object = My::Object->new;

# Set field1 to [ 'a', 'b' ]
$object->{field1} = [ 'a', 'b' ];

# Replace the value of 'a' with 'z'
$object->{field1} = { replace => { a => 'z' } };

# Add the value 'c'
$object->{field1} = 'c';

# Find only the ones I want
my @ones = grep { that_i_want( $_ ) } @{ $object->{field1} };
Note that the value returned from a field access to a multivalue field is always an array reference. If there are no values, the reference is empty.
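The assignment semantics just shown -- a list replaces all values, a 'replace' hashref maps old values to new ones, a plain scalar appends -- can be mimicked in a short Python sketch. This is an analogue of the described behaviour, not of SPOPS::Tie itself:

```python
# Interpret an assignment to a multivalue field:
# list -> wholesale replace; {'replace': ...} -> map old to new;
# anything else -> append as a single value.

def set_multivalue(current, value):
    if isinstance(value, list):
        return list(value)                        # wholesale replace
    if isinstance(value, dict) and "replace" in value:
        repl = value["replace"]
        return [repl.get(v, v) for v in current]  # swap matching values
    return current + [value]                      # append a scalar

field = []
field = set_multivalue(field, ["a", "b"])               # ['a', 'b']
field = set_multivalue(field, {"replace": {"a": "z"}})  # ['z', 'b']
field = set_multivalue(field, "c")                      # ['z', 'b', 'c']
print(field)
```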
If you ask, SPOPS will ensure that all get and set accesses are checked against the fields the object should have. You ask by setting the configuration option 'strict_field'. For instance:
$spops = {
    user => {
        class        => 'My::User',
        isa          => [ qw/ SPOPS::DBI::Pg SPOPS::DBI / ],
        field        => [ qw/ first_name last_name login / ],
        strict_field => 1,
        ...
    },
};
...
my $user = My::User->new;
$user->{firstname} = 'Chucky';
would result in a message to STDERR, something like:
Error setting value for field (firstname): it is not a valid field
at my_tie.pl line 9
since you have misspelled the property. Note that SPOPS will continue working and will not 'die' on such an error, just issue a warning.
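The strict_field behaviour -- warn about an unknown field name but keep running -- can be sketched in Python as well (an analogue of the described behaviour, not SPOPS::Tie internals):

```python
# Reject writes to unknown field names with a warning, without dying.

import warnings

class StrictRecord(dict):
    FIELDS = {"first_name", "last_name", "login"}

    def __setitem__(self, key, value):
        if key not in self.FIELDS:
            warnings.warn(
                f"Error setting value for field ({key}): "
                "it is not a valid field"
            )
            return                     # keep going, like SPOPS does
        super().__setitem__(key, value)

user = StrictRecord()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    user["firstname"] = "Chucky"       # misspelled -> warning, ignored
print(len(caught))            # 1
print("firstname" in user)    # False
```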
# Retrieve all themes and print a description

my $themes = eval { $theme_class->fetch_group( { order => 'title' } ) };
if ( $@ ) { ... report error ... }
else {
    foreach my $thm ( @{ $themes } ) {
        print "Theme: $thm->{title}\n",
              "Description: $thm->{description}\n";
    }
}

# Create a new user, set some values and save

my $user = $user_class->new;
$user->{email}      = 'mymail@user.com';
$user->{first_name} = 'My';
$user->{last_name}  = 'User';
my $user_id = eval { $user->save };
if ( $@ ) {
    print "There was an error: $SPOPS::Error::system_msg\n";
}

# Retrieve that same user from the database

my $user_id = $cgi->param( 'user_id' );
my $user    = eval { $user_class->fetch( $user_id ) };
if ( $@ ) { ... report error ... }
else {
    print "The user's first name is: $user->{first_name}\n";
}

# Create a new object with initial values, set another value and save

my $data = MyClass->new({ field1 => 'value1',
                          field2 => 'value2' });
print "The value for field2 is: $data->{field2}\n";
$data->{field3} = 'value3';
eval { $data->save };
if ( $@ ) { ... report error ... }

# Remove the object permanently

eval { $data->remove };
if ( $@ ) { ... report error ... }

# Call arbitrary object methods to get other objects

my $other_obj = eval { $data->call_to_get_other_object() };
if ( $@ ) { ... report error ... }

# Clone the object with an overridden value and save

my $new_data = $data->clone({ field1 => 'new value' });
eval { $new_data->save };
if ( $@ ) { ... report error ... }

# $new_data is now its own hashref of data --
# explore the fields/values in it

while ( my ( $k, $v ) = each %{ $new_data } ) {
    print "$k == $v\n";
}

# Retrieve saved data

my $saved_data = eval { MyClass->fetch( $id ) };
if ( $@ ) { ... report error ... }
else {
    while ( my ( $k, $v ) = each %{ $saved_data } ) {
        print "Value for $k with ID $id is $v\n";
    }
}

# Retrieve lots of objects, display a value and call a
# method on each

my $data_list = eval { MyClass->fetch_group({
                           where => "last_name like 'winter%'" }) };
if ( $@ ) { ... report error ... }
else {
    foreach my $obj ( @{ $data_list } ) {
        print "Username: $obj->{username}\n";
        $obj->increment_login();
    }
}
See SPOPS::Manual for license.
Chris Winters <chris@cwinters.com>
Introduction to Bubble Sort in Java
Bubble sort is one of the most commonly used algorithms for sorting data in Java. Sorting is done by repeatedly comparing adjacent numbers and swapping them into increasing or decreasing order as required. This swapping of elements continues until all the elements are completely sorted in the required order.
The algorithm is called "bubble sort" because smaller elements gradually "bubble" their way toward the start of the array, while larger elements sink toward the end. Let us understand the bubble sort algorithm by taking an example.
Example: Consider an array of numbers [6 1 8 5 3] that need to be arranged in the increasing order.
Bubble sort algorithm works in multiple iterations until it finds that all the numbers are sorted.
Iterations
Below are the iterations performed in bubble sort on this array:
First Iteration
[6 1 8 5 3] – The pass starts by comparing the first two numbers and swapping them if the first is larger. Hence, of 6 and 1, the smaller number 1 is shifted to the left and 6 to the right, giving [1 6 8 5 3]. Next 6 and 8 are compared and left in place, then 8 and 5 are swapped ([1 6 5 8 3]), and finally 8 and 3 are swapped. The first iteration therefore ends with [1 6 5 3 8], with the largest number, 8, in its final position.
Second Iteration
Since the numbers are still not completely in increasing order, the program goes for the second iteration.
[1 6 5 3 8] – The comparison again starts from the first two digits of the result of the first iteration. Numbers 1 and 6 are compared and the order retained, since 1 is smaller than 6.
[1 6 5 3 8] – Next, numbers 6 and 5 are compared. Since 6 is larger than 5, the two are swapped.
[1 5 6 3 8] – Here the comparison takes place between numbers 6 and 3. Number 3 is shifted to the left as it is smaller than 6.
[1 5 3 6 8] – Next, numbers 6 and 8 are compared with each other. The order is retained as it is already as expected.
[1 5 3 6 8] – This is the result after the second iteration. The digits are still not completely arranged in increasing order: we still need to exchange numbers 5 and 3. Hence the program goes for the third iteration.
Third Iteration
[1 5 3 6 8] – The third iteration starts by comparing the first two digits, 1 and 5. Since the order is as expected, it is retained.
[1 5 3 6 8] – Next, the adjacent numbers 5 and 3 are compared. Since 5 is larger than 3, it is shifted to the right.
[1 3 5 6 8] – The iteration goes on to compare 5 and 6, then 6 and 8. Since these are already in the required order, they are left in place.
[1 3 5 6 8] – Finally the iteration stops: the program has traversed the array comparing each adjacent pair and found all the digits in increasing order.
Since there were only 5 elements in this array to be sorted, it took only 3 iterations in total. As the number of elements in the array increases, the number of iterations also increases.
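The pass-by-pass states walked through above can be reproduced mechanically. This short Python sketch (Python is used just for brevity) snapshots the array after each outer pass; note that the final pass makes no swaps and merely confirms the order:

```python
# Bubble sort that records the array state after every outer pass.

def bubble_sort_passes(a):
    a = list(a)
    states = []
    for i in range(len(a) - 1):            # one pass per outer step
        for j in range(len(a) - 1 - i):    # compare adjacent pairs
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
        states.append(list(a))             # snapshot after the pass
    return states

for state in bubble_sort_passes([6, 1, 8, 5, 3]):
    print(state)
# [1, 6, 5, 3, 8]
# [1, 5, 3, 6, 8]
# [1, 3, 5, 6, 8]
# [1, 3, 5, 6, 8]
```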
Bubble Sort Implementation using Java
Below is the Java code which is the implementation of the Bubble sort algorithm. (Note that the first position of an array in Java starts at 0 and continues in increments of 1 i.e. array[0], array[1], array[2] and it continues.)
Code:
public class BubbleSort {
static void bubbleSort(int[] arraytest) {
int n = arraytest.length; //length of the array is initialized to the integer n
int temp = 0; //A temporary variable called temp is declared as an integer and initialized to 0
for(int i=0; i < n; i++){ // first for loop performs multiple iterations
for(int j=1; j < (n-i); j++){
if(arraytest[j-1] > arraytest[j]){ // if statement compares the adjacent numbers
// swaps the numbers
temp = arraytest[j-1]; // assigns the greater number to temp variable
arraytest[j-1] = arraytest[j]; // shifts the lesser number to the previous position
arraytest[j] = temp; // bigger number is then assigned to the right hand side
}
}
}
}
public static void main(String[] args) {
int arraytest[] ={23,16,3,42,75,536,61}; // defining the values of array
System.out.println("Array Before Doing Bubble Sort");
for(int i=0; i < arraytest.length; i++){ // for loop used to print the values of array
System.out.print(arraytest[i] + " ");
}
System.out.println();
bubbleSort(arraytest); // array elements are sorted using bubble sort function
System.out.println("Array After Doing Bubble Sort");
for(int i=0; i < arraytest.length; i++){
System.out.print(arraytest[i] + " "); // for loop to print output values from array
}
}
}
Output:

Array Before Doing Bubble Sort
23 16 3 42 75 536 61
Array After Doing Bubble Sort
3 16 23 42 61 75 536
Advantages and Disadvantages of Bubble Sort in Java
Below are the different advantages and disadvantages of bubble sort in java:
Advantages
- The code is very easy to write and to understand. Typically takes only a few minutes.
- Implementation is also very easy.
- Bubble sort sorts the numbers in place, so it needs almost no additional memory.
Disadvantages
- This algorithm is not suitable for large datasets, as the comparisons take a lot of time. The time it takes to sort the input grows quadratically with the number of elements.
- O(n^2) is the average and worst-case complexity of bubble sort, and O(n) is the best-case complexity (the best case being when the elements are already sorted), where n is the number of elements.
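The quadratic growth is easy to verify by counting comparisons with the same loop bounds as the Java code above: a full run over n elements always performs n*(n-1)/2 comparisons. A small Python sketch:

```python
# Count the comparisons an unoptimized bubble sort performs,
# using the same loop bounds as the Java implementation above.

def count_comparisons(n):
    count = 0
    for i in range(n):
        for j in range(1, n - i):   # inner loop: j = 1 .. n-i-1
            count += 1
    return count

for n in (5, 10, 20):
    print(n, count_comparisons(n), n * (n - 1) // 2)
# 5 10 10
# 10 45 45
# 20 190 190
```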
Real-Time Applications
Since bubble sort is capable of detecting minute errors in sorting, it is used in computer graphics. It is also used in the polygon-filling algorithm, where the vertices lining the polygon need to be sorted.
Conclusion
In this article, we saw how the bubble sort algorithm works and how it can be implemented using Java programming. Bubble sort is a stable algorithm that can be easily implemented for comparatively small datasets. It is a comparison-based algorithm and is popular with novices due to its simplicity.
Recommended Articles
This is a guide to Bubble Sort in Java. Here we discussed the multiple iterations performed in bubble sort and its code implementation, along with its advantages and disadvantages.
in reply to use common::sense;
Because you didn't meditate. And you didn't even make it clear that you were seeking wisdom from others. You just pointed and gesticulated.
As for the module, I was struck with the author's need to include the quotes. It struck me as a demonstration of "I'm so arrogant, I'll include the quotes decrying my arrogance to demonstrate the input I was able to effectively ignore due to my arrogance." Meh.
The module name sucks. All-lowercase. Invents a new top-level namespace that sucks as a top-level namespace. The name maxes out on cuteness at the expense of giving the audience a good clue as to the purpose of the module.
If it were even called Devel::CommonSense, then I'd have little criticism to offer.
Update:
It does use less memory than strict/warnings :)
Actually, for the vast majority of the Perl world, it uses more memory, because it doesn't prevent strict/warnings from being loaded. So, in the author's mindset, it kills trees and kittens... perhaps just not his trees/kittens. Lousy justification.
The customization of warnings is the interesting part. Not something I'll use myself for several reasons.
I disagree that making warnings fatal is a great idea. Even if it magically only makes bugs fatal, I expect it to be a mixed blessing. Fatality can be useful in making bugs extra obvious, forcing them to be fixed (or at least worked around). But warnings are not particularly hard to notice so the added benefit of fatality during development / testing is not clear-cut. Meanwhile, continuing on past a subtle bug can be useful once in Production. The problem with warnings is that they tend to depend very much on context so fatal errors like those exposed by strict.pm are almost always found during development and testing while warnings are much more likely to survive to Production, IME. And a fatal warning is almost always a serious impact in Production while the subtle bug is often the better choice. But it is a good experiment to try.
Also, the module's role is unsettled. If I started using it now, an upgrade to it could easily take a warning that was previously being ignored and instead make it fatal.
The horrid name and the section devoted to demonstrating the author's arrogance are also among the reasons I'm unlikely to use it, of course.
Update: Oh, and I also often (usually, these days) program in a mode where 'undef' warnings indicate a mistake. Yes, the signal-to-noise ratio of that particular warning is likely the worst of any Perl warning. If you didn't consciously choose to use 'undef' as an indicator of "I forgot to initialize that", then it is an annoyingly useless warning. Even when you've made that choice, it does its best to tempt you to unmake that choice.
But even more so I wish I could use autovivification as an indicator of a mistake. I often (usually, these days) program in a mode where autovivification is a mistake. I just can't convince Perl to tell me when it resorts to autovivification. I've long wanted a "no autovivification;" pragma.
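For readers who want a concrete picture of the autovivification being complained about: in Perl, a mere *read* of a nested key silently creates the intermediate structure. Python's collections.defaultdict reproduces the behaviour closely (this analogy is mine, not the poster's):

```python
# A self-vivifying nested dict: reading h["a"]["b"] creates h["a"],
# just as reading $h->{a}{b} autovivifies $h->{a} in Perl.

from collections import defaultdict

def tree():
    return defaultdict(tree)

h = tree()
_ = h["a"]["b"]          # a mere read creates the intermediate key
print("a" in h)          # True -- 'a' was autovivified

plain = {}
print("a" in plain)      # False -- a plain dict would raise KeyError
```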
Actually, one should have the choice to make it always fatal or just make it always a warning, including controlling whether using undef as a reference in a non-lvalue context should be considered a fatal "symbolic reference" under "use strict 'refs'" (as the module's author rightly complains about).
I guess that is another place where I disagree with the module's author. Although I sometimes dislike that "use strict 'refs'" makes that particular form of autovivification fatal, I don't consider it worth throwing out the "use strict 'refs'" baby to get rid of that particular autovivification bathwater.
- tye
Core Graphics Tutorial Part 3: Patterns and Playgrounds
Welcome back to the third and final part of the Core Graphics tutorial series! Flo, your water drinking tracking app, is ready for its final evolution, which you’ll make happen with Core Graphics.
In part one, you drew three custom-shaped controls with UIKit. Then in the part two, you created a graph view to show the user’s water consumption over a week, and you explored transforming the context transformation matrix (CTM).
In this third and final part of our Core Graphics tutorial, you’ll take Flo to its final form. Specifically, you’ll:
- Create a repeating pattern for the background.
- Draw a medal from start to finish to award the users for successfully drinking eight glasses of water a day.
If you don’t have it already, download a copy of the Flo project from the second part of this series.
Background Repeating Pattern
Your mission in this section is to use UIKit’s pattern methods to create this background pattern:
Note: If you need to optimize for speed, then work through Core Graphics Tutorial: Patterns which demonstrates a basic way to create patterns with Objective-C and Core Graphics. For most purposes, like when the background is only drawn once, UIKit’s easier wrapper methods should be acceptable.
Go to File\New\File… and select the iOS\Source\Cocoa Touch Class template to create a class named BackgroundView with a subclass of UIView. Click Next and then Create.
Go to Main.storyboard, select the main view of ViewController, and change the class to BackgroundView in the Identity Inspector.
Set up BackgroundView.swift and Main.storyboard so they are side-by-side, using the Assistant Editor.
Replace the code in BackgroundView.swift with:
import UIKit

@IBDesignable
class BackgroundView: UIView {

  //1
  @IBInspectable var lightColor: UIColor = UIColor.orange
  @IBInspectable var darkColor: UIColor = UIColor.yellow
  @IBInspectable var patternSize: CGFloat = 200

  override func draw(_ rect: CGRect) {
    //2
    let context = UIGraphicsGetCurrentContext()!
    //3
    context.setFillColor(darkColor.cgColor)
    //4
    context.fill(rect)
  }
}
The background view of your storyboard should now be yellow. More detail on the above code:
- lightColor and darkColor have @IBInspectable attributes so it's easier to configure background colors later on. You're using orange and yellow as temporary colors, just so you can see what's happening. patternSize controls the size of the repeating pattern. It's initially set to large, again so it's easy to see what's happening.
- UIGraphicsGetCurrentContext() gives you the view's context and is also where draw(_ rect:) draws.
- Use the Core Graphics method setFillColor() to set the current fill color of the context. Notice that you need to use CGColor, a property of darkColor, when using Core Graphics.
- Instead of setting up a rectangular path, fill() fills the entire context with the current fill color.
You’re now going to draw these three orange triangles using UIBezierPath(). The numbers correspond to the points in the following code:
Still in BackgroundView.swift, add this code to the end of draw(_ rect:):
let drawSize = CGSize(width: patternSize, height: patternSize)

//insert code here

let trianglePath = UIBezierPath()
//1
trianglePath.move(to: CGPoint(x: drawSize.width/2, y: 0))
//2
trianglePath.addLine(to: CGPoint(x: 0, y: drawSize.height/2))
//3
trianglePath.addLine(to: CGPoint(x: drawSize.width, y: drawSize.height/2))

//4
trianglePath.move(to: CGPoint(x: 0, y: drawSize.height/2))
//5
trianglePath.addLine(to: CGPoint(x: drawSize.width/2, y: drawSize.height))
//6
trianglePath.addLine(to: CGPoint(x: 0, y: drawSize.height))

//7
trianglePath.move(to: CGPoint(x: drawSize.width, y: drawSize.height/2))
//8
trianglePath.addLine(to: CGPoint(x: drawSize.width/2, y: drawSize.height))
//9
trianglePath.addLine(to: CGPoint(x: drawSize.width, y: drawSize.height))

lightColor.setFill()
trianglePath.fill()
Notice how you use one path to draw three triangles. move(to:) is just like lifting your pen from the paper when you're drawing and moving it to a new spot.
Your storyboard should now have an orange and yellow image at the top left of your background view.
So far, you’ve drawn directly into the view’s drawing context. To be able to repeat this pattern, you need to create an image outside of the context, and then use that image as a pattern in the context.
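Conceptually, a pattern fill stamps the tile image at every multiple of its size until the target rectangle is covered. A language-neutral sketch of those tiling origins (Python, with a helper name of my own choosing):

```python
# Compute the top-left origin of every tile needed to cover a rect.

def tile_origins(rect_w, rect_h, tile):
    origins = []
    y = 0
    while y < rect_h:
        x = 0
        while x < rect_w:
            origins.append((x, y))
            x += tile
        y += tile
    return origins

print(tile_origins(90, 60, 30))
# [(0, 0), (30, 0), (60, 0), (0, 30), (30, 30), (60, 30)]
```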
Find the following. It's close to the top of draw(_ rect:), but after the initial context calls:
let drawSize = CGSize(width: patternSize, height: patternSize)
Add the following code where it conveniently says Insert code here:
UIGraphicsBeginImageContextWithOptions(drawSize, true, 0.0)
let drawingContext = UIGraphicsGetCurrentContext()!

//set the fill color for the new context
darkColor.setFill()
drawingContext.fill(CGRect(x: 0, y: 0, width: drawSize.width, height: drawSize.height))
Hey! Those orange triangles disappeared from the storyboard. Where’d they go?
UIGraphicsBeginImageContextWithOptions() creates a new context and sets it as the current drawing context, so you’re now drawing into this new context. The parameters of this method are:
- The size of the context.
- Whether the context is opaque — if you need transparency, then this needs to be false.
- The scale of the context. If you’re drawing to a retina screen, this should be 2.0, and if to an iPhone 6 Plus, it should be 3.0. However, this uses 0.0, which ensures the correct scale for the device is automatically applied.
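The scale parameter boils down to simple arithmetic relating points to backing-store pixels. This small sketch (my own illustration, not UIKit API) shows the bitmap width a 200-point context gets at each scale mentioned above:

```python
# A context's backing bitmap is its point size times the screen scale.

def backing_pixels(points, scale):
    return int(points * scale)

for scale in (1.0, 2.0, 3.0):   # non-retina, retina, iPhone 6 Plus
    print(scale, backing_pixels(200, scale))
# 1.0 200
# 2.0 400
# 3.0 600
```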
Then you used UIGraphicsGetCurrentContext() to get a reference to this new context.
You then filled the new context with yellow. You could have let the original background show through by setting the context opacity to false, but it’s faster to draw opaque contexts than it is to draw transparent, and that’s argument enough to go opaque.
Add this code to the end of draw(_ rect:):
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
This extracts a UIImage from the current context. When you end the current context with UIGraphicsEndImageContext(), the drawing context reverts to the view's context, so any further drawing in draw(_ rect:) happens in the view.
To draw the image as a repeated pattern, add this code to the end of draw(_ rect:):
UIColor(patternImage: image).setFill()
context.fill(rect)
This creates a new UIColor by using an image as a color instead of a solid color.
Build and run the app. You should now have a rather bright background for your app.
Go to Main.storyboard, select the background view, and in the Attributes Inspector change the @IBInspectable values to the following:
- Light Color: RGB(255, 255, 242)
- Dark Color: RGB(223, 255, 247)
- Pattern Size: 30
Experiment a little more with drawing background patterns. See if you can get a polka dot pattern as a background instead of the triangles.
And of course, you can substitute your own non-vector images as repeating patterns.
Drawing Images
In the final stretch of this tutorial, you’ll make a medal to handsomely reward users for drinking enough water. This medal will appear when the counter reaches the target of eight glasses.
I know that’s certainly not a museum-worthy piece of art, so please know that I won’t be offended if you improve it, or even take it to the next level by drawing a trophy instead of a medal.
Instead of using @IBDesignable, you'll draw it in a Swift playground, and then copy the code to a UIImageView subclass. Though interactive storyboards are often useful, they have limitations; they only draw simple code, and storyboards often time out when you create complex designs.
In this particular case, you only need to draw the image once when the user drinks eight glasses of water. If the user never reaches the target, there’s no need to make a medal.
Once drawn, it also doesn't need to be redrawn with draw(_ rect:) and setNeedsDisplay().
Time to put the brush to the canvas. You’ll build up the medal view using a Swift playground, and then copy the code into the Flo project when you’ve finished.
Go to File\New\Playground…. Choose the Blank template, click Next, name the playground MedalDrawing and then click Create.
In the new playground window, replace the playground code with:
import UIKit

let size = CGSize(width: 120, height: 200)

UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let context = UIGraphicsGetCurrentContext()!

//This code must always be at the end of the playground
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
This creates a drawing context, just as you did for the patterned image.
Take note of these last two lines; you always need them at the bottom of the playground so you can preview the image in the playground.
Next, in the gray results column click the square button to the right of this code:
let image = UIGraphicsGetImageFromCurrentImageContext()
This will place a preview image underneath the code. The image will update with every change that you make to the code.
It’s often best to do a sketch to wrap your head around the order you’ll need to draw the elements — look at the “masterpiece” I made while conceptualizing this tutorial:
This is the order in which to draw the medal:
- The back ribbon (red)
- The medallion (gold gradient)
- The clasp (dark gold)
- The front ribbon (blue)
- The number 1 (dark gold)
Remember to keep the last two lines of the playground (where you extract the image from the context at the very end), and add this drawing code to the playground before those lines:
First, set up the non-standard colors you need.
//Gold colors
let darkGoldColor = UIColor(red: 0.6, green: 0.5, blue: 0.15, alpha: 1.0)
let midGoldColor = UIColor(red: 0.86, green: 0.73, blue: 0.3, alpha: 1.0)
let lightGoldColor = UIColor(red: 1.0, green: 0.98, blue: 0.9, alpha: 1.0)
This should all look familiar by now. Notice that the colors appear in the right margin of the playground as you declare them.
Add the drawing code for the red part of the ribbon:
//Lower Ribbon
let lowerRibbonPath = UIBezierPath()
lowerRibbonPath.move(to: CGPoint(x: 0, y: 0))
lowerRibbonPath.addLine(to: CGPoint(x: 40, y: 0))
lowerRibbonPath.addLine(to: CGPoint(x: 78, y: 70))
lowerRibbonPath.addLine(to: CGPoint(x: 38, y: 70))
lowerRibbonPath.close()
UIColor.red.setFill()
lowerRibbonPath.fill()
Nothing too new here, just creating a path and filling it. You should see the red path appear in the right hand pane.
Add the code for the clasp:
//Clasp
let claspPath = UIBezierPath(roundedRect: CGRect(x: 36, y: 62, width: 43, height: 20),
                             cornerRadius: 5)
claspPath.lineWidth = 5
darkGoldColor.setStroke()
claspPath.stroke()
Here you make use of UIBezierPath(roundedRect:) with rounded corners by using the cornerRadius parameter. The clasp should draw in the right pane.
Add the code for the medallion:
//Medallion
let medallionPath = UIBezierPath(ovalIn: CGRect(x: 8, y: 72, width: 100, height: 100))
//context.saveGState()
//medallionPath.addClip()
let colors = [darkGoldColor.cgColor,
              midGoldColor.cgColor,
              lightGoldColor.cgColor] as CFArray
let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                          colors: colors,
                          locations: [0, 0.51, 1])!
context.drawLinearGradient(gradient,
                           start: CGPoint(x: 40, y: 40),
                           end: CGPoint(x: 40, y: 162),
                           options: [])
//context.restoreGState()
Notice the commented-out lines. These are there temporarily, to show how the gradient is drawn:
To put the gradient on an angle, so that it goes from top-left to bottom-right, change the end x coordinate of the gradient. Alter the drawLinearGradient() code to:
context.drawLinearGradient(gradient,
                           start: CGPoint(x: 40, y: 40),
                           end: CGPoint(x: 100, y: 160),
                           options: [])
Now uncomment those three lines in the medallion drawing code to create a clipping path to constrain the gradient within the medallion’s circle.
Just as you did when drawing the graph in Part 2 of this series, you save the context’s drawing state before adding the clipping path and restore it after the gradient is drawn so that the context is no longer clipped.
To draw the solid internal line of the medal, use the medallion’s circle path, but scale it before drawing. Instead of transforming the whole context, you’ll just apply the transform to one path.
Add this code after the medallion drawing code:
//Create a transform
//Scale it, and translate it right and down
var transform = CGAffineTransform(scaleX: 0.8, y: 0.8)
transform = transform.translatedBy(x: 15, y: 30)
medallionPath.lineWidth = 2.0
//Apply the transform to the path
medallionPath.apply(transform)
medallionPath.stroke()
This scales the path down to 80 percent of its original size, and then translates the path to keep it centered within the gradient view.
Add the upper ribbon drawing code after the internal line code:
//Upper Ribbon
let upperRibbonPath = UIBezierPath()
upperRibbonPath.move(to: CGPoint(x: 68, y: 0))
upperRibbonPath.addLine(to: CGPoint(x: 108, y: 0))
upperRibbonPath.addLine(to: CGPoint(x: 78, y: 70))
upperRibbonPath.addLine(to: CGPoint(x: 38, y: 70))
upperRibbonPath.close()
UIColor.blue.setFill()
upperRibbonPath.fill()
This is very similar to the code you added for the lower ribbon: making a bezier path and filling it.
The last step is to draw the number one on the medal. Add this code after the upper ribbon code:
//Number One
//Must be NSString to be able to use draw(in:)
let numberOne = "1" as NSString
let numberOneRect = CGRect(x: 47, y: 100, width: 50, height: 50)
let font = UIFont(name: "Academy Engraved LET", size: 60)!
let numberOneAttributes = [
  NSAttributedStringKey.font: font,
  NSAttributedStringKey.foregroundColor: darkGoldColor
]
numberOne.draw(in: numberOneRect, withAttributes: numberOneAttributes)
Here you define an NSString with text attributes, and draw it into the drawing context using draw(in:withAttributes:).
Looking good!
You’re getting close, but it’s looking a little two-dimensional. It would be nice to have some drop shadows.
Shadows
To create a shadow, you need three elements: color, offset (distance and direction of the shadow) and blur.
At the top of the playground, after defining the gold colors but just before the //Lower Ribbon line, insert this shadow code:
//Add Shadow
let shadow: UIColor = UIColor.black.withAlphaComponent(0.80)
let shadowOffset = CGSize(width: 2.0, height: 2.0)
let shadowBlurRadius: CGFloat = 5
context.setShadow(offset: shadowOffset, blur: shadowBlurRadius, color: shadow.cgColor)
That makes a shadow, but the result is probably not what you pictured. Why is that?
When you draw an object into the context, this code creates a shadow for each object.
Ah-ha! Your medal comprises five objects. No wonder it looks a little fuzzy.
Fortunately, it’s pretty easy to fix. Simply group drawing objects with a transparency layer, and you’ll only draw one shadow for the whole group.
Add the code to make the group after the shadow code. Start with this:
context.beginTransparencyLayer(auxiliaryInfo: nil)
When you begin a group you also need to end it, so add this next block at the end of the playground, but before the point where you retrieve the final image:
context.endTransparencyLayer()
Now you’ll have a completed medal image with clean, tidy shadows:
That completes the playground code, and you have a medal to show for it!
Adding the Medal Image to an Image View
Now that you’ve got the code in place to draw a medal (which looks fabulous, by the way), you’ll need to render it into a
UIImageView in the main Flo project.
Switch back to the Flo project and create a new file for the image view.
Click File\New\File… and choose the Cocoa Touch Class template. Click Next, name the class MedalView, and make it a subclass of UIImageView. Click Next, then click Create.
Go to Main.storyboard and add a UIImageView as a subview of Counter View. Select the UIImageView, and in the Identity Inspector change the class to MedalView.
In the Size Inspector, give the Image View the coordinates X=76, Y=147, Width=80, and Height=80:
In the Attributes Inspector, change the Content Mode to Aspect Fit, so that the image automatically resizes to fit the view.
Go to MedalView.swift and add a method to create the medal:
func createMedalImage() -> UIImage {
  print("creating Medal Image")
}
This creates a log so that you know when the image is being created.
Switch back to your MedalDrawing playground, and copy the entire code except for the initial import UIKit line.
Go back to MedalView.swift and paste the playground code into createMedalImage().
At the end of createMedalImage(), add:
return image!
That should squash the compile error.
At the top of the class, add a property to hold the medal image:
lazy var medalImage: UIImage = self.createMedalImage()
The lazy declaration modifier means that the medal image code, which is computationally intensive, only draws when necessary. Hence, if the user never records drinking eight glasses, the medal drawing code will never run.
Add a method to show the medal:
func showMedal(show: Bool) {
  image = show ? medalImage : nil
}
Go to ViewController.swift and add an outlet at the top of the class:
@IBOutlet weak var medalView: MedalView!
Go to Main.storyboard and connect the new MedalView to this outlet.
Go back to ViewController.swift and add this method to the class:
func checkTotal() {
  if counterView.counter >= 8 {
    medalView.showMedal(show: true)
  } else {
    medalView.showMedal(show: false)
  }
}
This shows the medal if you drink enough water for the day.
Call this method at the end of both viewDidLoad() and pushButtonPressed(_:):
checkTotal()
Build and run the application. It should look like this:
In the debug console, you’ll see that the creating Medal Image log only outputs when the counter reaches eight and the medal displays, since medalImage uses a lazy declaration.
Where to Go From Here?
You’ve come a long way in this epic Core Graphics tutorial series. You’ve mastered the basics of Core Graphics: drawing paths, creating patterns and gradients, and transforming the context. To top it all off, you learned how to put it all together in a useful app.
Download the complete version of Flo right here. This version also includes extra sample data and radial gradients to give the buttons a nice UI touch so they respond when pressed.
I hope you enjoyed making Flo, and that you’re now able to make some stunning UIs using nothing but Core Graphics and UIKit! If you have any questions, comments, or you want to hash out how to draw a trophy instead of a medal, please join the forum discussion below.
Team
Each tutorial is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are:
- Author
Andrew Kharchyshyn
- Tech Editor
Alex Curran
- Editor
Chris Belanger
- Final Pass Editor
James Frost
- Team Lead
Andy Obusek
public class Advent5 : IDisposable
{
    private IFileUtil m_file;

    public void Dispose()
    {
        m_file.Delete();
    }

    [Fact]
    public void TestReadOK()
    {
        // Create a test file
        m_file = new FileUtil("SomeFile.txt");
        m_file.Create("CONTENT");

        // Should be able to read file
        Assert.DoesNotThrow(() => { m_file.Read(); });
    }

    [Fact]
    public void TestReadFails()
    {
        // Create a test file
        m_file = new FileUtil("SomeFile.txt");
        m_file.Create("CONTENT");

        m_file.Readable = false;

        // Should NOT be able to read file.
        Assert.Throws<AccessViolationException>(() => { m_file.Read(); });
    }
}
Now we're using xUnit.net's cleanup mechanism, the IDisposable interface. We've still got some redundant code in the setup part of each test; next I'll refactor that.
Hmm. Would it not be better to implement IDisposable on your tests, rather than use a destructor (finalizer)?
Using the destructor, your tear down code will get called non-deterministically, some time after a garbage collection, and I’m assuming xUnit doesn’t force a GC between test runs. So, theoretically, your file might not get deleted if the runner exits before a GC. Or, more relevantly, the file might not (probably won’t?) get deleted between two tests, causing unexpected side effects.
If you use IDisposable, I would expect xUnit to explicitly clean up the class before moving onto the next test.
Does this sound about right?
Cheers
Matt
Yes you are more than correct. I had a brain malfunction writing these tests and was stuck in C++. And you got me before I had the chance to change things… Obviously the finalizer/destructor is wrong. IDisposable is the right way and xUnit.net makes sure Dispose is called between each test.
It is not an excuse but an explanation: I was in a hurry when I wrote this the wrong way, and working mainly in C++ I messed up… But from now on the tests will be (more) correct…
Nice to see that someone is actually reading these tests… 🙂
Will null instanceof SomeClass return false, or throw a NullPointerException?
No, a null check is not needed before using instanceof.
The expression x instanceof SomeClass is false if x is null.
From the Java Language Specification:
“At run time, the result of the instanceof operator is true if the value of the RelationalExpression is not null and the reference could be cast (§15.16) to the ReferenceType without raising a ClassCastException. Otherwise the result is false.”
So if the operand is null, the result is false.
Using a null reference as the first operand with instanceof returns false.
(It takes 1 minute to try it)
Very good question indeed. I just tried for myself.
public class IsInstanceOfTest {
    public static void main(final String[] args) {
        String s;
        s = "";
        System.out.println(s instanceof String);
        System.out.println(String.class.isInstance(s));
        s = null;
        System.out.println(s instanceof String);
        System.out.println(String.class.isInstance(s));
    }
}
Prints
true
true
false
false
JLS / 15.20.2. Type Comparison Operator instanceof
At run time, the result of the instanceof operator is true if the value of the RelationalExpression is not null and the reference could be cast to the ReferenceType without raising a ClassCastException. Otherwise the result is false.
API / Class#isInstance(Object)
If this Class object represents an interface, this method returns true if the class or any superclass of the specified Object argument implements this interface; it returns false otherwise. If this Class object represents a primitive type, this method returns false.
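Both documented behaviors are easy to check in a few lines. The class name below is mine, chosen for illustration:

```java
public class IsInstanceDemo {
    public static void main(String[] args) {
        // null is not an instance of anything
        System.out.println(String.class.isInstance(null));   // false

        // A Class object for a primitive type never matches:
        // the autoboxed 5 is an Integer object, not the primitive int
        System.out.println(int.class.isInstance(5));         // false
        System.out.println(Integer.class.isInstance(5));     // true
    }
}
```

The autoboxing in the last two lines is the subtle part: `isInstance` takes an `Object`, so the literal 5 arrives as an `Integer`, which matches `Integer.class` but never `int.class`.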
No, it’s not. instanceof would return false if its first operand is null.
The Java literal null is not an instance of any class; therefore it cannot satisfy instanceof for any class.
instanceof will return either false or true; therefore <referenceVariable> instanceof <SomeClass> returns false when the referenceVariable value is null.
The instanceof operator does not need explicit null checks, as it does not throw a NullPointerException if the operand is null.
At run time, the result of the instanceof operator is true if the value of the relational expression is not null and the reference could be cast to the reference type without raising a class cast exception.
If the operand is null, the instanceof operator returns false and hence explicit null checks are not required.
Consider the below example,
public static void main(String[] args) {
    if (lista != null && lista instanceof ArrayList) { // Violation: redundant null check
        System.out.println("In if block");
    } else {
        System.out.println("In else block");
    }
}
The correct usage of instanceof is as shown below:
public static void main(String[] args) {
    if (lista instanceof ArrayList) { // Correct way
        System.out.println("In if block");
    } else {
        System.out.println("In else block");
    }
}
[FNR] [2.1.1 / 2.1.3] Tooltip bug and tooltip shadow issue in IE8
In IE8, there's a 1px gap between the top of the tooltip and middle of the tooltip. It only appears when the browser is in quirksmode, but the instructions for GXT still say that GXT requires quirksmode. This occurs in both GXT 2.1.1 and GXT 2.1.3.
While testing the same bug to see if it was present in GXT 2.1.3 I noticed the tooltip shadow is too large. This also only occurs in quirksmode. This bug does not seem to exist in GXT 2.1.1.
See attached screenshot to see what it looks like in GXT 2.1.3.
gxt_2_1_x_bug.png
Code:
public class GXT2_1_3Control implements EntryPoint {

    private void ie8Tooltip() {
        Button but = new Button("Test button");
        ToolTipConfig c = new ToolTipConfig("Hello world");
        c.setDismissDelay(500000);
        c.setHideDelay(500000);
        but.setToolTip(c);
        RootPanel.get().add(but);
    }

    /**
     * This is the entry point method.
     */
    public void onModuleLoad() {
        ie8Tooltip();
    }
}
Fixed in SVN as of revision 2045 for GXT 2.2. Can you validate this?
I missed the css files. CSS file update is in 2046
Thank you for reporting this bug. We will make it our priority to review this report.
19 March 2010 14:55 [Source: ICIS news]
By Malini Hariharan
A solution to the Mab Ta Phut mess is slowly emerging but the issue, which resulted in the suspension of 65 projects by the Thai Supreme Court last December, is likely to force Thai chemical companies to invest overseas for future growth.
The government has put in place a committee that is working on setting up an independent agency to assess environmental and health impact of all industrial projects as required by the country’s 2007 constitution.
A few of the stalled projects have also received court permission to resume construction and carry out testing though they will have to wait for clearance of health impact assessment (HIA) studies before they start commercial operations.
A one-year delay to projects looks very likely, says a Bangkok-based analyst.
But even if the suspended projects are completed, chemical companies, Thai or foreign, will not find it easy to expand further at Mab Ta Phut.
Activists also claim that Mab Ta Phut has reached the limits of its environmental carrying capacity.
“In my opinion Mab Ta Phut needs a big clean up. The government should focus on this rather than further expansions. But there is no government policy on this,” says Penchom Saetang of Ecological Alert and Recovery.
And even if pollution at the site can be reduced, the local population is unlikely to welcome any large investments. “It is now almost impossible,” she adds.
The government’s long overdue plan of developing an alternate petrochemical hub on the southern seaboard may also not materialise. So far there has been little progress on the ground and the environmental lobby is now likely to block development.
“Non governmental organisations (NGOs) have already moved there activating public opinion against chemical investments. So it will be very difficult,” the analyst points out.
Mab Ta Phut has set a precedent for others to follow.
“In every province [in southern
And if the environmental issue is not enough, politics is also denting
Multinationals are increasingly likely to look at other countries in the region for future investment.
“Even Thai companies will have to move out. The Siam Cement Group (SCG) is already doing this and it will be easy for them as they are focused on naphtha crackers,” says the analyst.
SCG, which has a number of investments in Mab Ta Phut, is now working on a cracker and derivatives project in
But the road overseas might not be easy for PTT Chem, the country’s largest petrochemicals player, as it is likely to be guided by its parent PTT which has focused overseas investments in upstream oil, gas and coal.
But PTT’s strike rate in exploration and production outside the country is not good, points out the analyst.
It could be a long wait if PTT Chem has to rely on the parent to be successful in the upstream business to provide the feedstocks for petrochemicals.
This internal memo linked from slashdot caught my attention. It details why Java should not be used in Sun's own production software. Summary is "Java implementation sucks" and here is some input from me.
Back in 97 I had used Java to write a small 3d graphics library, which I found to be quite poor in performance. Years later in 2001, since there was a new version of Java I wanted to give it another shot.
Here is the paragraph from the Sun memo that really gave me the chills: Further examples of what is possible include the compiling OO languages Eiffel and Sather which fit their garbage collector, exception processor and other infrastructure into roughly 400K of resident set. While the Java VM (as demonstrated above) grows rapidly as more complex code is executed, the Python VM grows quite slowly. Indeed, an inventory control program written entirely in Python having a SQL database, a curses UI, and network connectivity requires only 1.7M of resident set. This seems to indicate that the resident set requirements of the JRE could be reduced by at least 80%.
Aha! That VM problem, I couldn't know better. I implemented a graph-based clustering algorithm and even integrated it into Weka (a machine learning framework written in Java). Since the language also sucked, it wasn't so easy to write (you have to do casts everywhere when you implement a generic data structure). Anyway, it wasn't that different from writing in C++. The crux of the matter was the awful performance. Training data of a few thousand elements was processed and fed to the clustering algorithm. It performed awfully, consuming about 100M of memory! It took several minutes to run, when the quite efficient algorithm, if written in C++, would probably complete in a matter of seconds. What it does is build a similarity graph and then do agglomerative clustering in a multi-level fashion like Karypis's code, with some new heuristics that I was trying. So it should have a complexity close to O(E) in practice; it shouldn't take minutes... Whichever way you view the problem, it was unacceptable. The whole Java VM concept was so flawed you couldn't run a single not-so-expensive algorithm with acceptable performance on this system.
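For the record, the cast-heavy style complained about here was a real limitation of pre-1.5 Java; generics (introduced in Java 5) later removed it. A minimal before/after sketch, with class and variable names of my own choosing:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericContainers {
    public static void main(String[] args) {
        // Pre-generics style (the Java the author used in 2001): raw
        // collections hold Object, so every read needs an explicit cast
        List raw = new ArrayList();
        raw.add("vertex");
        String a = (String) raw.get(0); // cast required, checked only at runtime

        // Since Java 5, the element type is checked at compile time
        List<String> typed = new ArrayList<>();
        typed.add("vertex");
        String b = typed.get(0); // no cast

        System.out.println(a.equals(b));
    }
}
```

Generics fix the ergonomic complaint but not the performance one: erased generics still box primitives, so a `List<Integer>` carries the same allocation overhead the post goes on to criticize.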
There are even occasional idiots on beowulf mailing list and other parallel programming forums and sometimes academic conferences that claim to be using Java for HIGH PERFORMANCE COMPUTING. That is the worst joke of the century. Java was not meant for the high performance domain where you take things like cache coherency into account while implementing your algorithm. You have to think twice before you say "high performance" in a runtime that is an absolute memory hog.
Well, of course the Java advocates are not so aware of this problem because much of what they do is slow stupid GUIs and web annoyances. They think the performance is acceptable to them.
Not to me.
I said back in 97 that Java sucked and showed as the proof the lack of desktop applications on ANY system. Now it's 2003, I am saying that Java sucks and I'm showing the lack of desktop applications on ANY system to prove it. Here I look at my debian box, no significant Java apps, maybe except Freenet which isn't a desktop app.
No surprise, even Sun engineers know that Java can't be used for any serious application.
--
exa
It's important to look at how Java is really being used. Mostly, it's for web applications, which consist of a fairly thin layer of glue between the web server and some data stores. In this setup, the problems of Java don't show up too much. There's only one VM which is running all the time. Startup time isn't big. You generally do very little per request, and you have so much communication that fast processing wouldn't help you out.
Most of this programming doesn't use anything fancy enough to be called an algorithm. There are exceptions, of course, but they're rare enough you can devote extra attention to them, by someone who knows enough about the Java language and the way it's implemented to know the performance pitfalls.
I'd guess that you could have avoided most of the performance problems on your clustering algorithm by careful avoidance of copies and memory allocation. Java makes it easy to ignore these costs, but it doesn't make them go away.
I'm not trying to be a Java apologist here, because your example does show that the hype of Java isn't all true. Writing Java doesn't free you from worrying about memory allocation (or other things they claim are harder in C or C++) in all cases. It may be better than C/C++ in some ways, but a balanced view of it is necessary. And if you want a language that makes things substantially better, look somewhere else.
I firmly believe that the choice of Java has consigned Freenet to obscurity, which saddens me greatly.
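The "careful avoidance of copies and memory allocation" advice above is concrete enough to sketch. Loop counts are arbitrary, and only the equality of the two results is checked, since timings vary by machine:

```java
public class AllocationDemo {
    public static void main(String[] args) {
        int n = 2000;

        // Allocation-heavy: each += builds a brand-new String, so this
        // loop allocates O(n) intermediate strings of growing length
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";
        }

        // Allocation-aware: one buffer, grown in place
        StringBuilder sb = new StringBuilder(n);
        for (int i = 0; i < n; i++) {
            sb.append('x');
        }

        // Same result, far fewer allocations
        System.out.println(s.equals(sb.toString()));
    }
}
```

This is exactly the kind of hidden cost the comment means: the language makes the first loop look as cheap as the second, and only profiling (or knowing the implementation) reveals the difference.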
I've heard that Java's GC just isn't very good, compared to how well a GC could work. For instance, here's JWZ's assessment. But a quick check couldn't find any recent analyses of Java's GC vs. the state of the art. I know that it's using a generational GC these days, but I don't know how good it is. (I do know that objects can get old enough that they're collected once a day, which was annoying to us on one project....)
Does anyone know more?
Look at job boards like Dice, and the numbers are quite depressing:

Java: 2842
C++: 2206
Perl: 839
C#: 322
Tcl: 86
Python: 71
Lisp: 12
Ruby: 5

Management, in many cases, wants Java.
I don't think it's the wrong tool for the job in all cases - it's not bad for some things, but I don't think it's the right tool in a lot of other situations.
It's funny to read someone complaining about the lack of Java desktop apps just days after the announcement of a Java branch at the open source directory:
I contribute to Kaffe OpenVM. Guess what's the most common issue that makes a lot of Mandrake Linux users deinstall kaffe and install Sun's JDK? They want to run LimeWire, a desktop application for file sharing written in Java. That's what a simple google news search for kaffe and mandrake will tell you. And it's just one of several file sharing desktop applications written in Java. I'd assume that a lot more Joe Average users run file sharing apps written in Java than graph-based clustering algorithms written in C++ ;)
Fancy a mail client? Check out Columba, or ICEMail.
Do you want a text editor with that? Check out Jext. Or just get a full IDE, while you're at it: NetBeans, Forte, JBuilder and Eclipse are all written in Java and run on a, you guessed it, desktop.
I don't feel like spamming advogato with links to many other Java desktop apps, from an X11 server to an ogg player, from an office suite to an XML editor. You can get some of them packaged as RPMs at JPackage. You can find others using your favorite search engine. Many Java applications are listed on JARS. Just because Debian doesn't have something packaged doesn't mean it doesn't exist ;)
If you want to argue memory usage, you should note that Sun's implementation is not the only one out there. For example, in the tests on , kaffe 1.0.6 uses up to 20 times less memory than Sun's JDK.
And to go with JWZ's assessment, have a look at Uniprocessor Garbage Collection Techniques for some clues. (Yes, that was directed at you, tjansen.) Great reading, anyway.
I have yet to see how to make GC work efficiently without combining it with reference counts (your link is currently broken, johs). And I am certainly not the only one with that opinion, see for example Linus Torvalds GC comment on the gcc mailing list
For these "high performance" applications in question, have people compiled them with gcj (GNU Java Compiler) and run as native executables? With a traditional compiler, do the GC-related and other performance problems still exist?
How do C# (DotGNU Portable.Net) and Pike, two other Java-like scripting languages, fare for these high performance applications?
It seems like Java throws away all the experience of implementing modern operating systems. Class libraries don't seem to be shared, executables are (very) heavyweight, libraries aren't versioned.
Well, let's see. The Java language specification requires that an implementation include GC. That means that Kaffe, and the Sun NT implementation, and all those other Java implementations that apparently don't suck by comaprison to the Sun Solaris implementation under discussion, must also contain a GC. Not to mention Eiffel and Sather, also cited in the original memo, also having GC, and also not sucking.
You are of course welcome to your humble(sic) opinion that the reason Solaris Java sucks is an inherent property of garbage collection, but on the evidence available I have to suspect there may be other factors involved.
Incidentally, the link to Paul Wilson's paper works fine for me, but note that it's ftp and the site may have been full when you tried. Try it via http instead:. It's a very well-known paper.
Side note: as you say, in theory GCs could be at least as fast as manual memory management or reference counting (and sometimes, faster: consider the amount of pointer chasing involved in freeing a big collection of objects which all reference each other). Even in practice you can get an awful lot closer than Solaris Java apparently does. For example, use a generational GC, and size the nursery generation to fit inside the cache.
In the end, no general automated scheme will outperform the theoretical best job you can do with application-specific knowledge and a hand-crafted allocation regime, but, you know, in general no high-level language will outperform hand-crafted assembler either, and I don't see many people still saying "C sucks! Stick to assembler and stop wasting cycles". It's all about bottlenecks, and it really doesn't sound to me like a problem with the conceptual nature of GC is the bottleneck here.
It's specified here: Extension Versioning. If I missed something about that topic, please point me to the section.
What I wonder is: how can a GC ever be faster than manual memory management or reference counts, when it re-uses memory later (and thus is less cache-efficient)? This is the question that no one answered to far. It doesnt matter if you need to chase pointers, this is almost completely irrelevant compared to cache efficiency for short-lived objects. You must be doing *really bad* manual memory managements if you release your memory as late as a GC collects it. Reference counts should provide almost perfect results (unless you have the famous cyclic references, but that should not affect more than 10% of all objects in a language like Java).
BTW, concerning JWZ: all he says is that Java as a language does a bad job of helping the GC and does not let the application developer provide hints. Definitely true; if the byte code delivered better hints about life-span (or used a stack-based allocation equivalent, like C# offers IIRC) it would certainly have fewer problems.
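The "famous cyclic references" mentioned above are the standard argument against pure reference counting. A tiny sketch of a cycle that a tracing collector (like the JVM's) can reclaim but a plain refcount never would:

```java
public class CycleDemo {
    static final class Node {
        Node next;
    }

    public static void main(String[] args) {
        // Build a two-node cycle: each node keeps the other's refcount
        // at 1, so pure reference counting would leak both...
        Node a = new Node();
        Node b = new Node();
        a.next = b;
        b.next = a;

        // ...but once the roots are dropped, the whole cycle is
        // unreachable from any root and a tracing GC may reclaim it
        a = null;
        b = null;
        System.gc(); // only a hint to the JVM; shown for illustration

        System.out.println("cycle dropped");
    }
}
```

This is why systems that do use refcounting (CPython, for instance) pair it with a separate cycle detector rather than relying on counts alone.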
I know many smart people that seem to love Java, and I think I know why: Java is easy to use. The one problem with Java, and why it will never become the major mainstream language that some people want it to be, is that it is a rubber safety knife. Now hear me out. Java prevents you from doing many things that C++ will happily let you do (read: pointer arithmetic). With this comes a lot of overhead. And let's not forget it is, in most cases, an interpreted language, and of two identical programs, one compiled and one interpreted, the compiled one will always be faster. While in a few cases Java may have its place, it is more of a novelty than anything else, and for most cases can be replaced with something that would do the job better: Flash and PHP for the web, C++ for apps. Even C++ is portable if written correctly. Give it a few years, and a little luck, and every OS will be POSIX compliant; then the major selling point of Java, write once run anywhere (AKA: write once, run NO WHERE, as we called it back in '97), will become meaningless.
No, of course they weren't. Heck, back in those ancient days we'd barely even got used to using transistors instead of valves.
Sheesh, five minutes with Citeseer is adequate to demonstrate that GC researchers were studying locality and cache issues even back in the stone knives, bearskins and jumpers-for-goalposts days of 1991. Here are two clues: Zorn and Wilson. It's not the case, pleasing though it may be to imagine, that everybody was living in blissful ignorance of the issues and nobody even realised their computers had caches until Linus Torvalds pointed it out on a GCC mailing list.
For what it's worth, there's an article on JVM GC tuning for 1.3.1 at. We can infer from it that the GC is generational, that minor collections copy, that the default sizes are probably pretty screwy for most purposes, and that the GC still stops the world whenever it kicks in. These are all techniques that were known in 1994.
Let's say you have a lot of memory, and a large heavily interconnected data structure. As you do your processing on the data structure, you remove many nodes, leaving you with a really sparse data structure. It remains heavily interconnected.
Now you have to free the data structure. Remember, it is now a sparse data structure, with, let's say, only very small elements left.
In the best case for a copying garbage collector, your GC has already been shrinking the working set of your application as you thinned out the data structure, so you're left with a few pages of very small objects nicely lined up. Freeing the data structure is just a matter of substracting the size of the data structure from the live object pool size.
In the worst case for manual memory management, you're left with small bits of data spread out over all pages of memory (and some on the swap, for added speed penalty). You have to go through every page of memory in the system to free the data structure.
Hans Boehm did a presentation on memory allocation myths.
How many times will I have to listen to people tell me how bad Java is? As a long-time Java developer I am constantly bombarded by people telling me that Java sucks because it is interpreted, because its garbage collection is slow, because it uses too much memory, because it has a bad name, because Scott McNealy smells...all the while I have been writing server- and client-side Java applications right next to my Perl, Python and PHP scripts, shell scripts, and whatever else turns out to be the right tool for the job.
Enough already! I use Java. It works. I have built many applications using Java and they are used every day in day-to-day business applications. Sure, Java is not perfect, but it is quite adept at handling most of my programming needs. If I ever need C or C++ then I will use them.
Please, lay off the rhetoric because I am tired of hearing it.
"Take these issues seriously" posted on Feb 11, 2002 @11:30 by Matthew Pierret.
I am a big supporter of Java and have used it exclusively for six years. I don't want to go back to any other language. But the cavalier attitude toward backward compatibility, gratuitous API changes and the introduction of bugs in new releases is unacceptable in a production environment. Putting the release cycle under greater discipline (including comprehensive backward compatibility regression testing) with an emphasis on maintaining backward compatibility would be a significant improvement.

from Developer.sys-con.com
While some may use the memo to try to undermine support for Java, dismissing this memo does not serve the interests of the Java community. The issues are real and make it that much harder to counter the voices calling for a homogeneous Microsoft world.
Maybe looking at the code may get some things sorted out, though. But I doubt that will ever happen since Sun owns Java and plans to keep the source locked.
Also, I noticed Java slows my machine down to a 386-class CPU whenever I load Forte; it is just soooooo slow. Slower than Visual .Net's load time.
Regarding GC issues, maybe Sun mgmt. needs to listen to their engineers and let them pick which one is the best GC, maybe adopt the Ruby GC, for example.
I've done a fair amount of Java programming, enough to learn to do some fancy things with it, but I must say I prefer C++.
The general reason I prefer C++ to Java (or most other languages) is that I feel it enables me to be the best developer I can be.
I'm sure that many excellent programmers program in Java, but my feeling is that the very best C++ programmers can produce better results than the very best Java programmers.
However, I think Java is popular with management because it is easier for an average programmer to get something working sort of OK with Java than it would be for the average programmer to do it in C++.
The sad fact is that many programmers just aren't that skilled. Some folks aren't that smart, some folks aren't very experienced, and some haven't had adequate education.
It took me a long time to become a good C++ programmer. I'm not trying in any way to claim that C++ is easier than Java. It is a powerful and dangerous tool that yields great rewards to those who master it.
Milling machines are powerful and dangerous tools for metalworking, and you certainly shouldn't try to use a milling machine without learning how, but if you do know how you can do things like make car engines out of hunks of metal.
A common misconception about garbage collection (and by extension, Java programs) is that garbage-collected software does not suffer from memory leaks. Nothing could be further from the truth. My experience is that memory leaks abound in Java applications.
How can this be? you protest. Well, I'll tell you. You just have objects that contain references to memory you don't need anymore. The memory these objects reference won't get garbage collected.
However, a leaky java program will probably still mostly work. Solutions I have seen used in web server production environments include such things as killing and restarting the VM every day or so.
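A minimal sketch of the kind of leak described above (the class and map names are invented for illustration): the objects are still strongly reachable, so the collector, quite correctly, never reclaims them.

```java
import java.util.HashMap;
import java.util.Map;

public class LeakySession {
    // A long-lived map, e.g. per-user session state. Nothing ever removes
    // entries, so every byte[] below stays strongly reachable: the GC sees
    // a live reference and will never collect it.
    static final Map<String, byte[]> sessions = new HashMap<>();

    static void handleRequest(String userId) {
        sessions.put(userId, new byte[1024]);   // add state, never evict
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest("user" + i);
        }
        // All 10,000 entries are still rooted at LeakySession.sessions; a
        // heap dump would show the "leak" plainly, even though no memory
        // was lost in the malloc/free sense.
        System.out.println(sessions.size()); // prints 10000
    }
}
```

The fix is the same discipline manual-memory programmers learn: evict entries when they are no longer needed (or hold them through weak references so the collector may reclaim them).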
When I was a Smalltalk programmer, we used to save memory dumps into executable image files. That way we could restart our development environment right where it was when we shut down the PC the previous evening. It was nice in theory, except for the memory leaks. After about a week of dumping and restarting a given image, performance would become so slow as to be unusable.
I resolved this by keeping two images, one that I would keep in a virgin state except for weekly updates from the source code repository, and the other that I would copy from the virgin one each Monday and only use for a week.
Yes, it is harder to manage memory in C++ but I feel that it is the proper job for a developer to take personal responsibility for his memory consumption.
It really is a better way to live. Efficiently managing your own memory will make you a better person.
I used to have a lot of trouble with C++ memory leaks until I started using the Spotlight memory debugger from Onyx Technologies. When I saw how leaky my programs were I started learning more about how to manage memory. After a while it became such second nature that I write code that is leakproof without having to pay much conscious attention to it.
I wrote an article about a refactoring process as I learned to manage memory better.
You might protest that only incompetent programmers create memory leaks in Java code but I have heard that Java libraries distributed by Sun for production use contain leaks - even Java meant for embedded use, where memory is precious.
I'm sure that skilled Java programmers can write leakproof applications, but I think the problem is that most don't feel the need to learn the discipline because they think the language will take care of them.
You should understand that I advise neophyte programmers to learn Java as a first language. It is easier for the inexperienced to get something working in Java. But I also advise them to learn better languages when they become good enough that they want to write programs for real end-users to use.
Flame on,
Mike
I completely agree with the assertion that java's the reason for Freenet's current lack of success.
Okay. Freenet is trying to solve a hard problem. But my perception is that their reliance on java really doesn't improve the development speed or the performance of the network.
Every now and again, I try to install the damn thing on my servers. Already I'm annoyed about it pulling in all kinds of dependencies (like X libraries), but when I see the CPU overhead, I cringe even more. I'm longing for the days when it works with gcj, hoping that will solve some of its problems someday.
It's probably partly perception, but I much much rather run some python-based server code, or even a quake3 server on my machines. I just feel really uncomfortable with java code on servers.
I still remember when Java was this great promise, and reading in my java books that there was a reasonable tradeoff between performance, and ease of use. I still think the language is rather nice (like rather easy threads, interfaces instead of multiple inheritance, etc), but the VM is slow, and the runtime/classes are worse than bloated (a lot of people dislike swing).
Someone mentioned applications, like Limewire (which I knew of), and colomba-mail (which was a rather positive surprise). But even with these apps the startup times are horrendous. And yes, startup times are important, and are becoming imho even more important in the future.
Even though you mention these nice examples, the fact is that these apps, as nice as they may be, are more the exception than the rule. Considering how many people are programming Java these days (it was taught in universities all the time during the later nineties, and it's a very marketable skill), and considering how easy it's supposed to be to make large apps with it, it's exceptional to see how few actual usable programs came out of this.
Perhaps you should try Kaffe. If you've got a problem with the implementation, it's open source, and you can fix it. There are probably a half dozen other free implementations out there that are quite nice as well.
Kaffe has stayed relatively small, and a lot of people are using it for embedded targets.
Kaffe may be trailing a bit in getting the newer APIs implemented, and its GC and JIT subsystems are simpler than state-of-the-art, but it's fun to work with.
The nice thing about playing around with a VM implementation source directly and trying to understand it is that you start to learn why things are the way they are in Java, and where the "traps" are.
There is a lot of really good Java code out there. But you really have to keep a close eye on things when you have tight memory requirements. A lot of Java library code was not written for environments where memory consumption was a big concern - so a random chunk of code may make certain size/speed tradeoffs that might not fit your needs.
If you write your own code, you can do almost as well as native code on applications over 1MB in size. Frankly, most native code (eg. C and C++) does a piss poor job of memory management - most GC implementations will do a better job, and automatically. Of course, a clueless programmer is going to have troubles in either environment.
For embedded programming, I've found Java to be really nice. Say goodbye to segfaults. And if you are sloppy, things degrade nicely - things just get slower as GC kicks in more and more often (assuming you fix the heap size). Plus the commercial debugging and profiling tools available are simply incredible - you can literally see everything.
There is a lot of cutting edge work being done on advanced multi-heap, thread-local heap, fully concurrent GC implementations. Projects such as KaffeOS, JanosVM, and RTSJ have developed ways to gain even more control over the GC process - so even realtime processing with Java is within grasp.
Sun does deserve some criticism for adding a lot of bloat to their VM libraries. The core VM doesn't seem too bad, but Swing apps in particular hurt. IMHO, they never really got their UI house in order - they just added a lot of bloat. Fortunately, for free software types, one doesn't have to stick with the traditional solutions. eg. Over the years, with Kaffe, there have been literally dozens of windowing toolkits stuck on top of it...
And Java can be compiled either AOT (ahead-of-time), eg. with GCJ, with a JIT, or even interpreted. That's a lot of flexibility. And with advanced compilation techniques, the quality of the generated machine code can be really high. There's a lot of research going on into optimizing compilers for Java, as it's a big market, and also the core problem space is small enough that it's a great space to experiment in if you're a compiler freak. At the end of the day, I truly believe that dynamic compilation techniques coupled with advanced control systems theory will eventually prove to generate faster, more optimal systems than anything static compilation techniques will be able to achieve. Static compilation has to live with all the invariant assumptions they have to make ahead of time with no feedback loop.
The future of application programming lies in separating the model from the UI. If you get into serious modelling work, it's hard to beat the flexibility and the tools that are available to today's Java developers. The language is a natural for modelling and refactoring, and IDEs such as Eclipse and NetBeans are incredible (and free). There are literally dozens of application server frameworks aggressively fighting for mindshare on what is the best way to develop Web applications -- see Apache's Jakarta project. Sun's influence has been positive in many ways, as even though there are zillions of solutions, there are standard APIs for things like XML, SQL, and servlets that everybody can agree on. It makes it really easy to mix-and-match and play to come up with the best way to do things.
I've seen some pretty incredible UI front-ends on Java. I think the future for UI design lies with XUL, XForms, and other XML-based UIs, such as Glade, SVG, SMIL, etc. If you want to play with some free software, take a look at Luxor, and X-Smiles. UIs of the future will be standards-based, themeable, and very cool. Java will be there, since it's really good at XML and dynamicism.
Cheers,
- Jim
Read the memo. Sun said that one particular implementation - the Solaris JRE - sucks, and for reasons mostly to do with the development, release process and upgrade paths. That was not a criticism of the Java language.
(That's quite apart from my own personal opinion, of course. If java "is a natural for modelling and refactoring", I really don't want to know what contortions you're willing to put up with before you consider something "unnatural" :-)
It's important to look at how Java is really being used. Mostly, it's for web applications, which consist of a fairly thin layer of glue between the web server and some data stores.
*sigh*
I wish it were so simple. Back at work, we're writing a Web service (in Java, of course) that has to digitally sign XML messages. Signing a few hundred bytes of XML with a 2kbit key is not just slow -- it takes more than a second! Even replacing the existing crypto provider with a native one that uses openssl doesn't help, because JNI sucks rocks performance-wise.
So no, Java's performance is not acceptable for anything but the most brain-dead Web apps, and that's not taking into consideration the memory footprint on the server...
My impression is that Freenet got out of control because it gave everyone and their brother commit access and the codebase quickly became bloated and unmaintainable. It had almost nothing to do with the choice of language.
How can they? :) Actually I said that.
And looking at it language-wise, it doesn't really suck. However, the VM's I've tried such as the IBM JDK 1.3 and blackdown (I also used some stuff on windows in the old days), didn't seem to be high performance at all. Also, I have a gut feeling that the VM's don't just perform badly because of deficient implementations but because of Java language and VM specification. Even Java chips can't rescue Java (Have you ever heard of an ocaml chip or a C++ chip?)
Most notable, I think, is Forte. It's like a joke. I never got it to run properly, and it consumes ridiculous amounts of resources. It really turns your machine into one from the previous generation.
I'm not a great fan of C++ either, but at least it is a general purpose language. That's to be commended.
When Java first came out it was: Java is the programming language of WWW. Then, Java will run everything from desktops to embedded devices. Then they said it's good for web servers, and now I think they're going handheld or whatever god knows (maybe it's really good for handhelds I don't have any experience with the one in Zaurus or anything like that). That shift of focus is not very promising.
On the other hand, maybe we should give more time to Java. Look how many years it took for the first real C++ compilers to come out. Maybe it's just a matter of time.
Still, I can't imagine using Java for anything that requires some sort of an algorithm more complex than quicksort. I can picture in my head GUIs, network agents and that sort of thing but I can't see Java accomplishing all sorts of stuff.
The more annoying thing is Sun people coming to universities, convincing/bribing/whatever people and then changing the curriculum to Java. So universities drop an ISO standard and adopt Sun's standard. Then Microsoft comes and tries to change the curriculum to C#...
That's crazy. That's not even in the best interest of Sun or Microsoft. Maybe it's better for them than it's for others but then again all CS101/CS102 assistants complain about students not understanding any basic idea about programming because students think it consists of Java's oddities.
Ah, and that's another thing they said: "Java is good for education". Really? I don't think any C variant is good for teaching the principles because, let's face it, C is *not* a principled language. It's the ultimate product of the hacker mindset, maybe, but it isn't a good programming language design. It lacks even very basic concepts such as nested scopes (a function within a function), and all the derivative languages share some of its limitations. The ones that tried to extend it sometimes made it too complex and ugly (like C++), and sometimes failed on the performance and system-integration side (Java, maybe C# <-- they say it's better). I would definitely choose Pascal or LISP (not very intuitive) to teach programming. And maybe let them work with some toy languages before that. (Our instructor had done that.)
Paraphrasing John Backus's 1978 Turing Award Lecture, Can programming be liberated from the von Neumann style?: C++, Java, etc. are all von Neumann languages according to his classification.
I've convinced myself these languages are relics of the past. When you have used languages as elegant as Forth, Lisp, or ML, it is hard going back to what seems a more primitive, tedious form of programming.
Erm... the "basic" ideas of programming? There are at least two views of programming: one is that a program specifies what to compute, another is that a program specifies how to compute something. Both views are equally valid, so I'm not sure if there'll ever be an agreement on what constitutes the "basic" ideas of programming.
And isn't Forth a von Neumann language too?
What's so great about Forth is that its inventor, Chuck Moore, split his time between designing his own chips and writing his own Forth. The reason the von Neumann architecture became what we know it as, back in the pioneering days of computing, is that Uncle Sam happened to lend von Neumann his deep pockets.
I'm afraid that the real, and interesting, issues in the memo are being obscured by run-of-the-mill language flaming.
First, let's separate Java-the-language from Java-the-class-library. Generally, I like the language, but from what I've seen of the class library, it's fairly bloated, contains numerous infelicities, and isn't getting that much better. The root of the problem, of course, is that it's controlled by Sun.
Sun's implementation has performance problems, for sure. The way to fix that is to have healthy alternative implementations. Unfortunately, these have had a hard time gaining traction, largely because of the need to reimplement the huge class library in addition to the language proper.
Any time you adopt technology controlled by a single corporation, you run the risk that its defects won't be easily fixed, and (more importantly) that its evolution won't go in the direction you like.
So there are at least two separate issues here. Even if one accepts that Java is beautiful and elegant, the politics and processes around it are not. People who are making a decision to adopt Java should know what they are getting into, and shouldn't restrict themselves to only looking at the technical issues.
exa: I don't share your "gut feeling" that the performance problems are caused by the language and VM specification.
I do think that Java is a reasonable language for teaching programming, because the language itself is pleasingly simple, especially compared with a monster like C++. And, in this domain, the performance issues are much less important. I personally like Python best as a first language to learn -- it's what I'll be teaching my kids.
People like to classify languages as either declarative or imperative. From my experience, it is hard to find a truly declarative language. Even in a language like Prolog, where people claim the specification is the program, you must know the operational behavior: how the proof search is done. Termination becomes an issue, as well as efficiency. An operational cut feature is used to trim the proof tree.
Backus, I think, is providing us with a more useful classification scheme. In the first 6 pages of his report, he gives a concise description of what he terms the "von Neumann style"; I won't attempt to repeat it. However, characteristic of von Neumann languages are sequences of statements which act as transitions on a complex state. He describes this as word-at-a-time programming, the assignment statement being the most prominent feature. Interestingly, he calls these languages "fat and flabby", with quite a large set of framework features to compensate for the essentially word-at-a-time style. It seems this is still quite true of languages being created to this day.
Where does Forth fit in? Forth, particularly older versions, certainly does contain remnants of these von Neumann languages, such as control structures. However, the basic model is function composition. Words are functions from stack to stack, and new words are defined as the composition of zero or more words. Forth has store and fetch, which Backus terms historically sensitive operations. Of course Forth can be extended for other types of programming, so it doesn't really stay in any one category, although it is essentially non von Neumann. It should be noted that Lisp and ML dialects also have von Neumann features.
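The "words are functions from stack to stack" model can be sketched loosely in Java (this is an illustration of the idea, not real Forth; the Word type and the word names are invented):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.UnaryOperator;

public class ForthStyle {
    // A "word" is a function from stack to stack; a program is a
    // composition of words.
    interface Word extends UnaryOperator<Deque<Integer>> {}

    static Word push(int n) { return s -> { s.push(n); return s; }; }
    static final Word DUP = s -> { s.push(s.peek()); return s; };
    static final Word ADD = s -> { s.push(s.pop() + s.pop()); return s; };

    // Defining a new word as a composition of existing words, roughly
    // like Forth's ": DOUBLE DUP + ;"
    static Word compose(Word... ws) {
        return s -> { for (Word w : ws) s = w.apply(s); return s; };
    }

    public static void main(String[] args) {
        Word doubleIt = compose(DUP, ADD);
        Deque<Integer> stack =
            compose(push(21), doubleIt).apply(new ArrayDeque<>());
        System.out.println(stack.peek()); // prints 42
    }
}
```

Notice that no named variables appear in the program itself, only in the plumbing: data flows implicitly through the stack, which is what makes the style compositional rather than word-at-a-time assignment.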
I don't want to be misinterpreted; I'm not trying to contribute to a language flame fest as raph seems to suggest. Hopefully, people can utilize this information to look at programming languages in a new light. I think people generally exchange one von Neumann language for another without really testing the waters elsewhere. I realise, though, that language is often mandated, and the choice is not the programmer's to begin with.
"Reasonable for teaching programming" forsooth. Here's the simplest possible Java program:
class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}
Let's see how many concepts we encounter: class definitions, method definitions, access modifiers, the static modifier, return type declarations, the void type, parameter declarations, the String type, array types, command line arguments, the System class, static fields, the PrintStream class, the println method and string literals.
I don't think any teacher is going to go through all these concepts before letting the students read and write their first programs. So the only option is to say "ignore all this extra stuff for now, we will deal with it later". But in effect this means "you aren't supposed to understand this program, don't even bother". And this is extremely demotivating for a beginning programmer.
Java just has too much crud obscuring the real essence of programming from students.
Too much crud compared to what? C? Hello world in C has all of that same stuff except the class, plus the include headers. C++? Same, plus a namespace. Pascal? I'd say it's about the same.
The one thing I liked about Java was that it's so lightweight (typing-wise) to create new classes. It has always felt to me like doing it in C++ was much more involved. You create a header, you define your members, you create an implementation file, you implement your members... It just seems like a ton of work when you're in the thick of it all.
I'm glad that my flame-ish comments generated enough interest for a useful discussion :)
I didn't re-read the lecture but of course I remember it as it has a special place in the history of programming language religion wars :) [Hmm, I should have a look at it now]
What he said was writing in an imperative language basically consisted of sequencing the ins and outs through a pipe which is the bus between processor and memory. Stuck there at the Von Neumann bottleneck.
The unavoidable conclusion is that imperative languages strongly favor a particular computer architecture, since the most important thing in the language, the assignment operator, models data flow over the bus.
On the other hand, it is possible to design a functional specification and let a translator decide how to compute it on given architecture. That is a more high level approach and it has the advantage of being architecture independent. Ideally, functional languages would free programming of the low level approach of imperative languages and allow us to write code that would perform well on various architectures.
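The contrast can be illustrated even within Java (Backus argued in terms of his FP system, so this is only a loose analogy; both method names are invented): one version sequences assignments through an accumulator, the other defines the result as a single expression.

```java
public class TwoStyles {
    // von Neumann style: a sequence of assignments shuttling one word at
    // a time through an accumulator. The program spells out *how* to move
    // data across the processor-memory bus.
    static int sumImperative(int[] xs) {
        int acc = 0;
        for (int i = 0; i < xs.length; i++) {
            acc = acc + xs[i];
        }
        return acc;
    }

    // Compositional style: the result is defined as one expression. The
    // program says *what* the sum is; a translator has more freedom in
    // deciding how to evaluate it on a given architecture.
    static int sumCompositional(int[] xs) {
        return sumFrom(xs, 0);
    }

    static int sumFrom(int[] xs, int i) {
        return i == xs.length ? 0 : xs[i] + sumFrom(xs, i + 1);
    }

    public static void main(String[] args) {
        int[] xs = {1, 2, 3, 4};
        System.out.println(sumImperative(xs));    // prints 10
        System.out.println(sumCompositional(xs)); // prints 10
    }
}
```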
However, it should be noted that today: 1) functional languages aren't necessarily easy to learn or use, and 2) they are not yet truly tested on parallel computers (although there are things like GHC for PVM).
On the other hand, functional languages are equipped with a lot of other great features such as advanced type systems and elegant semantics to achieve a much higher level of abstraction than any imperative language so much of what Backus said is very relevant.
Still, it remains to be seen whether there are even better ways of programming (remember programming by example page at MIT?)
I'm not saying anything new here, but still...
Less crud in hello world. These are complete legal programs:
Scheme:
    (display "Hello, world!\n")
Python:
    print "Hello, world!"
Perl:
    print "Hello, world!\n"
O'Caml:
    print_endline "Hello, world!"
Haskell:
    main = putStrLn "Hello, world"
Beginning to see a pattern? Next, class verbosity:
O'Caml:
    class hello = object method hello = print_string "Hello, world!\n" end
Python:
    class Hello:
        def hello(self): print "Hello, world!"
...etc. If you try to argue in favor of Java by comparing it to C++ then, well... "There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."
Maybe this is the subject for another article!
I agree with nether about Java distracting students from what is really important in a programming language. There are even worse ways students get confused about programming languages. I think most of the students think programming is "using that IDE" or other particulars of a Java environment they are given... while all they should think about is the semantics of a written program text. ASCII text, that simple. And an arrow that goes from that string to a meaning in their minds. Very important for Computer Science students.
My basic feeling after 7 years into OO languages is that OO is not a very important development in programming language semantics, and it is definitely not the core of programming either. The really interesting thing is that most languages, like Java or C++ have deficient object systems coupled with weak imperative languages so I think they are doubly borked.
Now, let's think what object systems are good for. They are good more for component/architecture mindset, for developing large programs, re-usable code, better abstraction, etc. And therefore they are good for doing system-related stuff like GUIs, XML, and what not.
Some part of computer science is really interested in that kind of stuff, there are software engineering guys, and then there are UI guys, etc. They would be interested in getting that kind of programs right, so OO has a significance in the practical world where most of the existing code would be C++/Java...
Nevertheless, real computer science is less concerned with system stuff. From the theoretical point of view a large percentage of all code written is triviality, just moving chunks of data from one place to another. For something to be of interest to a computer scientist, there must be an algorithm.
And there I think the requirements get a little complicated. I do think that students must be taught one good procedural language in which they can program algorithms. On the other hand, I think they should be taught a functional language like LISP or ML so that they know what's there in the world.
I don't think C, C++ or Java are good procedural languages. They have a horrible design, and I think a programmer is good only when he recognizes how ugly and disturbing C derivatives are!! (So I do think that I'm a good programmer, hmmm) In fact, C sucks in almost every major aspect of design, such as orthogonality, type system, modularity, etc., though it maybe does a decent job on the readability/writability front when used right.[*] The ideal procedural language would be ALGOL 68 for me, but I can't think of a widespread compiler for that right now, so it seems that Pascal or Wirth's later languages would be a better choice. The difference between Wirth and Stroustrup? The former is a language designer. (It's said that the latter is a mathematician, but I don't know if he contributed anything useful to mathematics. Though he has written a lot of industrial code)
Anyway, for the functional language, COMMON LISP might actually be a little confusing, so Scheme might be preferable in the LISP family. I definitely wouldn't have any freshman use Haskell before it becomes mature enough. (But for a 3rd-year course it would be fantastic.) In the ML family, "caml light" would be a good starting point, and with a great language processor things can only get that good for a new student. I think that's the language that will have him feel a little magic beneath his fingertips.
So, what does a freshman need to learn first? Thinking about it: 1) Type system 2) Literals 3) Variables, binding/scoping 4) Functions 5) Compositional semantics 6) How to implement an algorithm... 7) I/O etc...
Thinking about it, either they should be taught one procedural and one functional language, or a language that supports both paradigms (that is NOT C++) like caml. The ML family being quite orthogonal, it's also possible to take a subset of caml and use that easily.
Even before that, something like LOGO would be useful and fun. I think the ability to do some graphics does a good job at drawing interest to the subject matter.
Enough for now ;)
[*] I think Java fixed modularity, C++ couldn't.
I recommend all Javur bashers read The Language of Tomorrow by Miles Nordin. It's an amazing piece of writing, and was featured rather heavily on the crackmonkey list a year or so ago. I love these tidbits especially:
It's always been very fashionable to heap criticism on C, and (when the time came) on C++. For some reason, though, they keep seeming to be the most practical tools to use for the hardest jobs. You can't, in good conscience, chalk that up to historical accident (as you could for, e.g., MSWindows) -- their adoption has always been driven by technical reasoning, and usually in direct contravention to management, and even federal, decree. (Does anybody remember Ada?) C won over Pascal and its derivatives, and C++ won over the other languages that came out around the same time (e.g. Ada, Eiffel and ObjC) fair and square, even starting at a disadvantage.
There must be something fundamentally right about C, and C++, at least for programming von Neumann machines, that any language you hope to replace it with had better share. Whatever it is that they got right, it must be *really* right to overcome all the deficits that are so fashionable to keep pointing at. If your language doesn't have that thing, then no matter how elegant you think it is, or even how elegant it really is, too bad.
When Alex Stepanov designed the STL, a subset of which was adopted into the Standard C++ library, he tried first implementing it in many languages, including Lisp, ML, and Ada. ML and Lisp were capable, but could not be made to run at more than a small fraction of the speed possible in C++. (See.) He took that to indicate a fundamental kind of power not only in the C++ template system, but in the abstraction represented by C pointers that he was able usefully to extend to all other sequences. (Pointer analogs, called "iterators", fill a role as pervasive in the STL as lists in Lisp.)
I don't know if C pointer semantics are the magic juju, but I do know that dwelling on the faults of C and C++ doesn't teach you much about why they have been so successful anyhow.
(Getting back on topic for a moment, what Stepanov says about Java in the article is also interesting.)
Maybe the problem is that nowadays people only build machines to run C code fast, and that habit handicaps other languages. If only some other language were King, machines would be different and C would be at a disadvantage on them. Curiously, on the Pentium 4 the canonical C statement "while(*p++=*q++);", compiled naïvely, is said to be very slow. In fact, most machines have had some variation on this problem -- an unfortunate stack frame layout, awkward shift or integer-overflow semantics -- and compilers have just had to work around them. CPU architects have been as subject to C-bashing rhetoric as the rest of us.
So, getting back on topic again... whenever problems with JVMs, or the Java libraries, come up, people are always quick to say those failures are incidental, and don't indicate fundamental problems. After all this time, if those incidental problems still haven't been resolved, maybe it's time to consider whether there's something fundamentally wrong that (as with the Sirius Cybernetics Corp's artificial people) is merely masked by the incidental nuisances.
I think maybe that's what Exa meant to suggest.
...IMO is that it convinced the world of the importance of VMs. There are problems with the JVM, but it is indubitably both one of the seminal virtual machines intellectually and, practically, possessed of many great strengths. I don't think much of Java the programming language, but the Python interpreter for the JVM (Jython) is very mature. There is even a good Scheme implementation. All we need is a good Common Lisp implementation...
Also it convinced quite a few people that a practical programming language could and should have GC.
While I did mean that there might be fundamental problems with the Java language and the JVM specification, I don't think you can explain C/C++'s widespread use by successful language design.
Popularity does not necessarily imply excellence. It does imply being adequate though, and that's why I think there aren't many important Java apps around. It's simply not good enough. Maybe it will be in the future, maybe not, but for many years it has not been so and this isn't very promising.
The success of C and C++ is adequacy. C was simple enough and it worked well. C++ brought some advanced concepts down to the mainstream from research labs, and that is applauded more than necessary, I think. As a result, people could get their hands on this stuff and used it to write larger apps. And when some language dominates, people don't look for others too actively. Maybe most people want to learn only one or two languages. Look how most programmers don't even understand all the important features of C++, or write terribly bad things with it (ever used MFC?).
C is basically a half-cooked procedural language that was designed to be sort of a portable assembly. That's all it is. Look how preprocessor macro laden it is. It's just assembly. It's that crude.
C++ generics are a joke. Just because Alex couldn't get it right doesn't mean C++ is the perfect combination of efficiency and genericity. C++'s template specification and the terrible implementations do not amount to genericity of any sort I would expect from a modern language. And what use is genericity in a language without proper modularity? It's really pathetic.
Besides, I'd like to remind ncm that pointers are not unique to C or its descendants. They are not a big thing either, it's just an abstraction of address registers and storing addresses in assembly.
Going back to STL iterators, that must be the worst software abstraction in the entire world. Could it be done better in C++? Maybe. Would it be worthwhile? I don't think so. So, STL still might be a good thing for C++, but it is not a crowning achievement in writing a package with some data structures together with some algorithms.
Especially, the so-called functional aspect of STL is not very usable in practice. It is useful as an imperative tool, though.
I myself had a chance to implement the infinitely stupid idea of "template expressions" for a parallel array library and found myself wrestling with a mountain of unmaintainable code.
There is something else I'm trying to say, I think.
Wake up! It's 21st century! C++ is dead, long live ocaml!
Thanks for listening!!
I'd read the HP paper. I had found it interesting 5-6 years ago (ah, that's where the STL came from!). I was a great fan of the allocator stuff and the zero-overhead principle, but it lost its appeal to me some 2-3 years ago when I began to interpret the whole STL thing as another template hack. It is of course a useful thing, and I use it all the time when I'm writing C++ apps, possibly putting it to better use than most C++ programmers, but I don't think it's the ultimate programming approach.
I definitely disagree with Alex that "iterator" is an important concept in Computer Science. No! I don't see why you should even be bothered with iterators when you have higher-order functions; it makes zero sense (to anybody who has some experience with the ML family). Iterators are an important concept in the STL, of course, because they wrote it that way.
Henry Baker in "Iterators: Signs of Weakness in Object-Oriented Languages (1993)" makes the point that iterators depend on internal state and side-effects - this makes it very difficult to prove anything about them, and also creates problems using them in multi-threaded environments, unless the compiler can prove there is no sharing involved. Source links appear to be broken, but citeseer's cached versions are still available.
chalst: Mapping Common Lisp semantics onto the JVM is an extremely difficult problem, since JVM was designed to cope with relatively powerless languages such as Java. Scheme has its share of problems too; I don't think there exists a Scheme->JVM compiler that completely implements call/cc, for example.
ncm: You can't even begin to compare C++ STL to Common Lisp. Why would you even implement such a thing in Lisp, when Lisp already has much better facilities? And lets you take advantage of features that C++ programmers can't even dream of?
As for GCs and cache; I have two words: `fragmentation', and `copying'. The former with regards to manual memory management, and the latter with regards to GC.
... where it belongs, please ? ;) It's already degraded into 'my favorite language is better than yours', and I'm only waiting for someone to claim that Prolog is superior to PL/I ...
from Guillaume's link, dig out this old article #207 on C++, published Dec 2000 by egnor. Guillaume says "I think the reasons why C++ and free software don't mix too much are mostly cultural". I think Java is popular mainly because people who majored in English or studied English as a second language need to write something for a living.
sye, can you explain in more depth what you meant? Here is one such implementation.
crackmonkey: It's called an executable JAR, and it makes executing a Java application as simple as either double-clicking the JAR or executing the following command line:
java -jar MyJar.jar
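For that command to work, the JAR's manifest must name an entry point. A minimal META-INF/MANIFEST.MF looks like this (the class name is a made-up example):

```
Manifest-Version: 1.0
Main-Class: com.example.MyApp
```

The jar tool can write this entry for you with the e option, e.g. jar cfe MyJar.jar com.example.MyApp *.class — a missing Main-Class attribute is what produces the "no main manifest attribute" error mentioned elsewhere in this thread.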
Everything is available in the Java toolkit for writing good applications. More often than not it is the programmer's fault when any code works poorly.
nether: I agree that Python is really good for introducing programming to students - I have used it in such a role and have found that the students appreciate its clean syntax. You can't beat
print "Hello, world!"
can you? However, I don't think that Python's OO support holds a candle to Java. Java starts to shine when you are creating more complex applications which must be extensible yet restricted by a specific contract.
Iterators are very interesting because they are abstractions of lists. This allows what would normally be list operations to operate on more dynamic structures, such as input, the digits of pi, or the output of other functions. List operators usually have to rely on fixed-size lists. Lazy lists may be an exception, but I would have to look into it more (I have not done a lot of research into lazy languages).
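As a sketch of that idea (my own illustration, not from the thread): a Java Iterator can present an unbounded, generated sequence — with no backing list at all — to code written against the ordinary list-style interface.

```java
import java.util.Iterator;

// A hypothetical unbounded sequence: 0, 1, 4, 9, 16, ...
// No list exists in memory; values are produced on demand,
// yet any code written against Iterator can consume it.
class Squares implements Iterator<Long> {
    private long n = 0;

    @Override
    public boolean hasNext() { return true; } // never exhausted

    @Override
    public Long next() { long v = n * n; n++; return v; }
}

public class IteratorDemo {
    public static void main(String[] args) {
        Iterator<Long> it = new Squares();
        long sum = 0;
        // Consume only the first five elements of the infinite sequence.
        for (int i = 0; i < 5; i++) sum += it.next(); // 0 + 1 + 4 + 9 + 16
        System.out.println(sum); // prints 30
    }
}
```

The same consuming loop would work unchanged over a fixed array, a file, or a network stream, which is the abstraction being discussed.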
Source: http://www.advogato.org/article/624.html
Premium block blob storage accounts
Premium block blob storage accounts make data available via high-performance hardware. Data is stored on solid-state drives (SSDs), which are optimized for low latency. SSDs provide higher throughput compared to traditional hard drives. File transfer is much faster because data is stored on instantly accessible memory chips, and all parts of the drive are accessible at once. By contrast, the performance of a hard disk drive (HDD) depends on the proximity of data to the read/write heads.
High performance workloads
Premium block blob storage accounts are ideal for workloads that require fast and consistent response times and/or have a high number of input/output operations per second (IOPS). Example workloads include:
Interactive workloads. Highly interactive and real-time applications must write data quickly. E-commerce and mapping applications often require instant updates and user feedback. For example, in an e-commerce application, less frequently viewed items are likely not cached. However, they must be instantly displayed to the customer on demand. Interactive editing or multi-player online gaming applications maintain a quality experience by providing real-time updates.
IoT/ streaming analytics. In an IoT scenario, lots of smaller write operations might be pushed to the cloud every second. Large amounts of data might be taken in, aggregated for analysis purposes, and then deleted almost immediately. The high ingestion capabilities of premium block blob storage make it efficient for this type of workload.
Artificial intelligence/machine learning (AI/ML). AI/ML deals with the consumption and processing of different data types like visuals, speech, and text. This high-performance computing type of workload deals with large amounts of data that requires rapid response and efficient ingestion times for data analysis.
Cost effectiveness
Premium block blob storage accounts have a higher storage cost but a lower transaction cost as compared to standard general-purpose v2 accounts. If your applications and workloads execute a large number of transactions, premium block blob storage can be cost-effective, especially if the workload is write-heavy.
In most cases, workloads executing more than 35 to 40 transactions per second per terabyte (TPS/TB) are good candidates for this type of account. For example, if your workload executes 500 million read operations and 100 million write operations in a month, then you can calculate the TPS/TB as follows:
Write transactions per second = 100,000,000 / (30 x 24 x 60 x 60) = 39 (rounded to the nearest whole number)
Read transactions per second = 500,000,000 / (30 x 24 x 60 x 60) = 193 (rounded to the nearest whole number)
Total transactions per second = 193 + 39 = 232
Assuming your account held 5 TB of data on average, the TPS/TB would be 232 / 5 ≈ 46.
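Expressed in code, the same arithmetic looks like this (TpsPerTb is a hypothetical helper for this article's worked example, not part of any Azure SDK):

```java
public class TpsPerTb {
    // Seconds in a 30-day month: 30 x 24 x 60 x 60
    static final long SECONDS_PER_MONTH = 30L * 24 * 60 * 60; // 2,592,000

    // Transactions per second per terabyte, given monthly totals,
    // rounded to whole numbers as in the text.
    static long tpsPerTb(long monthlyReads, long monthlyWrites, double terabytes) {
        long readTps = Math.round((double) monthlyReads / SECONDS_PER_MONTH);   // 193 in the example
        long writeTps = Math.round((double) monthlyWrites / SECONDS_PER_MONTH); // 39 in the example
        return Math.round((readTps + writeTps) / terabytes);
    }

    public static void main(String[] args) {
        // The example from the text: 500M reads, 100M writes, 5 TB average.
        System.out.println(tpsPerTb(500_000_000L, 100_000_000L, 5.0)); // prints 46
    }
}
```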
Note
Prices differ per operation and per region. Use the Azure pricing calculator to compare pricing between standard and premium performance tiers.
The following table demonstrates the cost-effectiveness of premium block blob storage accounts. The numbers in this table are based on an Azure Data Lake Storage Gen2 enabled premium block blob storage account (also referred to as the premium tier for Azure Data Lake Storage). Each column represents the number of transactions in a month. Each row represents the percentage of transactions that are read transactions. Each cell in the table shows the percentage of cost reduction associated with a read transaction percentage and the number of transactions executed.
For example, assuming that your account is in the East US 2 region, the number of transactions with your account exceeds 90M, and 70% of those transactions are read transactions, premium block blob storage accounts are more cost-effective.
Note
If you prefer to evaluate cost effectiveness based on the number of transactions per second for each TB of data, you can use the column headings that appear at the bottom of the table.
Premium scenarios
This section contains real-world examples of how some of our Azure Storage partners use premium block blob storage. Some of them also enable Azure Data Lake Storage Gen2 which introduces a hierarchical file structure that can further enhance transaction performance in certain scenarios.
Tip
If you have an analytics use case, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account.
This section contains the following examples:
- Fast data hydration
- Interactive editing applications
- Data visualization software
- E-commerce businesses
- Interactive analytics
- Data processing pipelines
- Internet of Things (IoT)
- Machine Learning
- Real-time streaming analytics
Fast data hydration
Premium block blob storage can help you hydrate or bring up your environment quickly. In industries such as banking, certain regulatory requirements might require companies to regularly tear down their environments, and then bring them back up from scratch. The data used to hydrate their environment must load quickly.
Some of our partners store a copy of their MongoDB instance each week to a premium block blob storage account. The system is then torn down. To get the system back online quickly again, the latest copy of the MongoDB instance is read and loaded. For audit purposes, previous copies are maintained in cloud storage for a period of time.
Interactive editing applications
In applications where multiple users edit the same content, the speed of updates becomes critical for a smooth user experience.
Some of our partners develop video editing software. Any update that a user makes to a video is immediately visible to other users. Users can focus on their tasks instead of waiting for content updates to appear. The low latencies associated with premium block blob storage helps to create this seamless and collaborative experience.
Data visualization software
Users can be far more productive with data visualization software if rendering time is quick.
We've seen companies in the mapping industry use mapping editors to detect issues with maps. These editors use data that is generated from customer Global Positioning System (GPS) data. To create map overlays, the editing software renders small sections of a map by quickly performing key lookups.
In one case, before using premium block blob storage, a partner used HBase clusters backed by standard general-purpose v2 storage. However, it became expensive to keep large clusters running all of the time. This partner decided to move away from this architecture, and instead used premium block blob storage for fast key lookups. To create overlays, they used REST APIs to render tiles corresponding to GPS coordinates. The premium block blob storage account provided them with a cost-effective solution, and latencies were far more predictable.
E-commerce businesses
In addition to supporting their customer facing stores, e-commerce businesses might also provide data warehousing and analytics solutions to internal teams. We've seen partners use premium block blob storage accounts to support the low latency requirements by these data warehousing and analytics solutions. In one case, a catalog team maintains a data warehousing application for data that pertains to offers, pricing, ship methods, suppliers, inventory, and logistics. Information is queried, scanned, extracted, and mined for multiple use cases. The team runs analytics on this data to provide various merchandising teams with relevant insights and information.
Interactive analytics
In almost every industry, there is a need for enterprises to query and analyze their data interactively.
Data scientists, analysts, and developers can derive time-sensitive insights faster by running queries on data that is stored in a premium block blob storage account. Executives can load their dashboards much more quickly when the data that appears in those dashboards comes from a premium block blob storage account instead of a standard general-purpose v2 account.
In one scenario, analysts needed to analyze telemetry data from millions of devices quickly to better understand how their products are used, and to make product release decisions. Storing data in SQL databases is expensive. To reduce cost, and to increase queryable surface area, they used an Azure Data Lake Storage Gen2 enabled premium block blob storage account and performed computation in Presto and Spark to produce insights from hive tables. This way, even rarely accessed data has all of the same power of compute as frequently accessed data.
To close the gap between SQL's subsecond performance and Presto's input/output operations per second (IOPS) against external storage, consistency and speed are critical, especially when dealing with small optimized row columnar (ORC) files. A premium block blob storage account, when used with Data Lake Storage Gen2, has repeatedly demonstrated a 3X performance improvement over a standard general-purpose v2 account in this scenario. Queries executed fast enough to feel local to the compute machine.
In another case, a partner stores and queries logs that are generated from their security solution. The logs are generated by using Databricks, and then stored in a Data Lake Storage Gen2 enabled premium block blob storage account. End users query and search this data by using Azure Data Explorer. They chose this type of account to increase stability and increase the performance of interactive queries. They also set the lifecycle management Delete Action policy to a few days, which helps to reduce costs. This policy prevents them from keeping the data forever; instead, data is deleted once it is no longer needed.
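A lifecycle management rule of that shape might look like the following (an illustrative fragment — the rule name and prefix are made up, and the 3-day retention is only an example):

```json
{
  "rules": [
    {
      "name": "delete-stale-logs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 3 }
          }
        }
      }
    }
  ]
}
```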
Data processing pipelines
In almost every industry, there is a need for enterprises to process data. Raw data from multiple sources needs to be cleansed and processed so that it becomes useful for downstream consumption in tools such as data dashboards that help users make decisions.
While speed of processing is not always the top concern when processing data, some industries require it. For example, companies in the financial services industry often need to process data reliably and in the quickest way possible. To detect fraud, those companies must process inputs from various sources, identify risks to their customers, and take swift action.
In some cases, we've seen partners use multiple standard storage accounts to store data from various sources. Some of this data is then moved to a Data Lake Storage enabled premium block blob storage account where a data processing application frequently reads newly arriving data. Directory listing calls in this account were much faster and performed much more consistently than they would otherwise perform in a standard general-purpose v2 account. The speed and consistency offered by the account ensured that new data was always made available to downstream processing systems as quickly as possible. This helped them catch and act upon potential security risks promptly.
Internet of Things (IoT)
IoT has become a significant part of our daily lives. IoT is used to track car movements, control lights, and monitor our health. It also has industrial applications. For example, companies use IoT to enable their smart factory projects, improve agricultural output, and perform predictive maintenance on oil rigs. Premium block blob storage accounts add significant value to these scenarios.
We have partners in the mining industry. They use a Data Lake Storage Gen2 enabled premium block blob storage account along with HDInsight (HBase) to ingest time series sensor data from multiple mining equipment types, with a very taxing load profile. Premium block blob storage has helped to satisfy their need for high sample rate ingestion. It's also cost effective, because premium block blob storage is cost optimized for workloads that perform a large number of write transactions, and this workload generates a large number of small write transactions (in the tens of thousands per second).
Machine Learning
In many cases, a lot of data has to be processed to train a machine learning model. To complete this processing, compute machines must run for a long time. Compared to storage costs, compute costs usually account for a much larger percentage of your bill, so reducing the amount of time that your compute machines run can lead to significant savings. The low latency that you get by using premium block blob storage can significantly reduce this time and your bill.
We have partners that deploy data processing pipelines to Spark clusters where they run machine learning training and inference. They store Spark tables (Parquet files) and checkpoints in a premium block blob storage account. Spark checkpoints can create a huge number of nested files and folders. Their directory listing operations are fast because they combined the low latency of a premium block blob storage account with the hierarchical data structure made available with Data Lake Storage Gen2.
We also have partners in the semiconductor industry with use cases that intersect IoT and machine learning. IoT devices attached to machines in the manufacturing plant take images of semiconductor wafers and send those to their account. Using deep learning inference, the system can inform the on-premises machines if there is an issue with production and whether an action needs to be taken. They must be able to load and process images quickly and reliably. Using a Data Lake Storage Gen2 enabled premium block blob storage account helps to make this possible.
Real-time streaming analytics
To support interactive analytics in near real time, a system must ingest and process large amounts of data, and then make that data available to downstream systems. Using a Data Lake Storage Gen2 enabled premium block blob storage account is perfect for these types of scenarios.
Companies in the media and entertainment industry can generate a large number of logs and telemetry data in a short amount of time as they broadcast an event. Some of our partners rely on multiple content delivery network (CDN) partners for streaming. They must make near real-time decisions about which CDN partners to allocate traffic to. Therefore, data needs to be available for querying a few seconds after it is ingested. To facilitate this quick decision making, they use data stored within premium block blob storage, and process that data in Azure Data Explorer (ADX). All of the telemetry that is uploaded to storage is transformed in ADX, where it can be stored in a familiar format that operators and executives can query quickly and reliably.
Data is uploaded into multiple premium performance Blob Storage accounts. Each account is connected to an Event Grid and Event Hub resource. ADX retrieves the data from Blob Storage and performs any required transformations to normalize the data (for example, decompressing zip files or converting from JSON to CSV). Then, the data is made available for query through ADX and dashboards displayed in Grafana. Grafana dashboards are used by operators, executives, and other users. The customer retains their original logs in premium performance storage, or they copy them to a general-purpose v2 storage account where they can be stored in the hot or cool access tier for long-term retention and future analysis.
Getting started with premium
First, check to make sure your favorite Blob Storage features are compatible with premium block blob storage accounts, then create the account.
Note
You can't convert an existing standard general-purpose v2 storage account to a premium block blob storage account. To migrate to a premium block blob storage account, you must create a premium block blob storage account, and migrate the data to the new account.
Check for Blob Storage feature compatibility.
Create a new Storage account
To create a premium block blob storage account, make sure to choose the Premium performance option and the Block blobs account type as you create the account.
Note
If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account. To unlock Azure Data Lake Storage Gen2 capabilities, enable the Hierarchical namespace setting in the Advanced tab of the Create storage account page.
The following image shows this setting in the Create storage account page.
For complete guidance, see Create a storage account.
See also
Source: https://docs.microsoft.com/en-in/azure/storage/blobs/storage-blob-block-blob-premium
randyn0611
16-11-2017
We make use of the xtype tags field on a page dialog and the users are able to add new tags to the default namespace via the TagInputField. Is it possible to disable creation of new tags in that widget and make it select only? Or will it be necessary to listen for an event to stop it? I've read through the documentation on the widget and nothing jumped out at me. Any pointers would be greatly appreciated!
Thanks,
Randy Nolen
OK - I added a handler for the 'addtag' event and that seems to be the right place for what I want to do, but how do I interrupt the event flow? So far I've not found a way to stop the add from happening.
Randy
Source: https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/restrict-user-from-adding-new-tags-via-taginputfield/td-p/251720
The monologues of the player add a lot of charm, really clever adventure game, love the planet based mechanics.
RoboNutria
Game
Recent community posts
Very good concept to play around, enjoyed it playing on my phone. Touch controls work fine, the tutorial really helps a lot. The game requires some patience, but it's really good :D
Love the simple and stylish retro pixel art, feels futuristic. Gameplay is very entertaining, gotta watch each steps. Really like the alien sprites.
Simple, addicting and tough, scoring 5 was a challenge for me :P
Played on my phone, for some reason the desktop.jar doesn't get past the grayt apps logo.
I'll have to check that stealth system you made with Ashley, looks like a lot of work was done in there!
Very polished game, died at level 2. The gameplay is very smooth, simple turn based is quite fun. Excellent work with the sounds and VA.
I just uploaded RC3 with an important bug fix that also made the last stage almost unplayable
Thanks a lot for trying the game and the feedback.
@kyperbelt, I also fixed a bug related to collision handling, that should be fixed now, again, thanks for the feedback :)
Tell me it's a platypus spaceship! I cannot unsee that.
Looks good, enjoyed watching the earth dramatically explode after crashing into a planet. Maybe it's a little slow paced but it's fair enough
Buen juego, saludos :)
Really fun puzzle game and easy to get into. Gave me some puyo puyo vibe :P
Perfectly played on the browser.
As @Darren mentioned, the desktop jar lacks the manifest: "no main manifest attribute, in desktop-1.0.jar", maybe you can build using provided gradle: gradlew desktop:dist
On android plays and looks great, motion controls make it feel really natural. Can't dislike pong :v
Good simple concept, neat animations.
Played on an old samsung galaxy perfectly, kind of hard to get the controls with touch but it adds to the fun!
Hope you guys continue on it :)
Excellent game! Feels really polished, and very ambitious. Sort of hard to get into at first, but it really motivates you to continue playing since it doesn't take that much to restart.
UI is clean and not hard at all to understand. The audio fits the atmosphere incredibly. Achievements and statistics are very helpful to see what you can do.
Thanks for all the comments :)
@kyperbelt: Thanks a lot for the report, I will check that out, though the code is very bare bones in many places and it's probably asking for a brutal refactoring hah
@mmachida: That's a great idea, yes. We have a lot in mind to do with this, but after playing it ourselves we realized the potential of making it very challenge-based; online would be awesome, yes!
Cursor keys and playing tips added!
@devilbuddy: Thanks for the comment.
@apricotjam: Wow, that's great to hear :D. I agree with you, I don't think it's unfair; if it had more easy levels to give the player a decent learning curve for the mechanics, it wouldn't alienate lots of players.
We will make a full game out of this idea and these mechanics after the jam is over, so seriously, I'm very happy for all the feedback. I will continue to check all your entries ;)
Had to go easy mode to get past level 4 with the spawning red guys. Good mix of enemy patterns.
Upgrades are great, but wish I could have upgraded the ship speed more lol. The data logger was a nice feature.
Nice shmup entry :)
Never mess with a space brocoli.
The most fearsome ingredient to fetch for your space salad.
Good entry :)
Feels very classy, was a little weird to play with mouse for me, but love the pixel art and gameplay is very clean.
Good gaem :)
Very clever and entertaining mechanics. Love the awesome effects, runs perfectly on my old laptop.
Great game :D
This is really fun to play and looks awesome, I'm gonna have to check how you did that screen effect.
Nice use of lights, really makes you feel the atmosphere of being stranded. Good use of physics, tried to escape with a ship stuck on my wing hah, good entry :)
I added some playing tips on the game page, hope it helps :)...
We really appreciate the feedback! It is very valuable.
Guys. play with the lights off!
Creepy, excellent atmosphere was accomplished here, the lighting does great work, and that thing really scared me :P
Saludos de Uruguay!
Loved the characters and the "voice acting" haha good to see some humor
Clever use of the perspective camera. Was able to finish after getting killed a lot by stupid things, but nice twist.
Great submission :)
Wow haha! Thanks for the comments guys. The audio department couldn't get in time, l got confused with the deadline and submitted 9 minutes, 5 seconds before the deadline :P
I know what you did there mmachida, keep it a secret haha. Will play your game tomorrow with time!
That sounds very troublesome way to fix it... Have you tried using alpha transparency instead of a color? It has been working fine for me that way.
@Ben
Are you using Tiled maps? This is common if you're using a spritesheet that has no padding between each tile. If you modify your image to add a 1px padding between each tile and import it on tiled as such (telling it has a 1px pad) it should display fine.
In case you already made some tiles and don't want to manually rearrange it, I've used this gimp plugin in the past to automatically add padding (or do other modifications) to spritesheets:
Glad to see people using LibGDX :D
It's quite simple actually, you need to take advantage of the Viewports, more specifically FitViewport.
Documentation:
Let me show you a snippet for a Screen class which will always maintain the 64x64 aspect:
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Screen;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.viewport.FitViewport;
import com.badlogic.gdx.utils.viewport.Viewport;
public class LowResTest implements Screen {
    private Color color = Color.BLACK;
    private Viewport gameView;
    private SpriteBatch batch;
    private Texture texture;
    @Override
    public void show() {
        gameView = new FitViewport(64, 64); // This is the internal resolution your game will display regardless of window size
        batch = new SpriteBatch();
        texture = new Texture(Gdx.files.internal("badlogic.jpg"));
    }
    @Override
    public void render(float delta) {
        Gdx.gl.glClearColor(color.r, color.g, color.b, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        gameView.apply(); // we set this as the current viewport (we could have more viewports, for example a hudView)
        OrthographicCamera cam = (OrthographicCamera) gameView.getCamera();
        batch.setProjectionMatrix(cam.combined); // Don't forget this else your viewport won't be used when rendering
        batch.begin();
        batch.draw(texture, 0, 0);
        batch.end();
    }
    @Override
    public void resize(int width, int height) {
        gameView.update(width, height); // also don't forget this
    }
    // The remaining Screen methods, left empty for this example
    @Override public void pause() { }
    @Override public void resume() { }
    @Override public void hide() { }
    @Override public void dispose() {
        batch.dispose();
        texture.dispose();
    }
}
Hope this helps!
And good jamming to all! I still have to spend some day to brainstorm ideas :P
Days 7-8.5
Intense last weekend tweaking, programming the missing features, bugs, box2d collision going bananas, etc. I really would love to document more but bed is a priority now. However we finally were able to close the cycle and say: "this is the game"
HUD:
This important thing was missing, but it was added thanks to Marccio
Enemies:
Tweaking Box2D values, changing colliders to CircleShapes to keep enemies from getting stuck, and adding a "stomping goomba" type of enemy :P
Title and Level selection screen:
The title screen is very basic, but the level selection screen was very important: levels unlock when you finish them, and each shows a par time and your best time. Level data is also saved to JSON, so it keeps the unlocking progress and the time scores.
Gameplay life cycle:
The player has 3 lives; if he gets touched by any enemy he loses one, but if he falls in lava or acid it's game over (which means a level restart). The level is finished by collecting all the chibi ice creams and getting back to the ship:
Firing cones:
Almost forgot! The overlord can fire some happy cones that will make the enemies fall back by disabling their linear velocity and applying a force toward their center.
Left out due to time:
Of what I planned, only 2 of the 10 or more types of enemies were implemented, and none of them fire at you yet.
No fancy effects, particles, expensive animations or backgrounds.
Luckily the player mechanics and the levels we designed are enough to call the thing quite a fun experience, which I like very much :P
Music and sound will come in a few hours...
Some stuff I learned:
Interactive games like side-scrollers can get tough when using box2d if you're not very used to it and want to tweak it a lot to get the desired experience. Issues like tunneling, enemies not colliding with bumpers to turn around, etc. Our CollisionHandler class turned into a spaghetti monster at this point.
In the end it was an excellent experience for an amateur like me in gamedev and libgdx; despite all the hard work it took (and is still taking), it's really awesome to see your game working. Now all that remains is to design a bunch of extra levels.
Good luck to all those who didn't finish their games yet!
Days 3-6
After taking some days to chill on January and then getting bombarded at work, I got back into jamming this week.
Art style:
Considering that time was precious, we decided to go back to a simple style of 8x8 sprites and tiles, even with 0 experience on doing this, Marccio made all the sprites for the ice cream overlord, the chibi ice creams (the ones you have to rescue) and some slug enemy. And I made very crappy tiles :P
Load game entities from Tiled:
I already had the box2d collision data being parsed, but now all the game entities are defined on a layer and are loaded at runtime for the corresponding Level and created in the game: Player, enemies, chibi ice creams.
Adding some behavior to enemies (AI):
I took a while thinking about this, but I decided to make something easy to extend for the enemy behavior, be it walking, firing at the player, etc.
It works by giving the Enemy entity an Array of AiBehavior implementations, each of which does something different with the enemy, so it's easy to add or remove them.
For now only a WalkBehavior is implemented.
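To make that concrete, here is a rough plain-Java sketch of the idea (class and method names are my own guesses at the shape, not the project's actual code, and I've used a java.util.List where libgdx would use its Array):

```java
import java.util.ArrayList;
import java.util.List;

// Each behavior gets a chance to act on the enemy every frame,
// so adding or removing behaviors is just a list operation.
interface AiBehavior {
    void update(Enemy enemy, float delta);
}

class WalkBehavior implements AiBehavior {
    public void update(Enemy enemy, float delta) {
        enemy.x += enemy.walkSpeed * delta; // walk in the current direction
    }
}

class Enemy {
    float x;
    float walkSpeed = 2f;
    final List<AiBehavior> behaviors = new ArrayList<AiBehavior>();

    void update(float delta) {
        for (AiBehavior b : behaviors) {
            b.update(this, delta);
        }
    }
}
```

A firing behavior or anything else would slot in the same way, without touching the Enemy class itself.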
Rescuing the ice creams:
An important mechanic that defines whether you finish a level: these blue guys are happily waiting there for you to just get them, make a happy face and then disappear. This was done by Marccio.
Misc:
Improved the player mechanics to feel more intuitive and agile, thanks to Marccio testing the game.
Box2D collision filtering was added by using categories and masks, kinda messy but got it working fine at least.
Added acid/lava pits that will instantly kill the player (respawn and restart the level).
Ice Cream Overlord - Github
Day 2 - Wall climbing
A mechanic I decided to add is wall climbing, sort of Megaman X style, but while also being affected by gravity and the falling velocity of your ice cream mass.
I'm going to briefly explain how I did it.
Sticking to walls:
Added 2 fixture sensors to the sides of the player; only 1 is "active" at a time, depending on whether he's facing left or right (I simply destroy and create the fixture). When it is against a wall and the player is not on the ground, you can stick by pressing toward the wall. When the conditions are met, a force is applied against the wall making the player stick, but he is also affected by gravity and his velocity, so if you're falling fast you won't be able to hang on.
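Stripped of the Box2D specifics, the stick rule boils down to a small predicate. This is just an illustrative model (the names and the fall-speed threshold are made up, and the real version applies forces through the physics engine):

```java
// The player may stick to a wall only when airborne, pressing toward a
// wall that the active side sensor is touching, and not falling too fast.
class WallStick {
    static boolean canStick(boolean onGround, boolean sensorTouchingWall,
                            boolean pressingTowardWall, float fallSpeed,
                            float maxHangFallSpeed) {
        return !onGround
                && sensorTouchingWall
                && pressingTowardWall
                && fallSpeed <= maxHangFallSpeed;
    }
}
```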
Wall jumping:
Another force that I apply to bounce off the wall and a little upward, so while in the air you can maneuver. Had to refactor input logic to use an adapter; less spaghetti, less mindfuck...
Misc:
Added the sprite that Marccio made (thank you!) with sliding and idle animation, probably not final yet.
Next I'll be sketching some enemies to use, lots of things to do in order to cook this thing up, but it's nice to be learning while doing it :)
Github - Ice Cream Overlord
Day 1 - Into coding.
Spent the afternoon pulling together the basics to have the player moving and jumping around a level.
While toying with physics I decided to make the player slide instead of walk/run, since it's an ice cream :D
Creating levels:
Using Tiled map editor and libgdx-utils (by dermetfan) to parse box2d data into the game so I can easily edit collisions or any type of body/fixture within Tiled.
Game entities:
Extending Box2DSprite and using a simple interface to update and to communicate collisions.
Collisions:
Thank you box2d contact listener
Player stuff:
For jumping I'm using a foot fixture as a trigger to detect if the player is grounded; I got away from ray casting this time.
Decided to add some states so it will be easier to handle animations and other stuff in the future.
Wasted some time creating a HUD class, had to learn about scene2d stuff, damn tables :P
All is going as planned, soon my mate will get into the artistic side and I'm also tempted to try Spriter.
Linking the repo: Github - Ice Cream Overlord
Happy jamming to you all! And merry Christmas! (or intense coding night while totally drunk)
Looking great man. And yeah, if this is your first time, Box2D is not in the top 5 friendliest things to learn. I still hit my head against it quite a lot, but it sure helps a lot.
A bit late to the party, but not too late hopefully!
I've been brainstorming for some hours (even getting distracted at work) and ended up with a logical and realistic concept.
So let's begin with the story...
After a few thousand years the Earth faced extinction due to a global ice age that ended up freezing the entire planet, suddenly a new life form that had been dormant the whole time awakens and uses the remaining ships and technology to escape towards space and find warmer planets (but not too warm) to live in, the ice cream people...
Not much time goes by until aliens from other galaxies find out about this so-called "ice cream" and how tasty they are, so they decide to invade and bring them under control. Luckily, they were not alone in space and were not an unusual life form.
Lords from a far away galaxy hear about the situation and the Overlord of the whole ice cream people decides to go on a mission to rescue and conquer back the homes of their friends.
What about the gameplay?
This will be a 2D platformer mixed with shooting action, but more puzzle oriented. The Overlord of ice cream has his arsenal of cones at his disposal; however, these cones are not as powerful as bullets and can't kill enemies. They will be used to push enemies away, making them fall into pits, and even to deflect enemy bullets. The player will have to rescue a certain amount of ice cream people to win and proceed to the next level.
So many ideas I could be adding... but hey! It's a jam, can't be too ambitious, and I already feel this is kinda crazy!
I will be working to bake all that is needed and if time is with me I'll see what it ends up being.
Follow updates on twitter if you're interested:
I'll be updating with in-game stuff here.
Hope some of you guys did read lol, I'll be checking other's dev logs ;)
Good jamming to you all!
A very raw sketch of mine with the player; hopefully I will get help with the art :P
Hi fellow developers, artists, and users of LibGDX.
I'm a software engineer from Uruguay who started getting into gamedev as a hobby not so long ago. Libgdx has been my main tool for learning and practicing game development as I find it quite fun and easy to use.
I probably intend to do something 2D as I've yet to step into the additional axis and, well... I just like retro stuff from the NES and SNES era. About tools within libgdx: Box2D, dermetfan's libgdx-utils, Tiled maps, etc. are the ones I'm used to using.
I'm participating with my teammate, who has a better command of art creation than I do; however, we don't have fixed roles, and I love doing ugly bizarre sketches sometimes. We're in for the fun!
Nice meeting ya all! Let's create some crazy stuff.
In this chapter, we define MyBetterMap, a better implementation of the Map interface than MyLinearMap, and introduce hashing, which makes MyBetterMap more efficient.
To improve the performance of MyLinearMap, we’ll write a new
class, called MyBetterMap, that contains a collection of
MyLinearMap objects. It divides the keys among the embedded
maps, so the number of entries in each map is smaller, which speeds up
findEntry and the methods that depend on it.
Here’s the beginning of the class definition:
public class MyBetterMap<K, V> implements Map<K, V> {

    protected List<MyLinearMap<K, V>> maps;

    public MyBetterMap(int k) {
        makeMaps(k);
    }

    protected void makeMaps(int k) {
        maps = new ArrayList<MyLinearMap<K, V>>(k);
        for (int i=0; i<k; i++) {
            maps.add(new MyLinearMap<K, V>());
        }
    }
}
The instance variable, maps, is a collection of
MyLinearMap objects. The constructor takes a parameter,
k, that determines how many maps to use, at least initially.
Then makeMaps creates the embedded maps and stores them in an
ArrayList.
Now, the key to making this work is that we need some way to look at a
key and decide which of the embedded maps it should go into. When we
put a new key, we choose one of the maps; when we get
the same key, we have to remember where we put it.
One possibility is to choose one of the sub-maps at random and keep
track of where we put each key. But how should we keep track? It might
seem like we could use a Map to look up the key and find the
right sub-map, but the whole point of the exercise is to write an
efficient implementation of a Map. We can’t assume we already
have one.
A better approach is to use a hash function, which takes an
Object, any Object, and returns an integer called a
hash code. Importantly, if it sees the same Object
more than once, it always returns the same hash code. That way, if we
use the hash code to store a key, we’ll get the same hash code when we
look it up.
In Java, every Object provides a method called
hashCode that computes a hash function. The implementation of
this method is different for different objects; we’ll see an example
soon.
Here’s a helper method that chooses the right sub-map for a
given key:
protected MyLinearMap<K, V> chooseMap(Object key) {
    int index = 0;
    if (key != null) {
        index = Math.abs(key.hashCode()) % maps.size();
    }
    return maps.get(index);
}
If key is null, we choose the sub-map with index 0,
arbitrarily. Otherwise we use hashCode to get an integer,
apply Math.abs to make sure it is non-negative,
then use the remainder operator, %, which guarantees that the
result is between 0 and maps.size()-1. So index is
always a valid index into maps. Then chooseMap returns
a reference to the map it chose.
We use chooseMap in both put and get, so when
we look up a key, we get the same map we chose when we added the key. At
least, we should — I’ll explain a little later why this might not
work.
Here’s my implementation of put and get:
public V put(K key, V value) {
    MyLinearMap<K, V> map = chooseMap(key);
    return map.put(key, value);
}

public V get(Object key) {
    MyLinearMap<K, V> map = chooseMap(key);
    return map.get(key);
}
Pretty simple, right? In both methods, we use chooseMap to find
the right sub-map and then invoke a method on the sub-map.
That’s how it works; now let’s think about performance.
If there are n entries split up among k sub-maps,
there will be n/k entries per map, on average. When we look up
a key, we have to compute its hash code, which takes some time, then we
search the corresponding sub-map.
Because the entry lists in
MyBetterMap are k times shorter than the entry list in
MyLinearMap, we expect the search to be k times
faster. But the run time is still proportional to n, so
MyBetterMap is still linear. In the next exercise, you’ll see how we
can fix that.
The fundamental requirement for a hash function is that the same object
should produce the same hash code every time. For immutable objects,
that’s relatively easy. For objects with mutable state, we have to think
harder.
As an example of an immutable object, I’ll define a class called
SillyString that encapsulates a String:
public class SillyString {
    private final String innerString;

    public SillyString(String innerString) {
        this.innerString = innerString;
    }

    public String toString() {
        return innerString;
    }
This class is not very useful, which is why it’s called
SillyString, but I’ll use it to show how a class can define
its own hash function:
    @Override
    public boolean equals(Object other) {
        return this.toString().equals(other.toString());
    }

    @Override
    public int hashCode() {
        int total = 0;
        for (int i=0; i<innerString.length(); i++) {
            total += innerString.charAt(i);
        }
        return total;
    }
Notice that SillyString overrides both equals and
hashCode. This is important. In order to work properly,
equals has to be consistent with hashCode, which means
that if two objects are considered equal — that is, equals
returns true — they should have the same hash code. But this
requirement only works one way; if two objects have the same hash code,
they don’t necessarily have to be equal.
equals works by invoking toString, which returns
innerString. So two SillyString objects are equal if
their innerString instance variables are equal.
hashCode works by iterating through the characters in the
String and adding them up. When you add a character to an int,
Java converts the character to an integer using its Unicode code point.
You don’t need to know anything about Unicode to understand this
example, but if you are curious, you can read more at.
This hash function satisfies the requirement: if two
SillyString objects contain embedded strings that are equal,
they will get the same hash code.
This works correctly, but it might not yield good performance,
because it returns the same hash code for many different strings. If two
strings contain the same letters in any order, they will have the same
hash code. And even if they don’t contain the same letters, they might
yield the same total, like "ac" and "bb".
"ac"
"bb"
If many objects have the same hash code, they end up in the same
sub-map. If some sub-maps have more entries than others, the speedup
when we have k maps might be much less than k. So one of the goals
of a hash function is to be uniform; that is, it should be equally
likely to produce any value in the range. You can read more about
designing good hash functions at.
Strings are immutable, and SillyString is also immutable
because innerString is declared to be final. Once you
create a SillyString, you can’t make innerString refer
to a different String, and you can’t modify the String it
refers to. Therefore, it will always have the same hash code.
But let’s see what happens with a mutable object. Here’s a definition
for SillyArray, which is identical to SillyString,
except that it uses an array of characters instead of a String:
public class SillyArray {
    private final char[] array;

    public SillyArray(char[] array) {
        this.array = array;
    }

    public String toString() {
        return Arrays.toString(array);
    }

    @Override
    public boolean equals(Object other) {
        return this.toString().equals(other.toString());
    }

    @Override
    public int hashCode() {
        int total = 0;
        for (int i=0; i<array.length; i++) {
            total += array[i];
        }
        System.out.println(total);
        return total;
    }
SillyArray also provides setChar, which makes it
possible to modify the characters in the array:
public void setChar(int i, char c) {
    this.array[i] = c;
}
Now suppose we create a SillyArray and add it to a map:
SillyArray array1 = new SillyArray("Word1".toCharArray());
map.put(array1, 1);
The hash code for this array is 461. Now if we modify the contents of
the array and then try to look it up, like this:
array1.setChar(0, 'C');
Integer value = map.get(array1);
the hash code after the mutation is 441. With a different hash code,
there’s a good chance we’ll go looking in the wrong sub-map. In that
case, we won’t find the key, even though it is in the map. And that’s
bad.
In general, it is dangerous to use mutable objects as keys in data
structures that use hashing, which includes MyBetterMap and
HashMap. If you can guarantee that the keys won’t be modified
while they are in the map, or that any changes won’t affect the hash
code, it might be OK. But it is probably a good idea to avoid it.
In this exercise, you will finish off the implementation of
MyBetterMap. In the repository for this book,
you’ll find the source files for this exercise:
MyLinearMap.java
MyBetterMap.java
MyHashMap.java
MyLinearMapTest.java
MyBetterMapTest.java
MyHashMapTest.java
Profiler.java
ProfileMapPut.java
As usual, you should run ant build to compile the source
files. Then run ant MyBetterMapTest. Several tests should fail,
because you have some work to do!
Review the implementation of put and get from the
previous chapter. Then fill in the body of containsKey. HINT:
use chooseMap. Run ant MyBetterMapTest again and confirm
that testContainsKey passes.
Fill in the body of containsValue. HINT: don’t use
chooseMap. Run ant MyBetterMapTest again and confirm
that testContainsValue passes. Notice that we have to do more
work to find a value than to find a key.
Like put and get, this implementation of
containsKey is linear, because it has to search one of the
embedded sub-maps. In the next chapter, we’ll see how we can
improve this implementation even more.
Spring Integration provides support for Syndication via Feed Adapters
As we know, web syndication is a form of syndication where material such as news items and press releases that is available on one website is also made available via web feeds such as RSS, Atom, etc.
Spring Integration provides support for web syndication via a feed adapter, which comes with a convenient namespace-based configuration. To configure the feed namespace, include the following elements in the header of your XML configuration file:
xmlns:int-feed="http://www.springframework.org/schema/integration/feed"
xsi:schemaLocation="http://www.springframework.org/schema/integration/feed
    http://www.springframework.org/schema/integration/feed/spring-integration-feed.xsd"
The only adapter that is really needed to provide support for retrieving feeds is an inbound channel adapter, which allows you to subscribe to a particular URL. Below is the configuration for such an adapter (the feed URL here is a placeholder):
<int-feed:inbound-channel-adapter id="feedAdapter"
        channel="feedChannel"
        url="http://example.com/feed.rss">
    <int:poller fixed-rate="10000" max-messages-per-poll="100"/>
</int-feed:inbound-channel-adapter>
In the above configuration we are subscribing to a URL identified by
url attribute.
As news items are retrieved they will be converted to a Message and sent to a channel identified by
channel attribute.
The payload of each such message will be a
com.sun.syndication.feed.synd.SyndEntry, which encapsulates
various data (e.g., content, dates, authors) about a news item.
You can also see that the inbound feed channel adapter is a polling consumer, which means you have to
provide a poller configuration. However, one important thing you must understand about the feed adapter is that its inner workings
are slightly different from any other polling consumer. When the inbound feed adapter is started, it does the first poll and
receives a
com.sun.syndication.feed.synd.SyndFeed, which is an object that contains multiple
SyndEntry objects. Each entry is stored in the local entry queue and released based on
the value of the
max-messages-per-poll attribute, where each Message will contain a single entry.
If the entry queue becomes empty during retrieval, the adapter will attempt to update
the feed, populating the queue with more entries (SyndEntry) if available; otherwise the next attempt to poll the feed is
determined by the trigger of the poller (e.g., every 10 seconds in the above configuration).
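The queue-and-refill behavior described above can be sketched roughly like this (plain Java for illustration; a list of strings stands in for the SyndEntry objects, and this is not the adapter's actual code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Entries from a fetched feed sit in a local queue; each poll releases at
// most maxMessagesPerPoll of them, refetching only when the queue is empty.
class FeedPollerSketch {
    private final Deque<String> queue = new ArrayDeque<String>();
    private final int maxMessagesPerPoll;

    FeedPollerSketch(int maxMessagesPerPoll) {
        this.maxMessagesPerPoll = maxMessagesPerPoll;
    }

    // fetchedEntries stands in for an actual feed retrieval.
    int poll(List<String> fetchedEntries) {
        if (queue.isEmpty()) {
            queue.addAll(fetchedEntries); // update the feed
        }
        int released = 0;
        while (released < maxMessagesPerPoll && !queue.isEmpty()) {
            queue.poll(); // one Message per entry
            released++;
        }
        return released;
    }
}
```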
Duplicate Entries
Polling for a Feed might result in the entries that have already been processed ("I already read that news item, why are you showing it to me again?").
Spring Integration provides a convenient mechanism to eliminate the need to worry about duplicate entries.
Each feed entry has a publish date field. Every time a new Message is generated and sent,
Spring Integration will store the value of the publish date in the instance of the
org.springframework.integration.store.MetadataStore which is a strategy interface designed to store various
types of meta-data (e.g., publish date of the last feed entry that has been processed) to help components such as Feed to deal with
duplicates.
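The dedup idea can be sketched as follows (a deliberate simplification: the real MetadataStore is a general key-value strategy interface, and here publish dates are modeled as plain longs):

```java
import java.util.ArrayList;
import java.util.List;

// Remember the publish date of the last entry emitted, and drop anything
// published at or before it, since that entry was already processed.
class FeedDedupSketch {
    private long lastPublished = Long.MIN_VALUE; // plays the meta-data store's role

    List<Long> accept(List<Long> publishDates) {
        List<Long> fresh = new ArrayList<Long>();
        for (long d : publishDates) {
            if (d > lastPublished) {
                fresh.add(d);
                lastPublished = d;
            }
        }
        return fresh;
    }
}
```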
The default rule for locating this meta-data store is as follows: Spring Integration looks for a bean named metadataStore in the Application Context; if none is found, a volatile in-memory store is used. This means that upon restart you may end up with
duplicate entries. If you need to persist meta-data between Application Context restarts, you may use
PropertiesPersistingMetadataStore, which is backed by a properties file:
<bean id="metadataStore"
      class="org.springframework.integration.store.PropertiesPersistingMetadataStore"/>
In part one of ApprovalTests and MVC Views, Getting Started, I covered the basic mechanics of getting an MVC View under test. In part two, Working with Data, I showed how you can create seams in your MVC project that allow you to bypass your repository and inject test data into the View. These tests were more interesting than the tests in part one, and once you know where and how to create your seams for testing MVC views, you can really get into the groove covering your view with tests. In this article we'll look at running our tests on a build server. Trying to run the tests on the build server will reveal one of the drawbacks of our dependency on the webserver. But we'll see how to overcome it.
To review, we started this example with File->New Project and our goal is to create a set of tests that run consistently under Visual Studio, NCrunch, and CC.NET. All of the source code is available as a mercurial repository on bitbucket.org. If you haven’t already watched it, take 15 minutes to check out Llewellyn Falco’s video: Using ApprovalTests in .Net 20 Asp.Mvc Views.
Automated View Testing
You can build upon the techniques described in the previous two articles to build tests appropriate for your project. And you toddle along happily for a while writing your tests and code, until you hit the next speed bump when you try to send your code up to the build server. After reading this article, you’ll know what to do to get your tests running on the server, and I hope it won’t seem like a speed bump to you at all. For now, lets just charge ahead like we don’t know any better and send our code to the build server.
I’m using CC.NET and other than the fact that I’m pretty sure its raising my electric bill my only complaint is that the configuration system is a little clunky. Maybe there’s better stuff out there but for now CC.NET is meeting my needs. I configured the build server to keep an eye on MVCTestSite’s repository on bitbucket.org and fetched the test results once it was done. All the tests passed except for the view tests. Here’s our error:
Test method MVCTestSite.Tests.Views.HomeViewsTest.TestAbout
Looks familiar. It’s the second gotcha all over again, the web server is not running. On the build server we don’t have the option of clicking “Start Without Debugging” in Visual Studio. Even if we could, we want our build server to be completely automated. If the build process requires us to do something before we can test, it’s broken.
Lucky for us, CassiniDev is a fantastic open source project well suited for solving this problem: a standalone, self-hostable version of the Cassini web server.
Sounds like just the thing we need. CassiniDev can take the place of IIS Express or Cassini, but since we want to run this server from a continuous integration system, we want to use the self-hosting option. We’ll do this by taking a reference to CassiniDev in our test project, starting the server during test setup, and stopping the server during teardown. With CassiniDev to serve our pages, we should be able to get our tests working in all three build environments.
Install CassiniDev using NuGet.
Once we have a reference to CassiniDev, test classes that contain web requests need to extend the CassiniDevServer class. All of our view tests (some real and some imagined) already extend our own MvcTest base class to perform the PortFactory setup, so by extending CassiniDev server with MvcTest, we should be able to add CassiniDev support to all of our View tests at once.
Import the CassiniDev namespace and extend CassiniDevServer:
using CassiniDev;

namespace MVCTestSite.Tests.Views
{
    public class MvcTest : CassiniDevServer
    {
        public MvcTest()
        {
            PortFactory.MvcPort = 61586;
        }
    }
}
Next, add setup and teardown to MvcTest. In MSTest these methods are identified by the TestInitializeAttribute and the TestCleanupAttribute. If we try to use these attributes in a class not marked with the TestClassAttribute, we'll get a warning, so we'll add that attribute as well. You can make your life a little easier by writing the teardown method first. Remember, NCrunch is watching you, and if you write the setup method first, NCrunch will execute it and start the server. I found that this ended up locking some files in the bin directory and I had to restart Visual Studio. So, write the cleanup first.
[TestCleanup]
public void Cleanup()
{
    StopServer();
}
Now write the setup:
[TestInitialize]
public void Setup()
{
    StartServer(@"..\..\..\MVCTestSite", PortFactory.MvcPort, "/", "localhost");
}
The minimum information needed by CassiniDev is the path to the web application folder. With that info in hand, it can find an open port to host the site on and start. Since we are using ApprovalTests, we can't allow CassiniDev to pick its own port, since changing the expected port would cause the tests to fail in a variety of ways. First, VerifyMvcPage won't be hitting the right port, so we'll get a 500 error. CassiniDev provides a method to generate paths with the correct port, and we could use this method along with VerifyUrl to reach our MVC actions. However, this won't work for us because the port number is actually in the HTML we receive with ApprovalTests, so we need that port to remain consistent for comparison with the approved file.
CassiniDev allows us to choose the port using an overload to StartServer, so that’s what we do, and we pass in the port that we earlier assigned to the PortFactory. While we’re here, we should also change the port. This breaks our tests momentarily, but we don’t want to use the same port that IIS Express or Cassini are configured to use, we may have started one of those servers for debugging and CassiniDev cannot start if the port is already occupied.
Lets just pick the next port up:
public MvcTest()
{
    PortFactory.MvcPort = 61587;
}
Now CassiniDev can start and is able to serve our pages, our ApprovalTest fails as expected because we changed our port. File launcher reporter shows us some dramatic differences from what we have seen until now:
All of our styles appear to be missing. By the time the browser loads the received file, CassiniDev has already shut down and is not available to serve the requests for the CSS data. All that we see is raw HTML without any styles.
Although this isn’t the reason the test failed, it’s an important caveat when using CassiniDev and ApprovalTests together. If you are using ApprovalTests to get feedback while making changes, you should probably use a local server like IIS Express, Cassini, or even the WinForms variation of CassiniDev. This way you can see your changes with the styles applied and feel confident that your design is correct. Once you have approved your changes, and you’re using ApprovalTests to detect regression, you can switch back to the self-hosted variant of CassiniDev. You certainly need to switch back to self-hosted CassiniDev before sending your tests to the build server.
With that discussion in mind, lets switch to a DiffReporter and see what’s really changed in this file.
First, we notice that the port is different. That change was expected so we can just use the diff utility to move the new line over to the approved file. But we also see that the welcome message changed because I switched computers after approving this file the first time. The greeting is part of the default MVC template that isn’t really important to me, so I simply delete this block from the approved file:
<div id="logindisplay"> Welcome <strong>Starbuck\Jim Counts</strong>! </div>
I also need to visit the master page and delete the same block. Now, if you require a personalized greeting, then it is actually possible to make a strongly typed master page with a model, and you could introduce seams to control that model. I've also read that it's possible to pass this data in using the ViewData dictionary, but I haven't tried it. Check out SO2821488 if you're interested in going that route.
After updating the master page to match the new expectations in my approved file, I run the tests again, expecting them to pass. Index works now, but About is impacted by the change I made to the master page, so I need to reapprove its output. I run the tests again in Visual Studio and they all pass, but NCrunch still seems unhappy. Looking at the output from NCrunch I can see that it’s getting 500 errors when it tries to run the View tests. It turns out that this line is the culprit:
[TestInitialize]
public void Setup()
{
    StartServer(@"..\..\..\MVCTestSite", PortFactory.MvcPort, "/", "localhost");
}
This relative path to the web application folder is not valid from NCrunch's workspace. Nasty annoyance, or valuable feedback? In my opinion, this is valuable feedback: NCrunch has uncovered an invalid assumption. It's possible that this assumption would also be invalid on the build server. Having it break in NCrunch gives me the opportunity to make changes now, while it's a little more convenient, because I don't have to inspect the failure in the build server logs. The first time I encountered this problem I solved it using a brute-force approach to locate the MVCTestSite folder, but I knew this had a bad code smell, and when I showed it to Llewellyn, he immediately proposed a more elegant solution.
So let’s skip the brute force and look at the elegant solution. Here is what we would like our test setup to look like:
[TestInitialize]
public void Setup()
{
    StartServer(MvcApplication.Directory, PortFactory.MvcPort, "/", "localhost");
}
It would be nice if MvcApplication would just tell us where it is on disk. We can make it do so by leveraging ApprovalUtilities once more. MvcApplication lives in Global.asax.cs, the "code behind" for Global.asax. Go there and import this namespace:
using ApprovalUtilities.Utilities;
Then declare a read-only property called Directory with this implementation:
public static string Directory
{
    get { return PathUtilities.GetDirectoryForCaller(); }
}
As soon as I finish implementing this property, NCrunch turns green. Looks like my tests are working again. I run them in Visual Studio and confirm.
Another Note for NCrunch Users
When using NCrunch and CassiniDev with a statically defined port, your tests may fail with an error like this:
Initialization method MVCTestSite.Tests.Views.HomeViewsTest.Setup threw exception. System.Exception: System.Exception: Port 61587 is in use..
This is caused by a race condition and will only happen if NCrunch and the Visual Studio test runner are executing the tests concurrently. Sometimes NCrunch wins the race and occupies the port, and other times Visual Studio wins the race. Once the winner finishes testing, you can run the loser again and the tests should pass.
For the time being this is just a minor inconvenience if you know what’s going on. Because our port number is part of the approved file, we can’t just use a compiler condition to give a different port to NCrunch, we have to let NCrunch try to use the same port. On the bright side, this is really only a problem on the developer’s workstation. NCrunch does not run on the build server, so the test agent running in that environment should be able to get exclusive access to the port whenever it runs.
Does It Work Now?
Yes, it works now on the build server without any further changes. When I commit the changes, the build server fetches the new code, and turns green. So now we have three green lights: Visual Studio, NCrunch, and CC.NET. We have a mix of tests on the Models, the Views, and the Controllers, some go through the web server and some are Plain-Old-Tests. When we want to test Views with fake data, we need to use seams inside the MVC project. When we want fully automated regression tests, we can use CassiniDev, and when we want to design with feedback, we can use a persistent server, but we have to start it ourselves.
If you’ve read this far, I congratulate you. This was a long read, but hopefully it gets you up to a plateau where you’re no longer confused by MvcApprovals and it all seems simple. If you do have any more questions, leave a comment, or tweet with the hashtag #ApprovalTests. Llewellyn watches that feed closely, and I’m usually lurking on it as well. Better yet, if you get it working, please consider helping other developers by writing about your experiences and sharing online.
Thanks for reading and good luck.
https://ihadthisideaonce.com/tag/cassinidev/
I’ve been playing with the master branch of Tracker and I’m loving it – it looks like it’s finally reached the stage where I won’t just turn it straight off after a fresh install.
It now brings GNOME an RDF store with a SPARQL interface. Powerful joo-joo, but kinda scary if you haven’t seen it before. Most conversations about it lead to words like graphs, triples, ontologies… My eyes start to gloss over… I need to learn by doing. So I’ve been playing with writing some Python wrappers to hide Tracker and just provide a familiar Pythonic interface.
Any object type that Tracker knows about will be available in Python via the wonders of introspection. All properties of the class are available, with docstrings explaining the intent of the property and its type. Obviously you can get, set and delete, and do simple queries. And behind the scenes are SPARQL queries in all their glory. There’s a lot still to do, but enough done that I can synchronise my address book to Tracker with Conduit (see my tracker branch).
So far it looks something like this (but it’s subject to very rapid change):
import tralchemy
from tralchemy.nco import Contact
# add a contact
c = Contact.create()
c.fullname = "John Carr"
c.firstname = "John"
c.nickname = "Jc2k"
c.commit()
# find all the people called John
for c in Contact.get(firstname="John"):
print c.uri, c.fullname
def callback(subjects):
print subjects
Contact.notifications.connect("SubjectsAdded", callback)
# Will probably be just:
Contact.added.connect(callback)
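To demystify what a call like Contact.get(firstname="John") has to produce behind the scenes, here is a tiny hypothetical sketch (written in modern Python, not tralchemy’s actual code) of turning keyword filters into a SPARQL query:

```python
def contact_query(**filters):
    """Build a SPARQL query selecting nco:Contact subjects matching filters."""
    lines = ["SELECT ?uri WHERE {", "  ?uri a nco:Contact ."]
    for prop, value in sorted(filters.items()):
        # Each keyword argument becomes a triple pattern on the subject.
        lines.append('  ?uri nco:%s "%s" .' % (prop, value))
    lines.append("}")
    return "\n".join(lines)
```

The real wrapper would of course escape values and map property names through the ontology, but the shape of the generated query is the point here.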
While get() is a nice way to do simple queries, what if you wanted to do something a little more complicated? It always feels messy when you have SQL or SPARQL nested in other code. Existing SQL ORM tools are a great place to start at avoiding this, but I quite like the LINQ-style generator-to-SPARQL approach. Something like:
q = Query(Contact.firstname for Contact in Store if Contact.nickname == 'Jc2k')
or
q = Query(c.firstname for c in Store if c is Contact and c.nickname == 'Jc2k')
Hmm decisions. Hope to implement similar abstractions in JavaScript, C# (via LINQ) and Vala (via Magic). Anyone wanna share their cloning tech?
http://blogs.gnome.org/johncarr/tag/tracker/
Exponential Growth: The Power of Compounding
My first salary...
Many of us recall our first job rather fondly – primarily because of the power that the sense of financial independence gave us. We may have done different things with the first salary – maybe blown it all off celebrating. Or, for those with a practical bent and the need for sustenance, we may have taken practical steps such as renting a home or buying much-needed clothes.
The rare few (and potentially those that are guided by a mentor in some form) set apart some amount from the first salary towards investment for the future.
While it is not essential that we focus on investment right from the first salary, the earlier in our working life that we plan for and begin executing a strategy towards investments, the more secure our financial future would be.
Compound interest is the eighth wonder of the world. He who understands it, earns it … he who doesn’t … pays it. — Albert Einstein
Simple and compound interest
After quoting one of the most famous scientists of all time, I now briefly digress into high-school mathematics. For those who are not mathematically inclined, please skip this sub-section with the understanding that compound interest provides better returns than simple interest via the premise of staying invested (i.e., we do not take out the interest amount at the end of each period).
Firstly, the principle of simple interest. Considering a fixed principal amount (P), a fixed percentage of interest (R) and a fixed number of years (N), the interest that we gain on the principal is defined by the formula
I = PNR/100
As an example, with a principal of 1000, rate of interest 5%, over a period of 2 years the interest gained would be
I = 1000*5*2/100 = 100.
So at the end of two years our investment return would be 1100 (summing up the principal and interest).
Compound interest changes the above calculations by using the fact that at the end of year one, our principal can be increased to include the interest – this assumes that we are remaining fully invested. So if we calculate against the same example above, at the end of year 1 we have an interest value of
I1 = 1000*5*1/100 = 50
For the second year, we update the principal to include the interest so we are now operating off a base of 1000+50 and the next year’s interest can now been calculated
I2 = 1050*5*1/100 = 52.5
At the end of two years our investment return is now 1102.50
With very small amounts and periods we already see a benefit of 2.50 between simple and compound interest. As the number of years go up the difference in the benefit begins to change drastically.
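The worked example above is easy to check in a few lines of Python (a quick sketch using the same principal, rate, and period as the text):

```python
def simple_interest_total(p, r, n):
    # A = P + P*N*R/100 (interest is never reinvested)
    return p + p * r * n / 100.0

def compound_interest_total(p, r, n):
    # A = P * (1 + R/100)**N (interest is reinvested each year)
    return p * (1 + r / 100.0) ** n
```

With p=1000, r=5, n=2 these return 1100 and 1102.50 respectively, matching the calculations above; push n up to 30 and the gap widens dramatically.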
To those who would like to recall the direct formula for compound interest it is
A = P(1 + R/100)^N, where A is the amount of return (principal + interest)
The exponent N is what brings in the compounded return with far-reaching benefits. The best illustration of compounding returns comes from the fable of the rice and the chessboard (alternately, wheat and the chessboard).
The fable of the grains and the chessboard
In ancient India, the game of chess was invented by Sessa and after having played many a mesmerizing game, his king was extremely pleased with the invention. The king asked Sessa to name any reward that he would like for this amazing invention. Sessa’s request was quite simple. He looked to be rewarded for each square on the chessboard:
- 1 grain of rice for the first square
- 2 grains of rice for the second square
- 4 grains of rice for the third square
and so forth until all 64 squares were paid for. In essence, the amount doubled for each square. There are versions which state that it was grains of wheat, but that is immaterial to the outcome.
Legend has it that the king was rather disappointed with this request and ordered his officers to immediately comply. When the royal treasury started computing the total number of grains required, it became evident that the humongous number could not be met with the assets of the kingdom. The total number of grains required is
18,446,744,073,709,551,615
which is well over a thousand times more than the global production of rice last year.
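The chessboard total is straightforward to verify (a Python sketch; arbitrary-precision integers make this trivial):

```python
# Doubling on each of the 64 squares: 1 + 2 + 4 + ... + 2**63 = 2**64 - 1.
total_grains = sum(2 ** k for k in range(64))

# Squares 1..32 ("the first half of the chessboard") versus square 33 alone.
first_half = sum(2 ** k for k in range(32))
square_33 = 2 ** 32
```

Note that square_33 by itself already exceeds everything paid out on the first half of the board, which previews the "second half of the chessboard" idea below.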
There have been various studies into what would be the total volume of said number of grains, what would be the total weight etc. However, it is the principle that we should appreciate here. One way of applying this principle to our investments is that we assume our investment potential increases over time as our earnings increase, so we should double our investment amounts at fixed intervals.
If you are feeling all gung-ho about becoming a multi-millionaire and profess to double your investment amount every year, you may want to rein in because you may be setting yourself up for an insurmountable task, especially when you hit the second half of the chess board.
The second half of the chess board
Ray Kurzweil, American scientist and author is credited with coining the phrase “the second half of the chess board” in his 1999 book “The Age of Spiritual Machines: When Computers Exceed Human Intelligence” ISBN 0-670-88217-8.
When we analyze the doubling numbers, we find that so long as we are in the first half of the chessboard (until square 32) we are dealing with numbers that can be considered manageable. Square 33 in itself computes to a number that is larger than the total of the first half.
I have here an illustrated computation of the numbers on an Excel spreadsheet (Excel only gives exponentials for the last 12 due to the size of the numbers). It is clear that the numbers in the second half are mind-boggling. Kurzweil’s use of this term was to relate to the point where an exponentially growing factor begins to have a significant economic impact on an organization’s overall business strategy.
Coming back to the topic of doubling investment additions every year, obviously, looking at the numbers above, it will be infeasible after a certain point in time. However, if you succeed in doubling your investment additions even for a period of fifteen to twenty years, you are clearly on the path to become an immensely wealthy person over a period of time.
How do I apply this to personal finance?
We already discussed setting aside an investment amount from each salary – the first step is to do this, and to do this consistently over the majority of your salary-earning period. There is no requirement to double this amount every year; you simply increase it appropriately as your income increases. Who decides what is appropriate? It has to be you – if you feel rich enough, you can hire a financial advisor to help you with the analysis.
Is that all?
Obviously, the answer is NO.
Every individual has to pick up some financial know-how – you have to understand what asset classes are available, what is the growth potential in each asset class and the associated risks. Equity and debt are two classic asset classes, there are others like property, art, precious metals. The general rule of thumb advised is to focus on equity and debt for regular investment; plan the other esoteric investments at specific points in your life. The other generic advice is to have higher equity exposure earlier in your investment career and switching over to a higher debt profile later – primarily from a risk exposure perspective.
None of these are mandatory – each person picks up their investment philosophy over time. It is also fine that you build your investment philosophy over a period of time, just keep up the act of regular investment from early on and subsequently crystallize your philosophy.
One way to use the doubling technique in your investment is to figure out the number of years it takes for your investments to double – this number would vary as per the asset class. With this number, you can build a view of how many such doubling cycles are possible within your investing lifetime and correspondingly you may wish to alter the amount invested in that asset class.
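The “number of years it takes for your investments to double” falls straight out of the compound-interest formula. A small Python sketch (the popular “rule of 72” approximates the same number as 72 divided by the rate):

```python
import math

def years_to_double(annual_rate_percent):
    # Solve (1 + R/100)**N = 2 for N under annual compounding.
    return math.log(2) / math.log(1 + annual_rate_percent / 100.0)
```

At 8% an investment doubles in roughly 9 years; at 10%, in roughly 7.3 years. Dividing your investing lifetime by this number gives the count of doubling cycles available for that asset class.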
The classic error in judgment that people tend to make is to dismiss all of the topics around investment as too complex and postpone it indefinitely. Our current consumption culture also supports this – the universe of things that we can spend on during our earning days is constantly expanding and inviting. That coupled with the peer-pressure of keeping up with the Joneses tends to kick any thought of a monthly set-aside for investment right out of the window.
Financial independence
There are those who seek to have a fruitful employment followed by a period of retirement focused on interests and hobbies – maybe even a second career, if feasible. If you live in a country where social security is significantly mature and the national government can be counted upon to provide for your retirement, you have the best scenario possible. There may still be some tweaks that you can do to improve your social security payments, you can figure out how to address these.
For the vast majority, personal investment is a mandatory supplement to social security and there is a requirement to build up an investment corpus through the employment period. True utopia would be having an investment corpus at retirement against which you can plan your withdrawals and determine that it is adequate for the rest of your lifetime.
© 2018 Saisree Subramanian
https://hubpages.com/money/Exponential-growth-The-power-of-compounding
BasicDataStructures
When using JavaScript, the basic data structures do not behave as you expect. There are other data structures you can use, but they have syntactic nuances. You can read the MSDN page about these data structures but the basics are collected here.
Static Typing
Unity uses Javascript's data typing features to help it figure out the types of the variables you use. You are often not required to use the static typing features because the Unity engine will try to figure out what type you mean. When you use dynamic typing, sometimes the error messages are less clear than when you use static typing - so if you get an incomprehensible error message, try adding type definitions to your variables, especially loop variables, e.g. for (var someVar: GameObject in anArray).
Variable Creation/Assignment
- Dynamic
someVar = "foo";
var someVar = "foo";
- Static
var someVar: VariableType;
var someVar: VariableType = "foo";
The two dynamic examples are more-or-less identical in browsers. However, Unity always interprets someVar = "foo"; as assignment to an already existing variable, and var someVar = "foo"; as creating a variable in the current scope. This can mean that you're accidentally creating someVar if you use the var prefix when you mean to assign to an existing variable.
Inspector Tip: Creating "blank" variables in the global namespace (outside of any function) using the var someVar: VariableType; syntax is necessary for them to show up in the Inspector.
Data Structures
Hash Tables
Hash tables are an important exception to all the suggestions above. You cannot initialize an empty hash table using var aHash: Hash;. You must use var aHash = {}; to initialize a hash table.
Significant functions/methods of the hash table:
- Count
- An integer number of elements in the hash table.
- Keys
- A collection (list-like data structure) of the keys in the hash table.
- Contains(aKey)
- Is aKey a key in the hash table?
- Remove(aKey)
- Removes a key/value pair.
- Add(aKey, aValue)
- Equivalent to aHash[aKey] = aValue;
ArrayLists
You might be tempted to use new Array(); for your arrays. While this is understandable, the Array implementation has some significant limits, notably not supporting the indexOf method. The ArrayList has many of the convenience methods one might want.
Warnings:
- You cannot create a null ArrayList using var someArray: ArrayList; and subsequently access that ArrayList's members without first creating it. Workaround: in your Awake function, do var someArray = new ArrayList();
- You cannot access the ArrayList constructor - var someArray = new ArrayList(foo, bar, baz); doesn't work. Workaround: you must create the ArrayList and subsequently populate it.
Significant functions/methods of the array list:
- Count
- An integer number of elements in the array. Use this instead of length, e.g. for (i = 0; i < anArrayList.Count; i++) { ... }
- Contains(anObject)
- Is anObject an element in the array list? e.g. if (anArrayList.Contains(anObject)) { ... }
- Remove(aMember)
- Removes aMember. Does not raise an error if aMember doesn't exist in the array.
- Add(anObject)
- Equivalent to anArray[] = aMember; except that anArray.Add(anObject) works.
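Putting the rules above together, here is a short UnityScript sketch (illustrative names; this only runs inside Unity, since Hashtable and ArrayList come from .NET):

```javascript
// Hash table: the {} literal is the only way to start empty.
var scores = {};
scores["alice"] = 10;
scores.Add("bob", 12);          // equivalent to scores["bob"] = 12;
if (scores.Contains("alice")) {
    Debug.Log(scores.Count);    // number of key/value pairs
}

// ArrayList: create first, then populate (no constructor arguments).
var names = new ArrayList();
names.Add("alice");
names.Add("bob");
for (var i = 0; i < names.Count; i++) {
    Debug.Log(names[i]);
}
```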
http://wiki.unity3d.com/index.php?title=BasicDataStructures&direction=next&oldid=3563&printable=yes
Folks, now that there's finally a decent (well, somewhat decent :-) Mac CVS client that supports ssh, I'd like to move MacPython to SourceForge. There are two ways I can go about this: start a new MacPython project or merge the MacPython stuff into the main Python CVS repository.

The Mac-specific stuff for Python is all concentrated in a single subtree Mac of the main Python tree (the subtree has its own hierarchy of Python/Modules/Lib/etc directories), so putting it in the main repository should not pollute the file namespace all that much. It would also have the advantage that a single "cvs update" would update everything (whereas the current situation for Mac developers, where Python/Mac is from a different CVSROOT than Python, does not have that advantage). The downside is that everyone who does a full checkout of the tree would get an extra 1000 or so files on their disk that are pretty useless unless they have a Mac.

Oh yes, another plus for putting stuff in the main repository is MacOSX support. Some MacPython modules have been "ported" to MacOSX, and I've started on adding them to setup.py, and life would become a lot simpler for people compiling on MacOSX if they had everything available automatically.

--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
| ++++ see ++++
http://mail.python.org/pipermail/python-dev/2001-May/014611.html
Hope this helps!!!
Babu
From testing, it looks like select() only returns if the other end is closed, such as after a shutdown or when the application quits. It cannot detect situations such as a broken network link. Am I right?
About send(), this is the question I have. I can still call send() without any error being returned (I think it is buffered), and after a few minutes my select() thread will report an error. I would like to know whether this is detected by the Microsoft network stack or by TCP. I worry about this because the application has to be able to run on the Internet, while it is currently developed and tested on an intranet.
Thanks.
And what is this Microsoft Network? I think the error is not detected by the MS network stack. And what timeout value have you set in the select() statement? Set it to NULL.
First of all: recv checks if there is data available in the driver buffer and returns OK to the caller. So the network may be down (cable disconnected) and recv returns OK - no data available. send returns OK if data is placed in the local buffer. In short, in synchronous mode, if the number of bytes written <> the number of bytes sent, you know that the connection is lost. If all was written OK, you know ONLY that the data is in the buffer.

KEEP_ALIVE default timeouts are set extremely high (in essence, disabling keep-alive functionality). Again, you can change them by editing the registry.

The situation is even worse: keep-alive functionality is VERY inconsistent across different platforms. The best implementation is in Windows NT. Windows 98 fails. Linux also fails unpredictably.

The right solution is to add keep-alive signals to your application-level protocol.

There is one more way to guarantee message delivery (all documentation strongly discourages this usage).

Namely, disabling buffering at the end of every send operation:
int BuffSize = 0;
setsockopt(m_socket, SOL_SOCKET, SO_SNDBUF, (char*)(&BuffSize), sizeof(BuffSize));
You start writing with a normal buffer size and write all bytes (but one) in one call to send. Then change the option and send the last byte with no buffering. With this option send returns after the round trip is done and the data is in the receiving computer's buffer (NOTE: not read by the receiving application, just in the buffer).
The connection speed in this case will be 1/10 of your bandwidth.
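For completeness, the keep-alive behaviour discussed above is switched on per socket with setsockopt. A hedged POSIX sketch follows (the thread is about Winsock, where the SOL_SOCKET/SO_KEEPALIVE call is analogous; the probe timeouts themselves are still tuned system-wide, e.g. in the Windows registry):

```c
#include <sys/socket.h>

/* Enable TCP keep-alive probes on a socket.
   Returns 0 on success, -1 on error (errno is set). */
int enable_keepalive(int fd)
{
    int on = 1;
    return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
}
```

Even with this enabled, the default probe intervals are so long that an application-level heartbeat, as suggested above, remains the practical answer.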
I agree with what you said, but I would like to "temporarily" unlock this question first to see whether I can receive more comments.

I know that an application-level protocol can solve this. Unfortunately, I am using a standard protocol (ITU H.225) and not all of the vendors respond to the network status request.

Currently what I am doing (which I am not satisfied with yet) is:

create a separate thread, which now and then tries to open a TCP connection to the destination. If the network is broken, I will receive an error from connect(). I don't like it because it consumes another connection.

The only thing I am confused about is that when I call send(), after a while it may cause select() to return an error. I would like to know whether this is caused by the Ethernet network or by TCP/IP.

I will not try the buffer approach, because of performance reasons.
Thanks.
https://www.experts-exchange.com/questions/20167329/A-question-on-TCP-SO-KEEPALIVE-WSAECONNRESET.html
When working with PDFs, I've run across the MIME types application/pdf and application/x-pdf, among others.
Is there a difference between these two types, and if so what is it? Is one preferred over the other?
I'm working on a web app which must deliver huge amounts of PDFs and I want to do it the correct way, if there is one.
The standard MIME type is application/pdf. The assignment is defined in RFC 3778, The application/pdf Media Type, referenced from the MIME Media Types registry.
MIME types are controlled by a standards body, The Internet Assigned Numbers Authority (IANA). This is the same organization that manages the root name servers and the IP address space.
The use of x-pdf predates the standardization of the MIME type for PDF. MIME types in the x- namespace are considered experimental, just as those in the vnd. namespace are considered vendor-specific. x-pdf might be used for compatibility with old software.
http://boso.herokuapp.com/content-type
andrej hocevar <ah@siol.net> [2002-12-20 19:26:16 -0200]:
> But if I add sequences that already have a meaning (like "\C-s")
> it's still the old value that's in effect. Besides, how do I
> represent function keys?

At one time an example .inputrc file in the bash package contained this template. Perhaps it would be useful to you.

Bob

# In xterm windows, make the arrow keys do the right thing.
$if TERM=xterm
"\e[A": previous-history
"\e[B": next-history
"\e[C": forward-char
"\e[D": backward-char

# Under Xterm in Bash, we bind local Function keys to do something
# useful.
$if Bash
"\e[11~": "Function Key 1"
"\e[12~": "Function Key 2"
"\e[13~": "Function Key 3"
"\e[14~": "Function Key 4"
"\e[15~": "Function Key 5"
# I know the following escape sequence numbers are 1 greater than
# the function key. Don't ask me why, I didn't design the xterm
# terminal.
"\e[17~": "Function Key 6"
"\e[18~": "Function Key 7"
"\e[19~": "Function Key 8"
"\e[20~": "Function Key 9"
"\e[21~": "Function Key 10"
$endif
$endif
https://lists.debian.org/debian-user/2002/12/msg04257.html
Overview
The Python Panel Editor window lets you create, edit and delete PySide2 or PyQt5 interfaces that can be displayed in Python Panel panes. The editor also lets you manage the entries in the Python Panel interfaces menu as well as the entries in the Houdini pane tab menu.
Requirements
There are no requirements when building PySide2 interfaces. Houdini ships with PySide2 modules out-of-the-box.
To build interfaces with PyQt5, install the PyQt5 modules on your system. Houdini does its best to find the PyQt5 modules automatically however, if the modules cannot be found then add the installed location to the Python search path.
For example, if PyQt5 is installed at /path/to/site-packages/PyQt5, then set PYTHONPATH=/path/to/site-packages in your environment before starting Houdini. Alternatively, append /path/to/site-packages to the search path through Python code like so:
# Modify search path
import sys
sys.path.append("/path/to/site-packages")

# Now you can import PyQt5
from PyQt5 import QtWidgets
New to Houdini 17.0
Houdini 17.0 only supports Python Panels built with PySide2 or PyQt5. PySide and PyQt4 are no longer supported.
You can specify the placement of your interface in the pane tab type menu using the new PaneTabTypeMenu.xml file. See the Pane tab type menu section for details.
Creating and Editing Interfaces
Open the Python Panel Editor from the Windows menu or from the toolbar button in a Python Panel pane tab.
To create a new interface definition, choose the Interfaces tab and click the New Interface button. The interface will be loaded in the editor and added to the menu list.
To edit an existing interface, select the interface from the drop down menu on the Interfaces tab.
Edit the name, label, icon using the interface editor.
Write Python code in the script text area which builds the interface. An onCreateInterface() function must be defined which returns the root widget of your interface. The returned root widget is embedded in the Python Panel.
Save changes by pressing the Accept or Apply button.
Editing the Interface Menu
Deleting Interfaces
Open the Delete Interfaces dialog from the Python Panel Editor or from the toolbar button in a Python Panel pane tab.
In the dialog, select the interfaces to delete from the list. Multiple interfaces can be selected by holding the ⌃ Ctrl key during selection.
Press the Delete button to delete the selected entries. A confirmation dialog will display before entries are deleted.
Warning
The deletion process is irreversible since interface definitions are also deleted from disk. To remove entries from the interface menu without deleting the definitions, please refer to the Editing the Interface Menu section above.
Interfaces tab
The interfaces tab is used to create and edit Python interfaces. The drop down menu at the top can be used to select which interface to edit. New Interface creates a new interface and loads it into the editor. Delete Interface displays a dialog with a list of all interfaces that can be deleted.
Save To
The file path that the interface definition is saved to. The file must be writeable by Houdini for changes to be saved correctly. If the path is not writeable then an error message is displayed when attempting to apply changes to the interface. The file path can be typed manually or selected using the file browser on the right of the field.
Name
The internal name of the interface. This must be unique across all loaded Python interfaces. That is, at most one interface is loaded for a given internal name. The name must start with a letter and can contain letters, numbers, and/or underscores.
Label
The human-readable name of the interface. The label is used in the Python Panel pane tab and in the menu interfaces menu. Multiple interfaces may share the same Label.
Icon
Internal name, file path, or URL of the icon to use for the interface.
Click the chooser button at the right end of the field to choose a file. Note that you can choose a file contained in a digital asset (click opdef: on the left side of the file chooser).

If you don’t supply an absolute path or URL, Houdini will look for the icon using the path in the $HOUDINI_UI_ICON_PATH environment variable.

You can use an SVG file or any image format Houdini supports (such as PNG or .pic). The icon image should be square.
Houdini ships with a number of stock SVG icons. You can see bitmap representations of these icons in $HFS/houdini/help/icons/large. To specify a stock icon, use the form dirname_filename, where…

- dirname is the directory name under $HFS/houdini/help/icons/large, such as OBJ, SHELF, or MISC, and
- filename is the icon’s filename minus any extension. For example, OBJ_sticky specifies the standard icon for the Sticky object.
Show Network Navigation Bar
When checked on, the pane tab containing the Python Panel interface will show the controls for navigating around the Houdini network.
Menu Hints
When no menu definition is loaded from disk then the interface menu hints are used to instruct Houdini on whether to include the interface in the menu and where to insert the interface into the menu.
Include in Toolbar Menu
Check this option to include the interface in the toolbar menu when no toolbar menu definition has been loaded from disk.
Include in Pane Tab Menu
Check this option to include the interface in the pane tab menu when no pane tab menu definition has been loaded from disk.
Note
If a menu definition has been loaded from disk then Houdini will ignore the interface’s menu hints. In this case a warning will appear below the menu hints.
Note
Menu hints for the pane tab type menu are also ignored if the interface has already been specified in the pane tab type menu XML file, PaneTabTypeMenu.xml. See the Pane tab type menu section for details.
Script
The script definition for the Python interface. This is where PySide2 or PyQt5 code is written to build the interface. When the interface is loaded in a Python Panel, the script is executed and the root widget returned by the onCreateInterface() function is embedded into the panel.
Python Panels recognize and execute the following functions when certain events occur:
onCreateInterface() → PySide2.QWidget or PyQt5.QWidget
Executed when the Python Panel creates the Qt interface. The function must return the root Qt widget of the interface.
onDestroyInterface()
Executed when the Python Panel is about to destroy the Qt interface. This can happen when the Python Panel pane tab is closed or if the interface is reloaded.
onActivateInterface()
Executed when the interface becomes active and visible.
onDeactivateInterface()
Executed when the interface becomes inactive and hidden.
onNodePathChanged(node)
Executed when Houdini has changed the current node. This function hook is useful for when your Python Panel interface is interested in following navigation around the Houdini node network.
node is the current node and is a hou.Node object. If there is no current node then node is set to None.
Note that this function hook is also called when the Python Panel interface is first loaded.
Only the onCreateInterface() function is required by the interface. The other functions are optional.
Note
In older Houdini versions, the createInterface() function was required. This function is now deprecated and replaced by onCreateInterface().
Note
The kwargs dictionary is available in the interface script. The dictionary contains the following entries:
paneTab - The pane tab (hou.PaneTab) that contains the interface. It is recommended that the result of kwargs["paneTab"] not be stored in a persistent variable because the hou.PaneTab object may not be valid later on. For example, switching to another pane tab type and then back to the Python Panel can cause the old pane tab object to be deleted and a new one created. Instead, always call kwargs["paneTab"] when you need to access the pane tab.
Here is an example of using kwargs in the script:
from PySide2 import QtWidgets

def onCreateInterface():
    panetab = kwargs["paneTab"]
    label = QtWidgets.QLabel()
    label.setText("Running in pane tab '%s'" % panetab.name())
    return label
Menu tab
Pane tab type menu
Python Panel files
Python Panel menu and interface definitions are stored in .pypanel files on disk.

When Houdini starts up, it searches for .pypanel files in $HFS/houdini/python_panels and then in $HOUDINI_USER_PREF_DIR/python_panels by default, and loads the definitions stored in those files.
If $HOUDINI_PATH is set, then Houdini instead searches for files in the python_panels subdirectory for each path listed in $HOUDINI_PATH.
You can override the search path by setting $HOUDINI_PYTHON_PANEL_PATH.
Note
Houdini loads .pypanel files in the order of the directories specified by $HOUDINI_PYTHON_PANEL_PATH, and then in alphabetical order by filename.
If multiple Python Panel interfaces with the same internal name are found on disk, then Houdini uses the last interface definition that it loaded.
Similarly, Houdini uses the last interface menu definition that it loaded.
On Tue, 2002-08-20 15:30:22 +0200, Maciej W. Rozycki <macro@ds2.pg.gda.pl>
wrote in message <Pine.GSO.3.96.1020820152046.8700E-100000@delta.ds2.pg.gda.pl>:
> On Tue, 20 Aug 2002, Jan-Benedict Glaw
Actually, I had written all that using separate functions before, but
neither I nor Ralf liked this approach (because it adds hundreds of LOC to
.../c-r4k.c). Ralf then suggested writing it using macros, so I did.
static inline void r4k_flush_cache_all_s128d16i16(void)
static inline void r4k_flush_cache_all_s32d32i32(void)
static inline void r4k_flush_cache_all_s64d32i32(void)
..
could go away by: (there _will_ be bugs. 100% untested, and I'm a bad
preprocessor coder:-)
#define FUNC_R4K_FLUSH_CACHE_ALL(NAME, SC, DC, IC) \
static inline void \
r4k_flush_cache_all_##NAME(void) \
{ \
blast_dcache##DC(); \
blast_icache##IC(); \
blast_scache##SC(); \
}
and then writing:
FUNC_R4K_FLUSH_CACHE_ALL(s128d16i16, 128, 16, 16)
FUNC_R4K_FLUSH_CACHE_ALL(s32d32i32, 32, 32, 32)
FUNC_R4K_FLUSH_CACHE_ALL(s64d32i32, 64, 32, 32)
...
instead. The __save_and_cli()/__restore_flags() functions could be done
the same way, as could all the remaining ones. That would pollute the
namespace like it does today, but the .c file would be 80% smaller or so:
$ wc -l c-r4k.c
2422 c-r4k.c
MfG, JBG
--
Jan-Benedict Glaw . jbglaw@lug-owl.de . +49-172-7608481
-- New APT-Proxy written in shell script --
29 August 2012 23:59 [Source: ICIS news]
LONDON (ICIS)--European August acrylic acid (AA) and acrylate esters contract prices have increased by €30-50/tonne ($38-63/tonne) on higher feedstock propylene costs, sources said on Wednesday.
Despite healthy supply, subdued summer demand and market uncertainty amid global economic instability, prices rose on the back of the €120/tonne hike in the August propylene contract.
The August European propylene contract price settled at €1,055/tonne FD (free delivered) NWE (northwest Europe).
August AA prices have been assessed at €1,800-1,840/tonne FD NWE, up by €40-50/tonne from the previous month. Spot is selling at €1,490-1,550/tonne FD NWE.
Methyl acrylate (methyl-A) prices for August are assessed at €1,780-1,840/tonne FD NWE, up by €30-40/tonne from July. Spot is steady at €1,650-1,690/tonne FD, amid limited trade because of summer holidays.
Ethyl acrylate (ethyl-A) August contract prices are at €1,850-1,890/tonne FD NWE, an increase of €40/tonne, while ethyl-A spot has moved up to €1,500-1,530/tonne FD.
Butyl acrylate (butyl-A) prices have moved up to €1,800-1,840/tonne FD NWE, a €40-50/tonne increase from July, while spot is trading at €1,490-1,520/tonne FD. Prices for 2-ethylhexyl acrylate (2-EHA) are assessed at €2,090-2,150/tonne, up by €40/tonne on the low end, and 2-EHA spot is at €1,740-1,810/tonne FD NWE, an increase of €10-40/tonne, following similar price increases.
AA and acrylate esters prices are expected to firm in the next few weeks, on continued feedstock hikes. September propylene contract negotiations are ongoing with increases of up to €160-180/tonne being targeted by sellers, while buyers are aiming for hikes closer to €100/tonne.
Demand was slow during August, as is usual during the European summer vacation period. Although the market had initially expected rollovers or slight decreases, producers said there was a need to push through the propylene increase once it had been agreed.
“We cannot absorb the increases,” one producer said. “Downstream, there is a realisation that this [€120/tonne propylene hike] cannot be ignored.”
The spot market saw little activity during August, but demand has picked up as buyers are looking to secure material ahead of expected increases.
Python Interview Questions & Answers PDF 2022. Here, you will come across some of the most frequently asked questions in Python job interviews in various fields.
python interview questions and answers
Advanced-level Python interview questions for experienced candidates and professionals, such as: What is Python? What are the key features of Python? What are keywords in Python? What are functions in Python? What is Pandas? What are DataFrames? What is a Pandas Series? What is Pandas groupby? What are literals in Python, and what are the different kinds of literals? How can you concatenate two tuples? How can you initialize a 5*5 NumPy array with only zeroes?
These Python Developer interview questions will help you land in following job roles:
- Python Developer
- Research Analyst
- Software Engineer
- Data Scientist
- Data Analyst
- Machine learning engineer
The questions are classified into the following sections:
- Python Interview Questions for Freshers
- Python Interview Questions for Experienced
- Python OOPS Interview Questions
- Python Pandas Interview Questions
- Numpy Interview Questions
- Python Libraries Interview Questions
- Python Programming Examples
We will introduce you to the most frequently asked questions in Python interviews for the year 2022. Basic Level Python Interview Questions for Freshers and Beginners.
Question 1 : Are strings in Python immutable? (Yes/No)
Answer is Yes.
Question 2 : What is the difference between list and tuples in Python?
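The core difference is that lists are mutable while tuples are immutable. A minimal sketch (variable names are just for illustration):

```python
lst = [1, 2, 3]
lst[0] = 99          # lists can be modified in place
lst.append(4)

tup = (1, 2, 3)
try:
    tup[0] = 99      # tuples reject item assignment
except TypeError as exc:
    error = str(exc)

print(lst)           # [99, 2, 3, 4]
print(error)
```

Because tuples are immutable, they can also be used as dictionary keys, whereas lists cannot.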
Question 3 : What are Keywords in Python?
There are 33 keywords in Python (note that keywords are lowercase, except True, False, and None):
- and
- or
- not
- if
- elif
- else
- for
- while
- break
- as
- def
- lambda
- pass
- return
- True
- False
- try
- with
- assert
- class
- continue
- del
- except
- finally
- from
- global
- import
- in
- is
- None
- nonlocal
- raise
- yield
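You can verify the list against your own interpreter with the standard keyword module (the exact count varies by version — newer Python 3 releases report 35):

```python
import keyword

# The interpreter's authoritative list of reserved words
print(len(keyword.kwlist))
print('def' in keyword.kwlist)      # keywords are reserved identifiers
print(keyword.iskeyword('for'))     # True
print(keyword.iskeyword('foo'))     # False
```

Trying to use any of these names as an identifier raises a SyntaxError.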
Question 4 : Is there any double data type in Python?
Answer is No. (Python's built-in float type already has double precision.)
Question 5 : What are the built-in types of python?
Built-in types in Python are as follows –
- Integers
- Floating-point
- Complex numbers
- Strings
- Boolean
- Built-in functions
Question 6 : Which programming language is an implementation of the Python programming language designed to run on the Java platform?
Jython. (Jython is the successor of JPython.)
Question 7 : How do we execute Python?
Python files are first compiled to bytecode; the interpreter then executes that bytecode.
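As a quick illustration, the standard dis module can show the bytecode that a function compiles to (the exact opcode names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# Every Python function carries its compiled bytecode; dis decodes it.
ops = [instr.opname for instr in dis.Bytecode(add)]
print(ops)
```

Running this prints the opcode sequence the interpreter executes for the function body.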
Question 8 : How is Python different from Java?
The following list compares Python and Java:
Java is faster than Python
Java is platform-independent
Java has stronger database-access with JDBC
Java is verbose
Java is statically typed.
Java needs braces.
Python mandates indentation.
Python is dynamically-typed;
Python is simple and concise;
Python is interpreted
Question 9 : A canvas can have a foreground color? (Yes/No)
Answer is Yes.
Question 10 : Now, print the string s = 'Welcome To Pakainfo' five times in a row.
>>> s = 'Welcome To Pakainfo'
>>> for i in range(5):
...     print(s)
Results:
Welcome To Pakainfo
Welcome To Pakainfo
Welcome To Pakainfo
Welcome To Pakainfo
Welcome To Pakainfo
Question 11 : Is Python platform independent?
Answer is No.
Question 12 : Write code to print everything in the string except the spaces.
>>> for i in s:
...     if i == ' ':
...         continue
...     print(i, end='')
Result
WelcomeToPakainfo
Question 13 : Write code to print only upto the letter t.
>>> i = 0
>>> while s[i] != 't':
...     print(s[i], end='')
...     i += 1
Question 14 : Do you think Python has a compiler?
Answer is Yes.
Question 15 : What if you want to toggle case for a Python string?
Use the swapcase() method from the str class to do just that.
>>> 'Pakainfo'.swapcase()
'pAKAINFO'
Question 16 :How will you sort a list?
Sorts objects of list, use compare func if given.
list.sort([func])
Question 17 :How will you reverse a list?
Reverses objects of list in place.
list.reverse()
Question 18 :Explain Python List Comprehension.
The list comprehension in python is a way to declare a list in one line of code.
>>> [i for i in range(1, 11, 2)]  # [1, 3, 5, 7, 9]
>>> [i*2 for i in range(1, 11, 2)]  # [2, 6, 10, 14, 18]
Question 19 : How will you remove an object from a list?
Removes object obj from list.
list.remove(obj)
Question 20 : How do you calculate the length of a string?
>>> len('Welcome To Pakainfo')
19
Question 21 : What are membership operators?
With the operators 'in' and 'not in', you can check whether a value is a member of another sequence.
>>> 'me' in 'disappointment'  # True
>>> 'us' not in 'disappointment'  # True
Question 22 : Explain logical operators in Python.
There are three logical operators: and, or, not.
Python and logical operators
>>> False and True  # False
Python or logical operators
>>> 7 < 7 or True  # True
Python not logical operators
>>> not 2 == 2  # False
Question 23 : How will you remove a duplicate element from a list?
You can convert it to a set (note: this does not preserve the original order or the list type):
>>> lst = [1, 2, 1, 3, 4, 2]
>>> set(lst)
{1, 2, 3, 4}
Question 24 : How will you convert a list into a string?
Use the join() method for this:
>>> ranks = ['single', 'second', 'third', 'fourth', 'fifth', 'sixth', 'seven']
>>> s = ' '.join(ranks)
>>> s
'single second third fourth fifth sixth seven'
Question 25 : What is the Python interpreter prompt?
The Python interpreter prompt is the following sign:
>>>
If you have worked with the IDLE, you will see this prompt.
Question 26 : How will you check if all characters in a string are alphanumeric?
For this, use the method isalnum().
When does a new block begin in Python?
A block begins when the line is indented by 4 (four) spaces.
Question 27 : Can True = False be possible in Python?
Answer is: No. (True and False are keywords in Python 3 and cannot be assigned to.)
Question 28 : What is the difference between lists and tuples?
Question 29 : What are the applications of Python?
Python is used in various software domains; some application areas are given below.
Enterprise and business applications development
GUI based desktop applications
Games
Image processing and graphic design applications
Scientific and computational applications
Language development
Operating systems
Web and Internet Development
Question 30 : Can we preset Pythonpath?
Yes, PYTHONPATH can be preset by the Python installer, and you can also set it yourself as an environment variable.
Question 31 : What are the supported standard data types in Python?
Dictionary.
List.
Number.
Tuples.
String.
Question 32 : Write a function to give the sum of all the numbers in list?
Sample list − (200, 300, 800, 600, 0, 200)
Expected output − 2100
The program for the sum of all the numbers in the list is −
def sum_numbers(numbers):
    total = 0
    for num in numbers:
        total += num
    print('Sum of the numbers:', total)

sum_numbers((200, 300, 800, 600, 0, 200))  # Sum of the numbers: 2100
Question 33 : Python Interview Questions with Answers for freshers
I hope you get an idea about python interview questions.
I would like to have feedback on my infinityknow.com blog.
Your valuable feedback, question, or comments about this article are always welcome.
If you enjoyed and liked this post, don’t forget to share.
Introduction
SQL Server 2008 Reporting Services (SSRS 2008) features an on-demand report processing engine. This on-demand architecture has a number of key advantages over the processing engine design that existed in previous major releases. The most significant benefits are vast improvements to report engine scalability and performance (you can read a bit more about it here). Because of this fundamental change from previous versions, there are some specific design patterns that have changed. This post is a discussion of one scenario that, due to the new processing engine, requires a different design pattern in 2008 than was required in 2005 and 2000.
Special thanks to my esteemed colleagues Chris Baldwin and Chris Hays, who helped with the contents of this posting. Note: screenshots and how-to steps in this post are based on the currently available SQL Server 2008 RC0 release of Report Builder 2.0. Future releases of this product may change. The attachment at the bottom of this posting contains both, a 2005 version and a 2008 version of the final reports. The reports are based on the Northwind sample database (download link).
On-Demand Report Processing
The new processing engine in Reporting Services 2008 still retrieves datasets upfront, but only pre-computes certain invariants, such as grouping, sorting, filter expressions, aggregates, subreport parameters and queries. Everything else is evaluated "on-demand"; most notably, textbox values and style expressions.
Furthermore, the processing engine now exposes a cursor-based report structure as the so-called RenderingObjectModel. Rendering extensions, responsible for translating the processed report to the desired output format, traverse the report using a hierarchy of RenderingObjectModel cursors. This is in contrast to the processing engine in 2005 and 2000, in which the entire report was fully processed upfront. A few implications of this on-demand model are that
a) objects are evaluated hierarchically throughout the report,
b) hidden textboxes are not evaluated, and
c) the concept of Report and Group Variables has been introduced
Report and Group Variables
In Reporting Services 2008 (RDL 2008/01 namespace), one can declare variables that are global throughout the report or local to particular group scopes, and refer to them in expressions. Report and group variables can only be set/initialized once and have read-only semantics.
Typical use cases for variables include:
- Caching values:
Report/group variables can be used to make an expensive call to an external assembly once, cache the result, and then reference the variable value from other expressions in the report.
- Passing values between scopes: Group variables can accumulate values within a group scope, which makes them useful for implementing custom aggregates.
The latter use case of group variables will be discussed in more detail in the remainder of this blog posting to implement custom aggregation in a Reporting Services 2008 report.
Custom Aggregate Scenario
The scenario discussed is one where a report author implements a custom aggregate, illustrated by an implementation of a Median function.
A common pattern for implementing a custom aggregate such as Median in Reporting Services 2005 is like this.
With the custom code for GetMedian and AddValue as follows:
Dim values As System.Collections.ArrayList

Function AddValue(ByVal newValue As Decimal)
    If (values Is Nothing) Then
        values = New System.Collections.ArrayList()
    End If
    values.Add(newValue)
End Function

Function GetMedian() As Decimal
    Dim count As Integer = values.Count
    If (count > 0) Then
        values.Sort()
        GetMedian = values(count / 2)
    End If
End Function
What happens here in SSRS 2005 is that for each instance of the detail row, the value gets passed to AddValue() and then added to the values ArrayList. A textbox in the Table header, then, makes a call to GetMedian() which performs a calculation on the values in the ArrayList, and displays it.
It's important to note that this wasn't exactly supported in SSRS 2005, and it wouldn't even work properly in most cases. For example, if you were to add end-user sorting to the table, then the processing would go through a different code path that would evaluate the headers before the details. This would mean that the GetMedian() function would be called before AddValue had a chance to add any values to the ArrayList. It just so happens that, in this particular case, when there is no end-user sort, the details are processed first.
Whether or not it was officially supported, a number of people got this to work and are relying on this behavior. In order for the same pattern to work in SSRS 2008, the report needs to be slightly redesigned. Detailed, step-by-step, instructions are provided below. Note that the pattern of using group variables outlined below is not limited to custom aggregation, but can be expanded into more complex solutions. We can show you the path and the pattern, but you will have to apply it to your unique situation. YMMV (your mileage may vary).
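To make the order dependency concrete in a language-neutral way, here is a Python sketch of the same two-phase pattern — accumulate on each detail row first, then aggregate once at the end. This is an illustration of the pattern only, not code from the report; the class and method names are hypothetical:

```python
class MedianAccumulator:
    """Accumulate values first, aggregate once at the end -- the same
    two-phase pattern the report implements with AddValue()/GetMedian()."""

    def __init__(self):
        self.values = []

    def add_value(self, value):
        # Called once per "detail row"
        self.values.append(value)

    def get_median(self):
        # Called once by the "header"; only valid after accumulation
        ordered = sorted(self.values)
        count = len(ordered)
        if count == 0:
            raise ValueError("no values accumulated")
        mid = count // 2
        if count % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

acc = MedianAccumulator()
for v in [30, 10, 20, 40]:
    acc.add_value(v)       # detail rows feed the accumulator first
median = acc.get_median()  # the header reads the aggregate afterwards
```

If get_median() ran before the add_value() calls — the evaluation-order hazard described above — it would fail, which is exactly why the 2008 report has to guarantee the accumulation happens first.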
Implementing this in SSRS 2008: Step-by-Step
The report needs to be slightly revamped in 2008 in order for this to work. The custom code itself, however, doesn't have to change at all. This is going to be a step-by-step procedure by which you can port this pattern from your 2005 report to 2008. Note that the attachment at the bottom of this post contains a 2005 report and a 2008 report, both implementing this custom aggregation approach.
- Open the report in Business Intelligence Development Studio (BIDS) 2008 or Report Builder 2.0. When you open the 2005 report in a 2008-based tool, the RDL schema will be automatically upgraded to 2008. This is what you'll see in Report Builder 2.0:
- In on-demand processing, items are generally evaluated from the top down. This means that in order to add the values of your detail rows into the ArrayList from which you will calculate the Median value, you need to add a "dummy" tablix to your report with its own detail row. This row can be hidden, as it's used solely for calculation purposes. Specifically, its purpose is to make calls to the AddValue function to populate the ArrayList. So that this table can "share" values with the table that will be visually presented in the report, they both need to be part of the same table. Add a single static row above the header row in the table. Right-click in the blue Product Name cell, and select Insert Row > Above. In the newly inserted row, merge all of the cells. This is what you should see:
- Click the new cell (as shown above), and from the Insert tab on the Ribbon, select Table. Delete the top row from the newly created table, and merge the cells together:
- Select the detail group for this new inner table, and set the Hidden property to True. Since this is used only for calculations, it doesn't need to be visible in the rendered output of the report:
- Now, you need to add the call to the AddValue function within the context of the nested table. As I mentioned above, a hidden textbox's value will not be evaluated due to the new on-demand processing architecture. In order to make sure the call to AddValue is made regardless of the visibility of the group, add it as a group Variable:
- The original rows of the table need to be slightly restructured so that the original row functions as a group header.
Step 1: Right click in the Product Name cell and select Add Group > Row group > Parent group
Step 2: In the Tablix group dialog: Group by: 0 (constant value); Select Add Group header
- Select the newly created group header textbox (with the 0 in it), right click, and select delete column. You should then have this:
- Copy the contents of the blue cells into the row below it, so that it's inside the group. Then, delete the row from which the values were copied. Re-add the blue background to the other row if you want. Now you should have this:
- Now, in order to properly retrieve the calculated Median value, you need to add the call to GetMedian into a group Variable for the group that contains the header where you want the value to be displayed. Select the group from the grouping pane, and add this group Variable:
CustomAggregate_Median.zip
Note: a related posting provides an overview of when to consider using Report Variables and/or Group Variables, and of how the report processing engine of Reporting Services 2008 works fundamentally differently.
In this chapter, we will explore the first concepts related to clean code, starting with what it is and what it means. The main point of the chapter is to understand that clean code is not just a nice thing to have or a luxury in software projects. It's a necessity. Without quality code, the project will face the perils of failing due to an accumulated technical debt.
Along the same lines, but going into a bit more detail, are the concepts of formatting and documenting the code. This also might sound like a superfluous requirement or task, but again, we will discover that it plays a fundamental role in keeping the code base maintainable and workable.
We will analyze the importance of adopting a good coding guideline for the project. Realizing that keeping the code aligned with that reference is a continuous task, we will see how we can get help from automated tools that ease our work. For this reason, we quickly discuss how to configure the main tools so that they run automatically on the project as part of the build.
After reading this chapter, you will have an idea of what clean code is, why it is important, why formatting and documenting the code are crucial tasks, and how to automate this process. From this, you should acquire the mindset for quickly organizing the structure of a new project, aiming for good code quality.
After reading this chapter, you will have learned the following:
- That clean code really means something far more important than formatting in software construction
- That even so, having a standard formatting is a key component to have in a software project, for the sake of its maintainability
- How to make the code self-documenting by using the features that Python provides
- How to configure tools to help arrange the layout of the code in a consistent way so that team members can focus on the essence of the problem
There is no sole or strict definition of clean code. Moreover, there is probably no way of formally measuring clean code, so you cannot run a tool on a repository that could tell you how good, bad, or maintainable that code is. Sure, you can run tools such as checkers, linters, static analyzers, and so on. And those tools are of much help. They are necessary, but not sufficient. Clean code is not something a machine or script can recognize (so far), but rather something that we, as professionals, can decide.
For decades, we thought of programming languages as languages to communicate our ideas to the machine so that it can run our programs. We were wrong. That's not the whole truth, but only part of it. The real language behind programming languages is the one we use to communicate our ideas to other developers.
Here is where the true nature of clean code lies. It depends on other engineers to be able to read and maintain the code. Therefore, we, as professionals, are the only ones who can judge this. Think about it; as developers, we spend much more time reading code than actually writing it. Every time we want to make a change or add a new feature, we first have to read all the surroundings of the code we have to modify or extend. The language (Python), is what we use to communicate among ourselves.
So, instead of giving you a definition (or my definition) of clean code, I invite you to go through the book, read all about idiomatic Python, see the difference between good and bad code, identify traits of good code and good architecture, and then come up with your own definition. After reading this book, you will be able to judge and analyze code for yourself, and you will have a more clear understanding of clean code. You will know what it is and what it means, regardless of any definition given to you.
There are a huge number of reasons why clean code is important. Most of them revolve around the ideas of maintainability, reducing technical debt, working effectively with agile development, and managing a successful project.
The first idea I would like to explore is in regards to agile development and continuous delivery. If we want our project to be able to successfully deliver features constantly at a steady and predictable pace, then having a good and maintainable code base is a must.
Imagine you are driving a car on a road toward a destination you want to reach at a certain point in time. You have to estimate your arrival time so that you can tell the person who is waiting for you. If the car works fine, and the road is flat and perfect, then I do not see why you would miss your estimation by a large margin. Now, if the road is broken and you have to step out to move rocks out of the way, or avoid cracks, stop to check the engine every few kilometers, and so on, then it is very unlikely that you will know for sure when are you going to arrive (or if you are). I think the analogy is clear; the road is the code. If you want to move at a steady, constant, and predictable pace, the code needs to be maintainable and readable. If it is not, every time product management asks for a new feature, you will have to stop to refactor and fix the technical debt.
Technical debt refers to the concept of problems in the software as a result of a compromise, and a bad decision being made. In a way, it's possible to think about technical debt in two ways. From the present to the past. What if the problems we are currently facing are the result of previously written bad code? From the present to the future—if we decide to take the shortcut now, instead of investing time in a proper solution, what problems are we creating for ourselves in the future?
The word debt is a good choice. It's a debt because the code will be harder to change in the future than it would be to change it now. That incurred cost is the interest on the debt. Incurring technical debt means that tomorrow, the code will be harder and more expensive to change (it would even be possible to measure this) than today, and even more expensive the day after, and so on.
Every time the team cannot deliver something on time and has to stop to fix and refactor the code is paying the price of technical debt.
The worst thing about technical debt is that it represents a long-term and underlying problem. It is not something that raises a high alarm. Instead, it is a silent problem, scattered across all parts of the project, that one day, at one particular time, will wake up and become a show-stopper.
Is clean code about formatting and structuring the code, according to some standards (for example, PEP-8, or a custom standard defined by the project guidelines)? The short answer is no.
Clean code is something else that goes way beyond coding standards, formatting, linting tools, and other checks regarding the layout of the code. Clean code is about achieving quality software and building a system that is robust and maintainable, while avoiding technical debt. A piece of code or an entire software component could be 100% compliant with PEP-8 (or any other guideline), and still not satisfy these requirements.
However, not paying attention to the structure of the code has some perils. For this reason, we will first analyze the problems with a bad code structure, how to address them, and then we will see how to configure and use tools for Python projects in order to automatically check and correct problems.
To sum this up, we can say that clean code has nothing to do with things like PEP-8 or coding styles. It goes way beyond that, and it means something more meaningful to the maintainability of the code and the quality of the software. However, as we will see, formatting the code correctly is important in order to work efficiently.
A coding guideline is a bare minimum a project should have to be considered being developed under quality standards. In this section, we will explore the reasons behind this, so in the following sections, we can start looking at ways to enforce this automatically by the means of tools.
The first thing that comes to my mind when I try to find good traits in a code layout is consistency. I would expect the code to be consistently structured so that it is easier to read and follow. If the code is not correctly or consistently structured, and everyone on the team is doing things in their own way, then we will end up with code that will require extra effort and concentration to be followed correctly. It will be error-prone, misleading, and bugs or subtleties might slip through easily.
We want to avoid that. What we want is exactly the opposite of that—code that we can read and understand as quickly as possible at a single glance.
If all members of the development team agree on a standardized way of structuring the code, the resulting code would look much more familiar. As a result of that, you will quickly identify patterns (more about this in a second), and with these patterns in mind, it will be much easier to understand things and detect errors. For example, when something is amiss, you will notice that somehow, there is something odd in the patterns you are used to seeing, which will catch your attention. You will take a closer look, and you will more than likely spot the mistake!
As stated in the classical book, Code Complete, an interesting analysis of this was done in the paper titled Perception in Chess (1973), where an experiment was conducted in order to identify how different people can understand or memorize different chess positions. The experiment was conducted on players of all levels (novices, intermediates, and chess masters), and with different chess positions on the board. They found out that when the position was random, the novices did as well as the chess masters; it was just a memorization exercise that anyone could do at reasonably the same level. When the positions followed a logical sequence that might occur in a real game (again, consistency, adhering to a pattern), then the chess masters performed exceedingly better than the rest.
Now imagine this same situation applied to software. We, as the software engineers experts in Python, are like the chess masters in the previous example. When the code is structured randomly, without following any logic, or adhering to any standard, then it would be as difficult for us to spot mistakes as a novice developer. On the other hand, if we are used to reading code in a structured fashion, and we have learned to quickly get the ideas from the code by following patterns, then we are at a considerable advantage.
In particular, for Python, the sort of coding style you should follow is PEP-8. You can extend it or adopt some of its parts to the particularities of the project you are working on (for example, the length of the line, the notes about strings, and so on). However, I do suggest that regardless of whether you are using just plain PEP-8 or extending it, you should really stick to it instead of trying to come up with another different standard from scratch.
The reason for this is that this document already takes into consideration many of the particularities of the syntax of Python (that would not normally apply for other languages), and it was created by core Python developers who actually contributed to the syntax of Python. For this reason, it is hard to think that the accuracy of PEP-8 can be otherwise matched, not to mention, improved.
In particular, PEP-8 has some characteristics that carry other nice improvements when dealing with code, such as the following:
- Grepability: This is the ability to grep tokens inside the code; that is, to search in certain files (and in which part of those files) for the particular string we are looking for. One of the items introduced by this standard is something that differentiates the way of writing the assignment of values to variables, from the keyword arguments being passed to functions.
To see this better, let's use an example. Let's say we are debugging, and we need to find where the value to a parameter named
location is being passed. We can run the following
grep command, and the result will tell us the file and the line we are looking for:
$ grep -nr "location=" .
./core.py:13: location=current_location,
Now, we want to know where this variable is being assigned this value, and the following command will also give us the information we are looking for:
$ grep -nr "location =" .
./core.py:10: current_location = get_location()
PEP-8 establishes the convention that, when passing arguments by keyword to a function, we don't use spaces, but we do when we assign variables. For that reason, we can adapt our search criteria (no spaces around the
= on the first search, and one space on the second) and be more efficient in our search. That is one of the advantages of following a convention.
- Consistency: If the code looks like a uniform format, the reading of it will be much easier. This is particularly important for onboarding, if you want to welcome new developers to your project, or even hire new (and probably less experienced) programmers on your team, and they need to become familiar with the code (which might even consist of several repositories). It will make their lives much easier if the code layout, documentation, naming convention, and such is identical across all files they open, in all repositories.
- Code quality: By looking at the code in a structured fashion, you will become more proficient at understanding it at a glance (again, as in Perception in Chess), and you will spot bugs and mistakes more easily. In addition, tools that check the quality of the code will also hint at potential bugs. Static analysis of the code can help to reduce the ratio of bugs per line of code.
This section is about documenting the code in Python, from within the code. Good code is self-explanatory but is also well-documented. It is a good idea to explain what it is supposed to do (not how).
One important distinction: documenting the code is not the same as adding comments to it. Comments are bad, and they should be avoided. By documentation, we refer to explaining the data types, providing examples of them, and annotating the variables.
This is relevant in Python because, being dynamically typed, it can be easy to lose track of the values of variables or objects across functions and methods. For this reason, stating this information will make it easier for future readers of the code.
There is another reason that specifically relates to annotations. They can also help in running some automatic checks, such as type hinting, through tools such as Mypy. We will find that, in the end, adding annotations pays off.
In simple terms, we can say that docstrings are basically documentation embedded in the source code. A docstring is basically a literal string, placed somewhere in the code, with the intention of documenting that part of the logic.
Notice the emphasis on the word documentation. This subtlety is important because it's meant to represent explanation, not justification. Docstrings are not comments; they are documentation.
Having comments in the code is a bad practice for multiple reasons. First, comments represent our failure to express our ideas in the code. If we actually have to explain why or how we are doing something, then that code is probably not good enough; for starters, it fails to be self-explanatory. Second, comments can be misleading. Worse than having to spend time reading a complicated section is reading a comment about how it is supposed to work, only to find that the code actually does something different. People tend to forget to update comments when they change the code, so the comment next to the line that was just changed will be outdated, resulting in a dangerous misdirection.
Sometimes, on rare occasions, we cannot avoid having comments. Maybe there is an error on a third-party library that we have to circumvent. In those cases, placing a small but descriptive comment might be acceptable.
With docstrings, however, the story is different. Again, they do not represent comments, but the documentation of a particular component (a module, class, method, or function) in the code. Their use is not only accepted but also encouraged. It is a good practice to add docstrings whenever possible.
The reason why they are a good thing to have in the code (or maybe even required, depending on the standards of your project) is that Python is dynamically typed. This means that, for example, a function can take anything as the value for any of its parameters. Python will not enforce, nor check, anything like this. So, imagine that you find a function in the code that you know you will have to modify. You are even lucky enough that the function has a descriptive name, and that its parameters do as well. It might still not be quite clear what types you should pass to it. Even if this is the case, how are they expected to be used?
Here is where a good docstring might be of help. Documenting the expected input and output of a function is a good practice that will help the readers of that function understand how it is supposed to work.
Consider this good example from the standard library:
In [1]: dict.update??
Docstring: D.update([E, ]**F) -> None. Update D from dict/iterable E and F.
Type: method_descriptor
Here, the docstring for the
update method on dictionaries gives us useful information, and it is telling us that we can use it in different ways:
- We can pass something with a
.keys()method (for example, another dictionary), and it will update the original dictionary with the keys from the object passed per parameter:
>>> d = {}
>>> d.update({1: "one", 2: "two"})
>>> d
{1: 'one', 2: 'two'}
- We can pass an iterable of pairs of keys and values, and we will unpack them to
update:
>>> d.update([(3, "three"), (4, "four")])
>>> d
{1: 'one', 2: 'two', 3: 'three', 4: 'four'}
In any case, the dictionary will be updated with the rest of the keyword arguments passed to it.
This information is crucial for someone that has to learn and understand how a new function works, and how they can take advantage of it.
Notice that in the first example, we obtained the docstring of the function by using the double question mark on it (
dict.update??). This is a feature of the IPython interactive interpreter. When this is called, it will print the docstring of the object you are expecting. Now, imagine that in the same way, we obtained help from this function of the standard library; how much easier could you make the lives of your readers (the users of your code), if you place docstrings on the functions you write so that others can understand their workings in the same way?
The docstring is not something separated or isolated from the code. It becomes part of the code, and you can access it. When an object has a docstring defined, this becomes part of it via its
__doc__ attribute:
>>> def my_function():
...     """Run some computation"""
...     return None
...
>>> my_function.__doc__
'Run some computation'
This means that it is even possible to access it at runtime and even generate or compile documentation from the source code. In fact, there are tools for that. If you run Sphinx, it will create the basic scaffold for the documentation of your project. With the
autodoc extension (
sphinx.ext.autodoc) in particular, the tool will take the docstrings from the code and place them in the pages that document the function.
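As a minimal sketch (the module name mymodule below is a hypothetical example), enabling autodoc takes a single line in Sphinx's conf.py, and an .rst page then names the module whose docstrings should be pulled in:

```python
# conf.py -- minimal configuration enabling the autodoc extension
extensions = ["sphinx.ext.autodoc"]

# In an .rst page of the documentation, you would then write
# (shown here as a comment, since this file is Python):
#
#   .. automodule:: mymodule
#      :members:
```

With this in place, running the Sphinx build will render each docstring into the generated pages.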
Once you have the tools in place to build the documentation, make it public so that it becomes part of the project itself. For open source projects, you can use Read the Docs, which will generate the documentation automatically per branch or version (configurable). For companies or private projects, you can use the same tools or configure these services on-premise, but regardless of this decision, the important part is that the documentation should be ready and available to all members of the team.
There is, unfortunately, one downside to docstrings, and it is that, as it happens with all documentation, it requires manual and constant maintenance. As the code changes, it will have to be updated. Another problem is that for docstrings to be really useful, they have to be detailed, which requires multiple lines.
Maintaining proper documentation is a software engineering challenge that we cannot escape from. It also makes sense for it to be like this. If you think about it, the reason for documentation to be manually written is that it is intended to be read by other humans. If it were automated, it would probably not be of much use. For the documentation to be of any value, everyone on the team must agree that it is something that requires manual intervention, hence the effort required. The key is to understand that software is not just about code. The documentation that comes with it is also part of the deliverable. Therefore, when someone is making a change on a function, it is equally important to also update the corresponding part of the documentation, regardless of whether it is a wiki, a user manual, a
README file, or several docstrings.
PEP-3107 introduced the concept of annotations. The basic idea of them is to hint to the readers of the code about what to expect as values of arguments in functions. The use of the word hint is not casual; annotations enable type hinting, which we will discuss later on in this chapter, after the first introduction to annotations.
Annotations let you specify the expected type of some variables that have been defined. It is actually not only about the types, but any kind of metadata that can help you get a better idea of what that variable actually represents.
Consider the following example:
class Point:
    def __init__(self, lat, long):
        self.lat = lat
        self.long = long

def locate(latitude: float, longitude: float) -> Point:
    """Find an object in the map by its coordinates"""
Here, we use
float to indicate the expected types of
latitude and
longitude. This is merely informative for the reader of the function so that they can get an idea of these expected types. Python will not check these types nor enforce them.
We can also specify the expected type of the returned value of the function. In this case,
Point is a user-defined class, so it will mean that whatever is returned will be an instance of
Point.
However, types or built-ins are not the only kind of thing we can use as annotations. Basically, everything that is valid in the scope of the current Python interpreter could be placed there. For example, a string explaining the intention of the variable, a callable to be used as a callback or validation function, and so on.
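As a sketch of such non-type annotations (the function and its names are made up purely for illustration), a descriptive string or a callable is perfectly valid:

```python
def clean(data: "a list of raw records", on_error: callable = print) -> "cleaned records":
    """Illustrative only: the annotations carry metadata, not enforced types."""
    return [record for record in data if record is not None]

# Python stores the metadata but does not act on it
assert clean.__annotations__["data"] == "a list of raw records"
assert clean([1, None, 2]) == [1, 2]
```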
With the introduction of annotations, a new special attribute is also included, and it is
__annotations__. This will give us access to a dictionary that maps the name of the annotations (as keys in the dictionary) with their corresponding values, which are those we have defined for them. In our example, this will look like the following:
>>> locate.__annotations__
{'latitude': <class 'float'>, 'longitude': <class 'float'>, 'return': <class '__main__.Point'>}
We could use this to generate documentation, run validations, or enforce checks in our code if we think we have to.
Speaking of checking the code through annotations, this is when PEP-484 comes into play. This PEP specifies the basics of type hinting; the idea of checking the types of our functions via annotations. Just to be clear again, and quoting PEP-484 itself:
"Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention."
The idea of type hinting is to have extra tools (independent from the interpreter) to check and assess the correct use of types throughout the code and to hint to the user in case any incompatibilities are detected. The tool that runs these checks, Mypy, is explained in more detail in a later section, where we will talk about using and configuring the tools for the project. For now, you can think of it as a sort of linter that will check the semantics of the types used on the code. This sometimes helps in finding bugs early on, when the tests and checks are run. For this reason, it is a good idea to configure Mypy on the project and use it at the same level as the rest of the tools for static analysis.
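To illustrate the kind of inconsistency such a checker catches, here is a small sketch: the interpreter runs this code without complaint, but a static type checker like Mypy would flag the mismatched call (the exact wording of its error message may vary):

```python
def double(n: int) -> int:
    return n * 2

# Valid at runtime (str * int repeats the string), but a type
# checker would reject this call: "double" expects an int, not a str.
assert double(21) == 42
assert double("a") == "aa"
```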
However, type hinting means more than just a tool for checking the types on the code. Starting with Python 3.5, the new typing module was introduced, and this significantly improved how we define the types and the annotations in our Python code.
The basic idea behind this is that now the semantics extend to more meaningful concepts, making it even easier for us (humans) to understand what the code means, or what is expected at a given point. For example, you could have a function that worked with lists or tuples in one of its parameters, and you would have put one of these two types as the annotation, or even a string explaining it. But with this module, it is possible to tell Python that it expects an iterable or a sequence. You can even identify the type or the values on it; for example, that it takes a sequence of integers.
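As a brief sketch of this, annotating a parameter as Sequence[int] communicates that any sequence of integers is acceptable, something that naming one concrete type alone could not express:

```python
from typing import Sequence

def average(numbers: Sequence[int]) -> float:
    """Any sequence of integers works: a list, a tuple, even a range."""
    return sum(numbers) / len(numbers)

assert average([1, 2, 3]) == 2.0
assert average((10, 20)) == 15.0
assert average(range(5)) == 2.0
```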
There is one extra improvement regarding annotations worth mentioning at the time of writing this book: starting from Python 3.6, it is possible to annotate variables directly, not just function parameters and return types. This was introduced in PEP-526, and the idea is that you can declare the types of some variables without necessarily assigning a value to them, as shown in the following listing:
class Point:
    lat: float
    long: float

>>> Point.__annotations__
{'lat': <class 'float'>, 'long': <class 'float'>}
Whether annotations replace docstrings is a valid question, since on older versions of Python, long before annotations were introduced, the way of documenting the types of the parameters of functions or attributes was to put docstrings on them. There are even some conventions on how to structure docstrings to include the basic information for a function: the type and meaning of each parameter, the type and meaning of the result, and the possible exceptions that the function might raise.
Most of this has been addressed already in a more compact way by means of annotations, so one might wonder if it is really worth having docstrings as well. The answer is yes, and this is because they complement each other.
It is true that a part of the information previously contained on the docstring can now be moved to the annotations. But this should only leave more room for a better documentation on the docstring. In particular, for dynamic and nested data types, it is always a good idea to provide examples of the expected data so that we can get a better idea of what we are dealing with.
Consider the following example. Let's say we have a function that expects a dictionary to validate some data:
def data_from_response(response: dict) -> dict:
    if response["status"] != 200:
        raise ValueError
    return {"data": response["payload"]}
Here, we can see a function that takes a dictionary and returns another dictionary. Potentially, it can raise an exception if the value under the key
"status" is not the expected one. However, we do not have much more information about it. For example, what does a correct instance of a
response object look like? What would an instance of
result look like? To answer both of these questions, it would be a good idea to document examples of the data that is expected to be passed in by a parameter and returned by this function.
Let's see if we can explain this better with the help of a docstring:
def data_from_response(response: dict) -> dict:
    """If the response is OK, return its payload.

    - response: A dict like::

        {
            "status": 200,  # <int>
            "timestamp": "....",  # ISO format string of the current date time
            "payload": { ... }  # dict with the returned data
        }

    - Returns a dictionary like::

        {"data": { ... }}

    - Raises:
        - ValueError if the HTTP status is != 200
    """
    if response["status"] != 200:
        raise ValueError
    return {"data": response["payload"]}
Now, we have a better idea of what is expected to be received and returned by this function. The documentation serves as valuable input, not only for understanding and getting an idea of what is being passed around, but also as a valuable source for unit tests. We can derive data like this to use as input, and we know what the correct and incorrect values to use in the tests would be. Actually, the tests also work as actionable documentation for our code, but this will be explained in more detail later on.
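As a sketch of this last point, the examples in the docstring translate almost directly into unit tests (the timestamp and payload values below are made up for illustration):

```python
def data_from_response(response: dict) -> dict:
    if response["status"] != 200:
        raise ValueError
    return {"data": response["payload"]}

# A correct instance, shaped as the docstring describes
ok = {"status": 200, "timestamp": "2018-01-01T00:00:00", "payload": {"name": "test"}}
assert data_from_response(ok) == {"data": {"name": "test"}}

# An incorrect instance must raise ValueError
try:
    data_from_response({"status": 500, "payload": {}})
except ValueError:
    pass  # expected
else:
    raise AssertionError("a non-200 status should raise ValueError")
```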
The benefit is that now we know what the possible values of the keys are, as well as their types, and we have a more concrete interpretation of what the data looks like. The cost is that, as we mentioned earlier, it takes up a lot of lines, and it needs to be verbose and detailed to be effective.
In this section, we will explore how to configure some basic tools and automatically run checks on the code, with the goal of leveraging part of the repetitive verification checks.
This is an important point: remember that code is for us, people, to understand, so only we can determine what is good or bad code. We should invest time in code reviews, thinking about what good code is, and how readable and understandable it is. When looking at code written by a peer, you should ask questions such as:
- Is this code easy to understand and follow for a fellow programmer?
- Does it speak in terms of the domain of the problem?
- Would a new person joining the team be able to understand it and work with it effectively?
As we saw previously, code formatting, consistent layout, and proper indentation are required but not sufficient traits to have in a code base. Moreover, this is something that we, as engineers with a high sense of quality, would take for granted, so we would read and write code far beyond the basic concepts of its layout. Therefore, we are not willing to waste time reviewing these kinds of items, so we can invest our time more effectively by looking at actual patterns in the code in order to understand its true meaning and provide valuable results.
All of these checks should be automated. They should be part of the tests or checklist, and this, in turn, should be part of the continuous integration build. If these checks do not pass, make the build fail. This is the only way to actually ensure the continuity of the structure of the code at all times. It also serves as an objective parameter for the team to have as a reference. Instead of having some engineers or the leader of the team always having to tell the same comments about PEP-8 on code reviews, the build will automatically fail, making it something objective.
Mypy is the main tool for optional static type checking in Python. The idea is that, once you install it, it will analyze all of the files in your project, checking for inconsistencies in the use of types. This is useful since, most of the time, it will detect actual bugs early, but sometimes it can give false positives.
You can install it with
pip, and it is recommended to include it as a dependency for the project on the setup file:
$ pip install mypy
Once it is installed in the virtual environment, you just have to run mypy over the files of your project, and it will report all of its findings on the type checks. Try to adhere to its report as much as possible because, most of the time, the insights it provides help to avoid errors that might otherwise slip into production. However, the tool is not perfect, so if you think it is reporting a false positive, you can ignore that line with the following marker as a comment:
type_to_ignore = "something" # type: ignore
There are many tools for checking the structure of the code (basically, its compliance with PEP-8) in Python, such as pycodestyle (formerly known as pep8), Flake8, and many more. They are all configurable and are as easy to use as running the command they provide. Among all of them, I have found Pylint to be the most complete (and strict). It is also configurable.
Again, you just have to install it in the virtual environment with
pip:
$ pip install pylint
Then, just running the
pylint command would be enough to check the code.
It is possible to configure Pylint via a configuration file named
pylintrc.
In this file, you can decide the rules you would like to enable or disable, and parametrize others (for example, to change the maximum length of the column).
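As a sketch, the tool itself can generate a complete starting configuration, which can then be edited section by section; for example, to relax the maximum line length (88 here is just an illustrative value):

```
$ pylint --generate-rcfile > pylintrc

# then, inside pylintrc:
[FORMAT]
max-line-length=88
```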
On Unix development environments, the most common way of working is through makefiles. Makefiles are powerful tools that let us configure commands to be run in the project, mostly for compiling, running, and so on. Besides this, we can use a makefile in the root of our project, with some commands configured to run checks of the formatting and conventions on the code, automatically.
A good approach for this would be to have targets for the tests, and each particular test, and then have another one that will run altogether. For example:
typehint:
	mypy src/ tests/

test:
	pytest tests/

lint:
	pylint src/ tests/

checklist: lint typehint test

.PHONY: typehint test lint checklist
Here, the command we should run (both in our development machines and in the continuous integration environment builds) is the following:
make checklist
This will run everything in the following steps:
- It will first check the compliance with the coding guideline (PEP-8, for instance)
- Then it will check for the use of types on the code
- Finally, it will run the tests
If any of these steps fail, consider the entire process a failure.
Besides configuring these checks automatically in the build, it is also a good idea if the team adopts a convention and an automatic approach for structuring the code. Tools such as Black automatically format the code. There are many tools that will edit the code automatically, but the interesting thing about Black is that it does so in a unique form: it's opinionated and deterministic, so the code will always end up arranged in the same way.
For example, with Black, strings will always use double quotes, and the order of the parameters will always follow the same structure. This might sound rigid, but it's the only way to ensure the differences in the code are minimal. If the code always respects the same structure, changes in the code will only show up in pull requests with the actual changes that were made, and no extra cosmetic modifications. It's more restrictive than PEP-8, but it's also convenient because, by formatting the code directly through a tool, we don't have to actually worry about that, and we can focus on the crux of the problem at hand.
At the time of writing this book, the only thing that can be configured is the length of the lines. Everything else is corrected by the criteria of the tool.
The following code is PEP-8 correct, but it doesn't follow the conventions of
black:
def my_function(name):
    """
    >>> my_function('black')
    'received Black'
    """
    return 'received {0}'.format(name.title())
Now, we can run the following command to format the file:
black -l 79 *.py
Now, we can see what the tool has written:
def my_function(name):
    """
    >>> my_function('black')
    'received Black'
    """
    return "received {0}".format(name.title())
On more complex code, a lot more would have changed (trailing commas, and more), but the idea can be seen clearly. Again, it's opinionated, but it's also a good idea to have a tool that takes care of details for us. It's also something that the Golang community learned a long time ago, to the point that there is a standard tool,
gofmt, that automatically formats the code according to the conventions of the language. It's good that Python has something like this now.
These tools (Black, Pylint, Mypy, and many more) can be integrated with the editor or IDE of your choice to make things even easier. It's a good investment to configure your editor to make these kinds of modifications either when saving the file or through a shortcut.
We now have a first idea of what clean code is, and a workable interpretation of it, which will serve us as a reference point for the rest of this book.
More importantly, we understood that clean code is something much more important than the structure and layout of the code. We have to focus on how ideas are represented in the code to see if they are correct. Clean code is about readability and maintainability, keeping technical debt to a minimum, and effectively communicating our ideas in the code so that others can understand what we intended to write in the first place.
However, we discussed that adherence to coding styles or guidelines is also important, for multiple reasons. We agreed that this is a necessary but not sufficient condition, and since it is a minimal requirement every solid project should comply with, it is clearly something we are better off leaving to the tools. Therefore, automating all of these checks becomes critical, and in this regard, we have to keep in mind how to configure tools such as Mypy, Pylint, and more.
The next chapter is going to be more focused on the Python-specific code, and how to express our ideas in idiomatic Python. We will explore the idioms in Python that make for more compact and efficient code. In this analysis, we will see that, in general, Python has different ideas or different ways to accomplish things compared to other languages.
https://www.packtpub.com/product/clean-code-in-python/9781788835831
Description
Various small tools that help with C++ code editing.
Package Information
Installation
To use this package, put the following dependency into your project's dependencies section:
Readme
Simple Tooling for C++ Code
Various small tools that help with C++ code editing, built on top of Facebook's C++ linter
flint
flint is published separately as its own project. Its
Tokenizer.d is the basis of this whole project.
Fun with D ranges
This is all about D ranges and the fun of creating something with them.
Central idea
We get an array of proper C++ tokens thanks to
Tokenizer.d. Based on this we
create a nested structure of
Entity objects by looking for pairs of curly
braces in
Scanner.d.
At this point various custom ranges can be used to transform the tokens and entities.
Utilities
CxxImplement.d: Given a header/source file pair, add new functions from header to source
CxxMerge.d: Merge multiple C++ files while preserving the namespace structure
CxxSortFunctions.d: Sort functions by name while preserving the namespace structure
These utils are very specific and will not work on your files.
Custom Ranges
MergedRange.d: Merge multiple token arrays into one
SortRange.d: Sort functions with namespaces in token array
TokenRange.d: Map tokens to entities
TreeRange.d: Make a range out of a nested tree structure
UnifyRange.d: Unify functions in namespaces
License
Distributed under the Boost license.
http://code.dlang.org/packages/tooling
Hi,
I've been programming some examples out of a book and I've noticed something about getline that's made me curious. In the following simple program, you enter a make and model of a car, and then it outputs it.
But say you change the line "cin.getline(make, 19);" to "cin.getline(make, 3);". Then, if you enter something like "ford" or anything longer than that, the program no longer asks you the second question. How come?
Thanks.

Code:
#include <iostream>
using namespace std;

char model[20];
char make[20];

int main()
{
    cout << "Enter make: ";
    cin.getline(make, 19);
    cout << "Enter model: ";
    cin.getline(model, 19);
    cout << make << " " << model;
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/115370-question-about-getline.html
OBJECTIVES
- Present the GSM / GPRS card based on the SIM900 module.
- Learn to connect and start it.
- Use it to make calls and send SMS.
BILL OF MATERIALS
GSM AND GPRS
We have seen several ways to connect our Arduino to the outside world, such as Ethernet and Wi-Fi, but we may sometimes want to communicate with it when we do not have access to any of these networks, or when we do not want to depend on them. This can be very useful, for example, for a weather station placed somewhere remote.
For this type of purpose we can use a GSM / GPRS module with a SIM card, so that we can communicate with it as if it were a mobile phone. And this card based on the SIM900 module allows us to send and receive calls and SMS and connect to the Internet, transforming our Arduino into a mobile phone.
GSM (Global System for Mobile Communications) is the communication system that is most used in mobile phones and is a standard in Europe. The first functionality of this technology is the transmission of voice, but it also allows the transmission of data (SMS, Internet), albeit at a very low speed of 9 kb/s.
GPRS (General Packet Radio Service) is an extension of GSM based on packet transmission that offers a more efficient service for data communications, especially in the case of Internet access. The maximum (theoretical) speed of GPRS is 171 kb/s, although in practice it is much lower.
CONNECTIONS AND STARTUP
The module that we are going to use for the tutorial does not have pins to mount it directly on the Arduino, but there are models that do allow it. In any case, the connection is very simple.
This card is based on the SIM900 module, and we will configure and control it via UART using the AT commands.
First we will have to insert the SIM card that we are going to use. The card adapter takes normal-sized SIMs, which have mostly been replaced by MicroSIM and NanoSIM, so we will either have to find one or fit one of these into an adapter (the module we used in this session, which you can get in the store on this page, includes adapters for all card sizes). With a little patience and skill we could also place a NanoSIM or MicroSIM directly, but we risk it moving and losing contact.
Before connecting anything to the Arduino, we will place the jumpers on the card so that we use pins 7 and 8 to communicate.
To connect it to our Arduino we have two options, and for both we will need an external power supply, since powering the Arduino over USB will be enough to turn the module on, but not enough to also power the card. If we connect the external power to the GPRS shield and feed the Arduino with the USB or a separate source, we will only have to join pins 7 and 8 for the serial communication, and the GND between both cards.
- If you use an Arduino Mega, the connection would be from pins 7 and 8 of the GPRS to pins 10 and 11 of the Mega.
- In the code, we would also have to change the instruction in which we define the SoftwareSerial instance and put pins 10 and 11 (we will come back to this later).
If we connect the external power to the Arduino to part of the three previous connections we will have to join the 5V pins of the Arduino and the GPRS.
To turn on the shield we have to place the switch in the correct position. The two positions correspond to each of the types of connection that we explained above. Once we do this, two LEDs will light up.
To activate the power of the SIM card we also have two options. We can do it manually by pressing the button on one of the sides for one or two seconds; we will see that another LED lights up, and that one of the ones that was on before starts flashing once per second. This blinking indicates that it is looking for the network. When it finds it and connects, it will change the blinking frequency to once every 3 or 4 seconds. Of course, it will not connect until we enter the SIM PIN; we will see how to do that next.
We can also activate it from a program, but first we will have to make a solder joint on the pad "R13" that is right next to the red pin strip "J16". Once this is done, we will activate the card by sending a 1-second pulse to pin 9 of the card. So if you want to use this type of activation, you will have to add a wire between a digital pin of the Arduino (in our case 9) and pin 9 of the card. Remember that if the card is already on and we send the pulse again, we will turn it off.
Checking AT commands
To be able to communicate via AT commands we will have to load a program for serial communication, as we have done many times before. We will create an instance called SIM808 and select the Arduino pins we want to use to communicate (Rx and Tx). We have chosen 7 and 8, but you can use any pins compatible with the library. You can also change the communication speed, but it must be the same for the serial port and for the instance we have created. We have chosen 19200 because it is the speed the SIM900 uses, so we can reuse the programs we already have from those sessions.
#include <SoftwareSerial.h>

SoftwareSerial SIM808(7, 8); // 7 as Rx, 8 as Tx

void setup() {
  SIM808.begin(19200);
  Serial.begin(19200);
  delay(1000);
}

void loop() {
  // Relay AT commands between the serial monitor and the module
  if (Serial.available() > 0)
    SIM808.write(Serial.read());
  if (SIM808.available() > 0)
    Serial.write(SIM808.read());
}
Once we have loaded the program, we open the serial monitor and select the correct speed. The first AT command serves simply to check that the module responds and that communication therefore works. That command is simply AT: we type it and press ENTER. It should respond with an OK; if not, we should review that everything is in order: connections, power-on, and correct speed.
Having checked this, we can enter the PIN of the SIM card with the command AT+CPIN=“XXXX”, where you will have to replace XXXX with the corresponding PIN; in my case 1867, that is, AT+CPIN=“1867”. We will get a reply indicating whether the PIN is correct, and if it is, the LED that flashed once per second will now do so every 3 seconds (more or less), indicating that it has found the mobile network.
And now we are ready to make and receive calls and connect to the Internet. To check, you can call the phone number of the SIM that you have put in the module, or use the command ATDXXXXXXXXX; (replacing the Xs with the phone number and keeping the “;”) to call whomever you want. And if you connect a headset and a microphone to the module you can talk as if it were a normal phone.
Or, if you call the phone number of the SIM that you have inserted, you will hear a tone, and in the serial monitor you will see “RING” appear.
Making calls
To communicate with the module we will use AT commands, but the first thing we will do is include the SoftwareSerial.h library and configure the communication on pins 7 and 8 (10 and 11 for the Arduino Mega).
#include <SoftwareSerial.h>

SoftwareSerial SIM900(7, 8); // 10 and 11 for Arduino Mega
The module uses a communication speed of 19200 baud, so in the setup we will configure the serial port for the SIM900 and for the Arduino at that speed.
In addition, we will enter the PIN of the card to unlock it using AT commands, and we will give it a little time to connect to the network. The command we will use is AT+CPIN=“XXXX”, where we replace XXXX with the PIN of our card.
- Remember that since we are going to put the command inside a println, we will have to put a \ before each " so that it does not behave like a special character.
void setup() {
  //digitalWrite(9, HIGH);  // Software turn-on
  //delay(1000);
  //digitalWrite(9, LOW);
  delay(5000);
  SIM900.begin(19200);                // SIM900 serial port speed
  Serial.begin(19200);                // Arduino serial port speed
  Serial.println("OK");
  delay(1000);
  SIM900.println("AT+CPIN=\"XXXX\""); // SIM card PIN AT command
  delay(25000);                       // time to find a network
}
We will create a function in which we will insert the necessary instructions to call and hang up the call, also using AT commands. To call we will use the command ATDXXXXXXXXX, (replacing the X with the number we want to call) and to hang ATH.
We do not have to be stingy with the time we keep the call, because sometimes it may take time to start giving tone and otherwise we could hang up before receiving anything on our mobile.
void call() {
  Serial.println("Calling...");
  SIM900.println("ATDXXXXXXXXX;"); // Call AT command
  delay(20000);
  SIM900.println("ATH");           // Hang up
  delay(1000);
  Serial.println("Call end");
}
And we only have to call the function when we want. In this example it simply makes the call once and then waits indefinitely.
void loop() { call(); //Calling while (1); }
Sending SMS
The programming to send an SMS will be identical, but we will create another function that will be responsible for sending the AT commands to send the SMS.
First we will use the command AT+CMGF=1\r to tell the GPRS module that we are going to send a message, and then we enter the number it is addressed to with the command AT+CMGS=“XXXXXXXXX”.
Once this is done, we simply send the content of the message and end it with the character ^Z (ASCII code 26). The function would look like this:
void sending_sms() {
  Serial.println("Sending SMS...");
  SIM900.print("AT+CMGF=1\r");               // Configure text mode to send or receive SMS
  delay(1000);
  SIM900.println("AT+CMGS=\"XXXXXXXXX\"");   // Destination number
  delay(1000);
  SIM900.println("SMS sent from Prometec."); // SMS text
  delay(100);
  SIM900.println((char)26);                  // End command ^Z
  delay(100);
  SIM900.println();
  delay(5000);
  Serial.println("SMS sent");
}
Here we leave a video for you to see how it has gone to us:
In the next session we will learn to receive calls and SMS. And, if this has left you wanting more, we recommend that you take a look at this session. In it you will find a more polished program in which, among other things, we use a function to send the AT commands and make sure the module’s response is what we expect.
Summary
In this session we have learned several important things:
- We can make our Arduino behave like a mobile phone.
- How to connect and configure the GPRS card.
- How to make calls and send SMS to any phone.
Hi, I uploaded the code for the checking that the AT commands work and I got this error:
SIM808 was not declared in this scope
SerialPort<PortNumber, RxBufSize, TxBufSize>
#include <SerialPort.h>

SerialPort<0, 0, 0> NewSerial;

void setup() {
  NewSerial.begin(9600);
  NewSerial.write("Hello World\r\n");
}

void loop() {}
#include <SerialPort.h>

SerialPort<0, 32, 0> NewSerial;

void setup() {
  NewSerial.begin(9600);
  NewSerial.write("Hello World\r\n");
}

void loop() {}
struct SerialRingBuffer {
  uint8_t* buf;   /**< Pointer to start of buffer. */
  uint16_t head;  /**< Index to next empty location. */
  uint16_t tail;  /**< Index to last entry if head != tail. */
  uint16_t size;  /**< Size of the buffer. Capacity is size - 1. */
};
void flushRx() {
  if (RxBufSize) {
    rxbuf_->head = rxbuf_->tail;
  } else {
    // put correct code here
    usart_->udr;
    usart_->udr;
  }
}
void flushRx() { rxbuf_->head = rxbuf_->tail; }
void end() {
  // wait for transmission of outgoing data
  while (txbuf_->head != txbuf_->tail) {}
  usart_->ucsrb &= ~((1 << B_RXEN) | (1 << B_TXEN) | (1 << B_RXCIE) | (1 << B_UDRIE));
  // clear any received data
  flushRx();  // <<<<<<<< prevent duplicate code?
}
void end() {
  // wait for transmission of outgoing data
  flushTx();
  usart_->ucsrb &= ~((1 << B_RXEN) | (1 << B_TXEN) | (1 << B_RXCIE) | (1 << B_UDRIE));
  // clear any received data
  flushRx();
}

void flushRx() {
  if (RxBufSize) {
    rxbuf_->flush();
  } else {
    // empty USART fifo
    usart_->flush();
  }
}

void flushTx() {
  if (TxBufSize) {
    while (!txbuf_->empty()) {}
  }
}

// empty USART fifo
void SerialRegisters::flush() {
  uint8_t b;
  while (ucsra & (1 << B_RXC)) b = udr;
}
For receive parity errors it would be difficult to flag each character since I buffer 8-bits. I could set a flag indicating a parity error occurred in some character and the user could check and clear it. Is that adequate?
The avr USART is capable of character sizes of 5, 6, 7, 8, and 9 bits. Nine data bits would be difficult since I use an 8-bit data type for ring buffers.
You can't access the ring buffer when there is no buffer. It is possible that the ring buffer doesn't exist if BUFFERED_RX is zero.
// wait for TX ISR if buffer is full
while (!txbuf_->put(b)) {}
// enable interrupts
usart_->ucsrb |= M_UDRIE;
#include <SerialPort.h>

SerialPort<0, 63, 63> NewSerial;

// use macro to substitute for Serial
#define Serial NewSerial

void setup() {
  Serial.begin(9600);
  uint32_t t = micros();
  Serial.println("12345678901234567890");
  t = micros() - t;
  Serial.print("micros: ");
  Serial.println(t);
}

void loop() {}
12345678901234567890
micros: 220
I will save any receive error bits. This will include ring-buffer overrun, USART receive overrun, parity error, and framing error. I will OR these bits into a variable that can be read or cleared by new API functions. I think I will only do this for buffered receive in the ISR. I mainly did unbuffered receive for the case where you only want to do output. I need to put the error variable in the ring buffer so the ISR can access it.
retrolefty, I don't see a way to detect break in the AVR USART. It probably would give a framing error.
Hello,
I am fairly new to C and I am finding it hard to spot why a part of a program I am making is not working. The program lets you work out the area and circumference of a circle. This is the code:
#include <stdio.h>
#include <math.h>

int area();
int circ();

int main()
{
    char choice;
    printf("\n Do you want circumference(c) or area(a): ");
    scanf("%c", &choice);
    if (choice = 'c') {
        int circ();
    }
    if (choice = 'a') {
        int area();
    } else {
        int main();
    }
    return 0;
}

int area()
{
    float radius, pi = 3.141592;
    printf("What is the radius: ");
    scanf("%f", &radius);
    printf("\n The Area is %f \n", pi * (radius * radius));
    return 0;
}
The issue is that after I type a letter in for the first input, it stops running the program. The compiler that I am using (Cygwin) did not give me any errors. I have not finished yet.
Thank you
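For what it's worth, the two likely culprits are easy to miss: `if (choice = 'c')` uses assignment (which is nonzero, hence always true) where the comparison `==` was intended, and `int circ();` inside `main` merely *declares* a function rather than calling it, so nothing ever runs. A minimal corrected sketch of the dispatch logic (return types changed to `double` here purely for illustration):

```c
/* Helper functions compute and return values instead of printing. */
static const double PI = 3.141592653589793;

double area(double radius) { return PI * radius * radius; }
double circ(double radius) { return 2.0 * PI * radius; }

/* Dispatch on the user's choice; note '==' (comparison), not '=' (assignment). */
double compute(char choice, double radius) {
    if (choice == 'c')
        return circ(radius);   /* an actual call, not a declaration like "int circ();" */
    else if (choice == 'a')
        return area(radius);
    return -1.0;               /* unknown choice */
}
```

When reading the choice with `scanf`, a leading space in the format string (`scanf(" %c", &choice)`) also skips any leftover whitespace from a previous input.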
itoa()
Convert an integer into a string, using a given base
Synopsis:
#include <stdlib.h> char* itoa( int value, char* buffer, int radix );
Since:
BlackBerry 10.0.0
Arguments:
- value
- The value to convert into a string.
- buffer
- A buffer in which the function stores the string. The size of the buffer must be at least:
8 × sizeof( int ) + 1
bytes when converting values in base 2 (binary).
- radix
- The base to use when converting the number.
If the value of radix is 10, and value is negative, then a minus sign is prepended to the result.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The itoa() function converts the integer value into the equivalent string in base radix notation, storing the result in the specified buffer. The function terminates the string with a NUL character.
Returns:
A pointer to the resulting string.
Examples:
#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    char buffer[20];
    int base;

    for( base = 2; base <= 16; base += 2 ) {
        printf( "%2d %s\n", base, itoa( 12765, buffer, base ) );
    }

    return EXIT_SUCCESS;
}
produces the output:
 2 11000111011101
 4 3013131
 6 135033
 8 30735
10 12765
12 7479
14 491b
16 31dd
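Since itoa() is not part of standard C, a portable sketch with the behavior described above (lowercase digits, radix 2 to 16, minus sign only for negative base-10 values) might look like this. This is an illustration, not the QNX implementation:

```c
/* Portable sketch of itoa() for radix 2..16 (not the QNX library code). */
char *my_itoa(int value, char *buffer, int radix) {
    static const char digits[] = "0123456789abcdef";
    char tmp[8 * sizeof(int) + 2];            /* worst case: base 2 plus sign */
    int i = 0;
    int negative = (radix == 10 && value < 0);
    unsigned int u = negative ? -(unsigned int)value : (unsigned int)value;

    do {                                      /* emit digits least-significant first */
        tmp[i++] = digits[u % (unsigned int)radix];
        u /= (unsigned int)radix;
    } while (u != 0);

    char *p = buffer;
    if (negative)
        *p++ = '-';
    while (i > 0)                             /* reverse into the caller's buffer */
        *p++ = tmp[--i];
    *p = '\0';
    return buffer;
}
```

Converting the magnitude as `unsigned int` avoids overflow when negating `INT_MIN`.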
Last modified: 2014-06-24
Laserjungle,
The way you wrote it causes an explosive number of function calls for increase of n. You're computing fib(n-1)+fib(n-2) each time, but you don't need to do that, because in the process of computing each value of fib(n-1), you also need to compute fib(n-2). That causes an unnecessary doubling of the code path, for each level of recursion, meaning that it is way longer than it needs to be (I think by a factor of n**(2**n), but I haven't done a proper analysis, so complexity buffs needn't bother to shoot me down).
The whole purpose of pycxx is to provide a function wrapper. The wrapper is doing a lot, and obviously it has quite a bit of overhead. By importing this overhead into each function call of an already horribly inefficient recursive algorithm, it's not surprising you get performance numbers you don't like.
To convince yourself of this, put a counter or a print statement into your code to show you how many times you are calling fib(). Once you are sufficiently horrified, Google for alternative Fibonacci implementations to reduce the number of times you have to go between Python and C++, and you should see a dramatic improvement.
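For instance, an iterative version (one sketch among many, not from the original thread) keeps the same convention as the poster's code — fib(0) == fib(1) == 1 — while making the work linear in n instead of exponential:

```python
def fib(n):
    # a holds fib(k), b holds fib(k+1); advance the pair n times
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # -> 89
```

With this shape each call to fib(n) does only n additions, so even fib(40) is effectively instantaneous in pure Python, and the per-call wrapper overhead of pycxx stops mattering.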
Paul Keating
From: laserjungle@sina.com [mailto:laserjungle@sina.com]
Sent: 22 November 2011 04:40
To: cxx-users
Subject: the performance of pycxx
I wrote a py extension to calculate the Fibonacci sequence (i.e. 1 1 2 3 5 ...) as follows.
However, I found it is too slow, as the table shows.
So, what is the problem? Did I do something wrong? Thanks
M | speed for pure py | speed for pycxx
-----+-----------------------+--------------------
30 | 0.93 sec | 6.35 sec
40 | 116.64 sec | 789.91 sec
[py code]
import time

def fib(n):
    if n <= 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)

print fib(M)

st = time.time()
for i in range(M):
    fib(i)
et = time.time()
print 'pure py=%.2f secs' % (et-st)
[/py code]
[pycxx code]
[/pycxx code]
Agenda
See also: IRC log
Date: 16 December 2010
<scribe> Meeting: 185
<scribe> Scribe: Norm
<scribe> ScribeNick: Norm
Accepted.
Accepted.
Skipping the rest of December for seasonal festivities.
No regrets heard.
Henry: I want to behave as much like XQuery as possible.
Alex: I think it's XSLT not XQuery that we're copying.
Vojtech: In the first version of the document there were different rules. Now we have rules more like XQuery wrt curly braces and quotes.
Some discussion about the current rules.
Vojtech: I think we're currently consistent with what XQuery does.
Norm: I think the only question is, if you see "}" in xpath-mode, do you look for another "}" or do you end the expression?
Vojtech: No, I proposed that if you're not in XPath mode and you see "}" then you just treat it literally.
Norm: So "}}" remains "}}"?
Vojtech: No. If you see "}}" you output "}", if you see an unescaped "}", you just recover from the escaping error and output the "}"
Alex: I think the current rules could allow nested expressions in the future.
Vojtech: I'm just observing that you could recover, I'm not pushing for it.
Norm: I think it will be confusing to do error recovery, so I propose not.
Alex: We could do that, and be done with it.
Proposal: Add a new rule to "regular-mode" which states that an unquoted "}" is an error.
Accepted.
Norm: I'll also add a note to the
xpath-mode section to note that "}" doesn't look for "}}", it
ends "greedily"
... I'll update the spec and republish it in our space, with a plan to make it an official note in January if no one sees any other problems.
Vojtech: Does the minimum profile
require parsing of namespaces?
... Now I think it's clear in the text.
... The other issue is the one that David Lee raised about having a much simpler profile.
Henry: That amounts to subsetting XML.
Norm: I'm very conflicted. I think what David Lee wants makes logical sense, but it's not clear that we have remit to go there.
More discussion about infosets and subsets of XML and the fact that our section 2 says we start with a namespace well-formed document.
Henry observes that if we named a smaller subset, then parsers that could do it only wouldn't be able to handle any well-formed XML which is not the case today.
Proposal: Politely decline on the basis that it wouldn't be XML. It's not illogical, but we can't go there.
Accepted.
Vojtech: What about my question about connecting output ports of compound steps?
Norm: I think 5.11 overstates what we intended.
Vojtech: Our implementation allows it.
Norm: Yeah, I guess it makes sense. I don't think my implementation allows it but that's neither here nor there.
Vojtech: I think the rule in 5.11 is quite convenient.
Norm: And allowing it is an editorial change were forbidding it would be a technical changes, because 5.11 says it's currently allowed.
Proposal: Add an erratum to say that the output port of a compound step can be directly connected to any of the compound step's readable ports.
Accepted.
<scribe> ACTION: Norm to write an erratum to allow he output port of a compound step can be directly connected to any of the compound step's readable ports. [recorded in]
Alex: Do we produce a new version with the errata merged in?
Norm: We can, but we don't have to.
Henry: It's polite to do it.
Paul: That would be a second edition.
Henry: I think it's fine to wait at least a year.
Norm: So do I.
... Happy holidays and happy new year to one and all!
Current software development emphasizes the delivery of value and quality to customers as quickly as possible. However, in order to accomplish this, quality must be ensured on many levels. One of these levels comes in the form of visual testing, or how the look and feel of the application is, taking into consideration factors such as multiple browsers, devices, and operating systems. For many years, this process has been done manually, something reflected in the fact that most popular web automation testing tools focus on the end users functional behavior. It is quite difficult, though, for average tools to determine the position of UI elements and whether they diverge from expected parameters. Visual bugs are everywhere, and QA is constantly catching errors related to the way an application should look visually, even if we are not actually on the hunt for them. For many years, manual testing has been the approach to discover such differences or errors.
Today, I want to introduce automated visual testing as a way to replace manual QA efforts. A few tools have surfaced in recent years to accomplish such a feat, and I’ll be showing an example of one and how to integrate it with your existing automated test suite, add a new layer of tests, and increase your quality even more in the process.
For the following example, I will be using a third-party tool called Applitools. Applitools will provide us with an SDK to enhance our existing business-oriented validations by adding an extra layer of visual validations and will also leverage an easy-to-use dashboard for image comparison against a baseline; more on that in a minute. Let’s begin by setting up an account with Applitools by following these steps:
1. First, create an Applitools account via the following URL:
2. Once the account is created, navigate to the top-right menu bar, open it up, and click on “My API key.” There should be a unique API key generated that you can use for your AI-powered visual checks. Store this somewhere safe and secure as we will use it later. A copy of this key should have been sent to your email address upon registration, as well.
Once we have set up the account, we can dive into adding visual validations. For this specific example, I will be creating a very simple Java project with Gradle as a build tool/dependency manager and Selenium to execute a few web-related tests:
Project Structure
Gradle Dependencies
dependencies {
    testCompile group: 'org.testng', name: 'testng', version: '6.14.3'
    compile group: 'org.seleniumhq.selenium', name: 'selenium-java', version: '3.141.59'
    compile group: 'io.github.bonigarcia', name: 'webdrivermanager', version: '3.4.0'
}
Initial Test
public class LoginTests extends BaseTests {
    LoginPageObject loginPageObject;
    Properties props;

    public LoginTests() {
        loginPageObject = new LoginPageObject();
    }

    @BeforeMethod()
    public void loginSetup() throws Throwable {
        driver.manage().window().maximize();
        props = System.getProperties();
        props.load(new FileInputStream("resources/test.properties"));
        driver.get("");
    }

    // The body of the test method was partially lost during extraction; it
    // signs in and asserts on the account info shown in the dashboard.
    @Test
    public void loginTest() {
        try {
            // ... sign in and functional assertion ...
        } catch (Exception ex) {
            Assert.fail("Test execution failed with following message " + ex.getMessage());
        }
    }

    @AfterMethod
    public void loginCleanup() {
        driver.quit();
    }
}
As you can see, we have a very simple test that opens up Amazon, navigates to the login page, uses some credentials to sign in, and then finishes with a functional validation. This test works and is valid. However, our assertions could be way more powerful. Right now, we are simply verifying if the address associated with the account is displayed as part of the dashboard info. That is fine, but are we sure that the element or elements present are positioned correctly? What about our window? Are there any elements out of place? With visual testing, we can add checks to answer these questions. Let’s continue with integrating the Applitools SDK into our existing code. This only requires a couple of steps:
compile group: 'com.applitools', name: 'eyes-selenium-java3', version: '3.150.1'
Eyes eyes;

private void setupEyes() {
    eyes = new Eyes();
    eyes.setApiKey(System.getProperty("applitoolsKey"));
}

private void validateWindow() {
    eyes.open(driver, "Visual Test", "Visual Test Amazon");
    eyes.checkWindow();
    eyes.close();
}
This is where the magic happens. “Eyes” is essentially an AI-driven mechanism that allows you to tell your tests what visual elements to be on the lookout for, giving you a way to know if there are any UI or visual deviations from the expected layout. “Eyes” must have a unique API key set that will allow future validations to be compared in unique personal or company dashboards with the Applitools site. We saw this key when setting up our Applitools account. A very crucial advantage of having a unique key is that it is completely possible to add different keys per environment, so we could potentially have a key for a production site and alternatively execute the same validations against a sandbox account for lower environments. We also pass along the web driver, a unique app name and unique test name as parameters.
Now, let’s add our first visual check. Let’s call it the checkWindow function, which will check the status of the screen or window after our existing functional check and subsequently execute the test:
        // ... existing functional assertion (start of method lost in extraction) ...
        validateWindow();
    } catch (Exception ex) {
        Assert.fail("Test execution failed with following message " + ex.getMessage());
    }
}
After we execute our test for the first time, we create a baseline image. This baseline image will be used to visually compare it in future executions. Since this is the first execution, the test will pass. Now, here is where the fun begins. If we execute our same test for a second time, we are now comparing our test against our first defined image. We can see the results of this comparison by simply navigating to our dashboard:
As you can see, the test has failed in TestNG. If we look closely at the console error, it states: “Test ‘Visual Test Amazon’ of ‘Visual Test’ detected differences! See details at:…”. But why has this test failed? Amazon is a page that has a lot of dynamic content with visual changes. In this case, the image in the carousel on the upper portion of the dashboard was different when checking the window.
Alright, we have done it! We have been able to visually validate deviations from one base image to another. But is that really all we can do? The answer is no: this barely scratches the surface of it. Was this the outcome we were expecting? Was this really a failure or just a dynamic content difference? What about colors; should we ignore color changes? Thankfully, we have match levels, or ways to set a “visual threshold” in which we automatically ignore content changes or colors in order to focus, for example, on layout instead. Let’s take a quick peek at this:
private void validateWindow() {
    eyes.open(driver, "Visual Test", "Visual Test Amazon");
    eyes.setMatchLevel(MatchLevel.LAYOUT);
    eyes.checkWindow();
    eyes.close();
}
As seen above, we set our match level on a “Layout” level (“Strict” is set by default), and now the test has passed. Even though there are content differences, the layout remains the same; there is no need for the AI to report any visual errors. Further exploration is also available via the dashboard by comparing, zooming in and zooming out, and passing and failing test cases manually. The dashboard provides a great and centralized place to view potential differences.
In addition to checking the screen as a whole and comparing, we also have other types of UI-related verifications. For example, we can easily verify an individual element or a specific frame, instead of the whole screen, and integrate with other testing frameworks, not just Selenium. We are also able to use several continuous deployment tools. For more information, there is in-depth documentation available.
Without going into further detail—and in order to leave some room for further exploration—I want to wrap up by saying visual checks are a great addition to any and all existing web-automated tests. Adding AI-driven visual checks to any existing automated framework provides an extra layer of previously unexplored verifications that were not possible before. It reduces the risk of errors by adding an automated tool and replaces manual verifications, which have been the traditional method for visual validation until now.
@nativescript/webpack rewrite
The rewrite allows us to simplify things, and introduce some breaking changes. Listing them here, so we can keep track of them - will be in the merge commit, and the release notes once we are ready.
BREAKING CHANGES:
The package.json "main" field should now use a path relative to the package.json instead of to the app directory
For example (given we have a src directory where our app is):

"main": "app.js" becomes "main": "src/app.js" OR "main": "src/app.ts" (whether using JS or TS)
This simplifies things, and will allow ctrl/cmd + clicking on the filename in some editors.
postinstall scripts have been removed.
The configuration will not need to change in the user projects between updates.
For existing projects we will provide an easy upgrade path, through ns migrate and a binary in the package.

For new projects ns create should create the config file by invoking a binary in the package.
removed resolutions for short imports - use full imports instead.
For example:
import http from 'http' // becomes import { http } from '@nativescript/core'
Learning Next.js
- Part 1: Reviewing React
- Part 2: Pages
- Part 3: Links and Head
- Part 4: Dynamic Routing
- Part 5: Serverless API
Next.js is a JavaScript framework based on React for rapidly developing web sites and applications.
Links
In the previous section, the Pages functionality of Next.js was introduced. Any named file in the “/pages” folder will be interpreted as a separate file. However, what was not discussed was how to navigate between them.
To help with creating links, Next.js provides a class component called <Link>.
import Link from "next/link";

export default () => {
  return(
    <div>
      <p>This is the index page!</p>
      <Link href="/about">
        <a>About</a>
      </Link>
    </div>
  );
}
The <Link> component has one necessary attribute, href. In order to know the navigational location, this must be set. Within the <Link> component, a child element, <a>, should be used with the name of the link itself.
Head
Because Next.js handles the built-in React rendering, accessing the normal <head> element in HTML is not possible. However, it provides a component, <Head>, that can contain child elements like <title> and others that will be added during runtime.
Because Next.js provides this component, it can also be included anywhere within the existing component elements in a render() function. Its children will always be added or re-calculated during a call to render().
import Link from "next/link";
import Head from "next/head";

export default () => {
  return(
    <div>
      <Head>
        <title>Index page!</title>
        <meta name="author" content="Dan Cox" />
      </Head>
      <p>This is the index page!</p>
      <Link href="/about">
        <a>About</a>
      </Link>
    </div>
  );
}
While working on a recent application in C#, I ran into a situation I hadn’t hit before and worked out a solution I wanted to pass along for others that might come up against this same scenario. I’m not sure if it’s the “preferred” way to do it, but it sure worked for me and will definitely work for you. In C#, comboboxes for Windows Forms only allows you to set the display text for items in a combobox. However, in the situation I ran into, I needed a way to control the display value as well as the hidden values of a combobox. For example, in HTML, you can create a select box that includes options to choose from which allow you to control the value that gets displayed to the user as well as the value that is actually passed from the user interface to the server. You can do that like so:
<select name="cmbComboBox">
  <option value="1">One</option>
  <option value="2">Two</option>
  <option value="3">Three</option>
</select>
public class ComboBoxItem
{
    private int _id;
    private string _display;

    public ComboBoxItem(int id, string display)
    {
        this._id = id;
        this._display = display;
    }

    public string Text
    {
        get { return this._display; }
        set { this._display = value; }
    }

    public int ID
    {
        get { return this._id; }
        set { this._id = value; }
    }

    public override string ToString()
    {
        return this._display;
    }
}
Now, I can create objects of type ComboBoxItem and add them to the items list of my combobox like this:
ComboBoxItem one = new ComboBoxItem(1, "One");
ComboBoxItem two = new ComboBoxItem(2, "Two");
ComboBoxItem three = new ComboBoxItem(3, "Three");

cmbComboBox.Items.Add(one);
cmbComboBox.Items.Add(two);
cmbComboBox.Items.Add(three);
To retrieve the objects from my combobox, all I had to do was get the selected item and cast it to type ComboBoxItem like so:
ComboBoxItem selected = (ComboBoxItem)cmbComboBox.SelectedItem;
With that, I could use the standard getters in my combobox item to get anything inside it. If I had other fields inside my combobox class and wanted those to be displayed in the combobox list, I could concatenate those fields in the “ToString” method.
Hi,
I have one big while loop in my script (that calls other modules), and I want to throw one big try/except around that while loop. I'd like to catch the exception and write the error along with the location of the error to file on the Pyboard (probably in some sort of .txt file in flash or on sd). I am running the Pyboard off of external power and will not be able to monitor the REPL prompt, so I need it to write to file so that I can access it later. Are there any suggestions? Right now, I have something like:
def writeErrorMessage(error, fname):
    f_handle = open(fname, 'w')
    f_handle.write(str(error))
    f_handle.close()

try:
    # all of the main code
except Exception as e:
    writeErrorMessage(e, "Error_Message.txt")
Writing exception to file
General discussions and questions about development of code with MicroPython that is not hardware specific.
Target audience: MicroPython Users.
Re: Writing exception to file
You can use sys.print_exception: ... _exception
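Putting the two together, a minimal sketch of logging the full traceback (rather than just str(e)) to a file — using sys.print_exception on MicroPython, with a CPython traceback fallback so it can be tested on a desktop; the file name follows the question:

```python
import sys

def log_exception(exc, fname="Error_Message.txt"):
    # Append rather than overwrite, so errors from earlier runs survive.
    with open(fname, "a") as f:
        if hasattr(sys, "print_exception"):
            # MicroPython: writes the full traceback to the given stream.
            sys.print_exception(exc, f)
        else:
            # CPython fallback for desktop testing.
            import traceback
            traceback.print_exception(type(exc), exc, exc.__traceback__, file=f)

try:
    1 / 0  # stand-in for the main while loop
except Exception as e:
    log_exception(e)
```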
Here's an example: ... exc.py#L11
Re: Writing exception to file
Thanks dhylands, that worked!
If your income after taxes is $28,000 per year, and you expect to pay about $100 per month for utilities, about how much can you afford to spend on monthly rental payments?
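As a rough worked sketch — note that the 30%-of-take-home-pay housing guideline used here is an assumption; the textbook this question comes from may use a different ratio:

```python
annual_after_tax = 28_000
utilities_per_month = 100

monthly_income = annual_after_tax / 12      # about $2,333.33
# Assumed guideline: total housing costs capped at ~30% of take-home pay.
housing_budget = 0.30 * monthly_income      # about $700.00
affordable_rent = housing_budget - utilities_per_month

print(round(affordable_rent, 2))  # roughly $600 per month for rent
```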
David Abrahams wrote:
>> The python interpreter then figures out that it should create the so
>> defined wrapper object and wrap it around my existing foo instance.
>
> It's not really clear to me just what you mean.

What I understood well from your tutorial was how to define python type objects (wrappers around C++ classes) and how to inject them into a given module. What I was missing was how to instantiate python objects of these types. Once I've seen it, it appears quite 'natural', and it was in fact such a simple construct that I was expecting. I guess a simple example near the 'embedding' section in the tutorial would clarify things a lot.

What I'm doing right now is this:

class Foo { /*...*/ };

python::class_<Foo> foo_type("Foo");
foo_type.add_property("value", &Foo::get_value, &Foo::set_value);
python::object foo_wrapper = foo_type();
main_namespace["object"] = foo_wrapper; // global object, used later...

FILE *fp = fopen("script.py", "r");
python::handle<> result2(PyRun_File(fp, "script.py", Py_file_input,
                                    main_namespace.ptr(), main_namespace.ptr()));

python::object callable = python::extract<python::object>(main_namespace["set"]);
callable(boost::reference_wrapper<Foo>(foo));

What I was missing here (and still don't really understand) is how the definition of 'foo_type' above somewhere registers itself such that the call in the last line will find it *at runtime*. That's what I was referring to with 'the python interpreter'. Would 'python runtime' be a better term?

> Hmm. Well, yeah, a description of the conversion mechanisms would be
> good to have.

Absolutely, I guess that was the piece of the puzzle that I was missing.

Best regards,
Stefan
Something very strange appears to be happening with Time.deltaTime when used in the Update loop in C# Scripts. Time.fixedDeltaTime does not have this problem, and I have not tested it in Javascript.
Essentially, it appears that Time.deltaTime is not being calculated correctly, or that Update is not being called consistently with each frame, because when used in any form (including its simplest), such as:
using UnityEngine;
using System.Collections;

public class example : MonoBehaviour {
    void Update() {
        float translation = Time.deltaTime * 10;
        transform.Translate(0, 0, translation);
    }
}
the script results in inconsistent movement of objects. It was a fellow team member who first noted the bug when a few bullets which had been slowed down for testing started to move more quickly when unrelated character controllers received input from an x-box controller. I soon discovered that although it is most obvious in this context, it appears to be distorting ALL uses of Time.deltaTime. While I was able to remove it from most of the code and convert most movement systems to be based on physics, there are many other places it is used where timing is vital.
Does anyone know what could be causing this problem, or if it is a bug within Unity itself, and what, if anything, could be done in its place or to fix it?
My biggest concern is that the character controller Move function normally takes in a value multiplied by Time.deltaTime and I can see this becoming a major problem.
Thanks.
For fun, I'd make public float T=0; and jam T+=Time.deltaTime; into Update. Maybe do the same thing with fixedTime or just Time.time as a control. Check if it really is moving wrong.
public float T=0;
T+=Time.deltaTime;
check updated answer :)
Do you have a steady frame rate? Since Time.deltaTime measures the frame before, I could see that if your FPS is highly inconsistent you could possibly have some lag issues. This is theoretical though; I've never had any problems with Time.deltaTime.
This could explain it Peter. Thank you for clarifying how the fps is calculated.
Same here, I wonder why no one found this problem. If I manipulate the object position with Time.deltaTime, the movement is not always consistent, the movement could be faster or slower in Unity Editor, depends on what you selected in Hierarchy tab, and standalone full screen seems slower than webplayer, pretty weird.
This problem not so apparent maybe most Unity games only use the physics engine.
Answer by Eric5h5 · Apr 19, 2011 at 03:25 AM
Time.deltaTime is simply the time since the last frame, so there's not really anything that could go wrong with it. It always works, or at least it's never not worked in the 4 years I've been using Unity. The example code you posted is fine, so you have issues elsewhere. You could be changing Time.timeScale or something.
I spoke with my father (he has been coding since punch cards) and apparently it is possible to get time drift with frame checks because occasionally the call to the clock will fail when too many things attempt to check it at the same time. Without knowing exactly how Unity finds Time.deltaTime it is hard to know exactly what is happening. Regardless, the drift is very small and really can only be seen when I slow everything down, so I'm not going to worry about it. I would like to learn more about how Unity calculates the value, but I will live without it xD. Cheers!
You sir, are my hero!!! You have solved something that has been perplexing me for a few hours now, I forgot that my pause menu simply used time.timeScale, and there's a quit level button there... making me scratch my head and wonder why my timer stopped counting before it called for a timecheck, making it impossible to regain any energy xD! Thank you so much for saving me a few more hairs ^^,
Answer by AngryOldMan · Apr 18, 2011 at 04:58 PM
change Time.smoothDeltaTime to get a less jerky reaction. Time.deltaTime is the time between frames so it's always going to be inconsistent even sometimes when not moving.
EDIT
Ah, I see, I must have misunderstood, my bad. Instead of using transform.Translate, use transform.position and it will appear to be jerky rather than smoothly moving.
Yes, it should be inconsistent to correctly calculate the amount of time passed,but that isn't what my problem is.
The problem is it is not calculating this correctly, resulting in inconsistent movement per second.
The entire point of Time.deltaTime is to make movement and other calculations work independently of frame rate, but frame rate continues to affect my calculations (or something is). In other words... I "WANT" it to be jerky and it's not happening lol if that makes sense.
If anything it's as though it is over compensating for the frame rate.
Answer by Gizmoa · Apr 20, 2011 at 04:44 PM
Thanks everyone for your help with this question and the clarifications on how Unity calculates Time.deltaTime.
Please don't post comments as answers.
Answer by Joe 24 · May 18, 2011 at 05:05 AM
Hi everyone Did you get a solution to this? I'm basically calculating the position of a character around a circumference by adding to the angle. I want to multiply the angle by deltaTime so it's consistent with the frame rate, but when I do, the animation starts jerking out, basically the character will move forward in the path but it would irregularly jump back and forward a pixel or two. This is the code I'm using:
angle += increment * Time.deltaTime;
angle = angle - (Mathf.Floor(angle / 360)) * 360;
Debug.Log(angle);
float nZ = Mathf.Sin(angle * Mathf.Deg2Rad) * radious;
float nX = Mathf.Cos(Mathf.Deg2Rad * angle) * radious;
movement = new Vector3(nX - character.transform.position.x, 0, nZ - character.transform.position.z);
character.Move(movement);
Answer by flim · Apr 05, 2012 at 03:38 AM
The Time.deltaTime return different result if I didn't select anything in Hierarchy tab, or if the "Maximize on Play" is on.
Also the publish target has different result, the rotation works for web player, but not work in standalone full screen.
Very strange :-|
I found that the following code is affected by the Unity editor:
The value movementX is higher when there are something selected in Hierarchy tab, and smallest when run in "Maximize on Play" is selected.
Code:
void Update() {
mouseX = Input.mousePosition.x;
mouseY = Input.mousePosition.y;
cameraScreenXY = Camera.main.WorldToScreenPoint (transform.position);
deltaX = mouseX - cameraScreenXY.x;
deltaY = mouseY - cameraScreenXY.y;
oldX = transform.localPosition.x;
oldY = transform.localPosition.y;
transform.localPosition = new Vector3 (
transform.localPosition.x + (deltaX * Time.deltaTime * moveSpeed / screenWidth),
transform.localPosition.y + (deltaY * Time.deltaTime * moveSpeed / screenHeight),
transform.localPosition.z);
movementX = (transform.localPosition.x - oldX);
movementY = (transform.localPosition.y - oldY);
}
I found that the Time.deltaTime cause the problem, for instance, keep the the Game window resolution unchange, if I select and unselect object in Hierarchy tab I will get different maximum movementX value.
Flask-CouchDB makes it easy to use the powerful CouchDB database with Flask.
First, you need CouchDB. If you’re on Linux, your distribution’s package manager probably has a CouchDB package to install. (On Debian, Ubuntu, and Fedora, the package is simply called couchdb. On other distros, search your distribution’s repositories.) Windows and Mac have some unofficial installers available, so check CouchDB: The Definitive Guide (see the Additional Reference section) for details. On any other environment, you will probably need to build from source.
Once you have the actual CouchDB database installed, you can install Flask-CouchDB. If you have pip (recommended),
$ pip install Flask-CouchDB
On the other hand, if you can only use easy_install, use
$ easy_install Flask-CouchDB
Both of these will automatically install the couchdb-python library Flask-CouchDB needs to work if the proper version is not already installed.
To get started, create an instance of the CouchDBManager class. This is used to set up the connection every request and ensure that your database exists, design documents are synced, and the like. Then, you call its setup() method with the app to register the necessary handlers.
manager = CouchDBManager()
# ...add document types and view definitions...
manager.setup(app)
The database to connect with is determined by the configuration. The COUCHDB_SERVER application config value indicates the actual server to connect to (for example, the URL of a local CouchDB server), and COUCHDB_DATABASE indicates the database to use on the server (for example, webapp).
By default, the database will be checked to see if it exists - and views will be synchronized to their design documents - on every request. However, this can (and should) be changed - see Database Sync Behavior for more details.
Since the manager does not actually do anything until it is set up, it is safe (and useful) for it to be created in advance, separately from the application it is used with.
On every request, the database is available as g.couch. You will, of course, want to check couchdb-python’s documentation of the couchdb.client.Database class for more detailed instructions, but some of the most useful methods are:
# creating
document = dict(title="Hello", content="Hello, world!")
g.couch[some_id] = document

# retrieving
document = g.couch[some_id]       # raises error if it doesn't exist
document = g.couch.get(some_id)   # returns None if it doesn't exist

# updating
g.couch.save(document)
If you use this style of DB manipulation a lot, it might be useful to create your own LocalProxy, as some people (myself included) find the g. prefix annoying or inelegant.
couch = LocalProxy(lambda: g.couch)
You can then use couch just like you would use g.couch.
If you register views with the CouchDBManager, they will be synchronized to the database, so you can always be sure they can be called properly. They are created with the ViewDefinition class.
View functions can have two parts - a map function and a reduce function. The “map” function takes documents and emits any number of rows. Each row has a key, a value, and the document ID that generated it. The “reduce” function takes a list of keys, a list of values, and a parameter describing whether it is in “rereduce” mode. It should return a value that reduces all the values down into one single value. For maximum portability, most view functions are written in JavaScript, though more view servers - including a Python one - can be installed on the server.
The ViewDefinition class works like this:
active_users_view = ViewDefinition('users', 'active', '''\
    function (doc) {
        if (doc.active) { emit(doc.username, doc) };
    }''')
'users' is the design document this view is a part of, and 'active' is the name of the view. This particular view only has a map function. If you had a reduce function, you would pass it after the map function:
tag_counts_view = ViewDefinition('blog', 'tag_counts', '''\
    function (doc) {
        doc.tags.forEach(function (tag) {
            emit(tag, 1);
        });
    }''', '''\
    function (keys, values, rereduce) {
        return sum(values);
    }''', group=True)
The group=True is a default option. You can pass it when calling the view, but since it causes only one row to be created for each unique key value, it makes sense as the default for our view.
To get the results of a view, you can call its definition. Within a request, it will automatically use g.couch, but you can still pass in a database explicitly. Calls return a couchdb.client.ViewResults object, which will actually fetch the results once it is iterated over. You can also use item and slice notation to apply a key or a range of keys. For example:
active_users_view()         # rows for every active user
active_users_view['a':'b']  # rows for every user between a and b
tag_counts_view()           # rows for every tag
tag_counts_view['flask']    # one row for just the 'flask' tag
To make sure that you can call the views, though, you need to add them to the CouchDBManager with the add_viewdef method.
manager.add_viewdef((active_users_view, tag_counts_view))
This does not cover writing views in detail. A good reference for writing views is the Introduction to CouchDB views page on the CouchDB wiki.
With the Document class, you can map raw JSON objects to Python objects, which can make it easier to work with your data. You create a document class in a similar manner to ORMs such as Django, Elixir, and SQLObject.
class BlogPost(Document):
    doc_type = 'blogpost'

    title = TextField()
    content = TextField()
    author = TextField()
    created = DateTimeField(default=datetime.datetime.now)
    tags = ListField(TextField())
    comments_allowed = BooleanField(default=True)
You can then create and edit documents just like you would a plain old object, and then save them back to the database with the store method.
post = BlogPost(title='Hello', content='Hello, world!', author='Steve')
post.id = uuid.uuid4().hex
post.store()
To retrieve a document, use the load method. It will return None if the document with the given ID could not be found.
post = BlogPost.load(some_id)
if post is None:
    abort(404)
return render_template(post=post)
If a doc_type attribute is set on the class, all documents created with that class will have their doc_type field set to its value. You can use this to tell different document types apart in view functions (see Adding Views for examples).
One advantage of using JSON objects is that you can include complex data structures right in your document classes. For example, the tags field in the example above uses a ListField:
tags = ListField(TextField())
This lets you have a list of tags, as strings. You can also use DictField. If you provide a mapping to the dict field (probably using the Mapping.build method), it lets you have another, nested data structure, for example:
author = DictField(Mapping.build(
    name=TextField(),
    email=TextField()
))
And you can then use it just like a nested document (for example, post.author.name and post.author.email).
On the other hand, if you use it with no mapping, it’s just a plain old dict:
metadata = DictField()
You can combine the two fields, as well. For example, if you wanted to include comments on the post:
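Such a combined field might look like the following sketch (the field names here are illustrative; the same pattern appears in the ListField doctest in the API reference section):

```python
comments = ListField(DictField(Mapping.build(
    author=TextField(),
    content=TextField(),
    time=DateTimeField()
)))
```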
The ViewField class can be used to add views to your document classes. You create it just like you do a normal ViewDefinition, except you attach it to the class and you don’t have to give it a name, just a design document (it will automatically take the name of its attribute):
tagged = ViewField('blog', '''\
    function (doc) {
        if (doc.doc_type == 'blogpost') {
            doc.tags.forEach(function (tag) {
                emit(tag, doc);
            });
        };
    }''')
When you access it, either from the class or from an instance, it will return a ViewDefinition, which you can call like normal. The results will automatically be wrapped in the document class.
BlogPost.tagged()          # a row for every tag on every document
BlogPost.tagged['flask']   # a row for every document tagged 'flask'
If the value of your view is not a document (for example, in most reduce views), you can pass Row as the wrapper. A Row has attributes for the key, value, and id of a row.
tag_counts = ViewField('blog', '''\
    function (doc) {
        if (doc.doc_type == 'blogpost') {
            doc.tags.forEach(function (tag) {
                emit(tag, 1);
            });
        };
    }''', '''\
    function (keys, values, rereduce) {
        return sum(values);
    }''', wrapper=Row, group=True)
With that view, for example, you could use:
# print all tag counts
for row in tag_counts():
    print '%d posts tagged %s' % (row.value, row.key)

# print a single tag's count
row = tag_counts[some_tag].rows[0]
print '%d posts tagged %s' % (row.value, row.key)
To schedule all of the views on a document class for synchronization, use the CouchDBManager.add_document method. All the views will be added when the database syncs.
manager.add_document(BlogPost)
In any Web application with large datasets, you are going to want to paginate your results. The paginate function lets you do this.
The particular style of pagination used is known as linked-list pagination. This means that instead of a page number, the page is indicated by a reference to a particular item (the first one on the page). The main advantage of linked-list paging is efficiency: every page costs the same to fetch no matter how deep into the result set it is, because the view can seek directly to the start key. Unfortunately, there is also a drawback: users cannot jump straight to an arbitrary page number, only step to the next or previous page. In this case, however, the efficiency issue is the major deciding factor.
To paginate, you need a ViewResults instance, like the one you would get from calling or slicing a ViewDefinition or ViewField. Then, you call paginate with the view results, the number of items per page, and the start value given for that page (if there is one).
page = paginate(BlogPost.tagged[tag], 10, request.args.get('start'))
It will return a Page instance. That contains the items, as well as the start values of the next and previous pages (if there are any). As noted in the above example, the best practice is to put the start reference in the query string. You can display the page in the template with something like:
<ul>
{% for item in page.items %}
  display item...
{% endfor %}
</ul>
{% if page.prev %}<a href="{{ url_for('display', start=page.prev) }}">Previous</a>{% endif %}
{% if page.next %}<a href="{{ url_for('display', start=page.next) }}">Next</a>{% endif %}
taking advantage of the fact that url_for converts unknown parameters into query string arguments.
If you really need numbered paging using limit/skip in your application, it’s easy enough to implement. (For example, browsing through the posts in a forum thread would get tiresome if you had to click through five next links just to reach the last post.) A good implementation of numbered paging is in the Flask-SQLAlchemy extension (specifically, the BaseQuery.paginate method and Pagination object), so you can look there for some ideas as to the mechanics. The mechanics of using the limit and skip options are described on the CouchDB wiki.
If you choose to go this route, though, keep in mind that CouchDB implements skip by scanning over the skipped rows, so fetching deep pages becomes progressively more expensive.
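For numbered paging, the mechanics reduce to computing the skip and limit view options for a given page number. A small sketch (the helper name page_bounds is hypothetical, not part of Flask-CouchDB):

```python
def page_bounds(page, per_page):
    """Return (skip, limit) view options for 1-indexed numbered paging."""
    if page < 1:
        raise ValueError("page numbers start at 1")
    # Skip everything before this page, then take one page's worth of rows.
    return ((page - 1) * per_page, per_page)

skip, limit = page_bounds(3, 10)
print(skip, limit)  # page 3 with 10 rows per page skips the first 20 rows
```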
By default, the database is “synced” by a callback on every request. During the sync:
The default behavior is intended to ensure a minimum of effort to get up and running correctly. However, it is very inefficient, as a number of possibly unnecessary HTTP requests may be made during the sync. As such, you can turn automatic syncing off.
If you don’t want to disable it at the code level, it can be disabled at the configuration level. After you have run a single request, or synced manually, you can set the DISABLE_AUTO_SYNCING config option to True. It will prevent the database from syncing on every request, even if it is enabled in the code.
A more prudent method is to pass the auto_sync=False option to the CouchDBManager constructor. This will prevent per-request syncing even if it is not disabled in the config. Then, you can manually call the sync() method (with an app) and it will sync at that time. (You have to have set up the app before then, so it’s best to put this either in an app factory function or the server script - somewhere you can guarantee the app has already been configured.) For example:
app = Flask(__name__)
# ...configure the app...
manager.setup(app)
manager.sync(app)
This documentation is automatically generated from the source code. It covers the entire public API (i.e. everything that can be star-imported from flaskext.couchdb). Some of these have been imported directly from the original couchdb-python module.
This manages connecting to the database every request and synchronizing the view definitions to it.
This adds all the view definitions from a document class so they will be added to the database when it is synced.
This adds standalone view definitions (it should be a ViewDefinition instance or list thereof) to the manager. It will be added to the design document when it is synced.
This iterates through all the view definitions registered generally and the ones on specific document classes.
This connects to the database for the given app. It presupposes that the database has already been synced, and as such an error will be raised if the database does not exist.
This adds a callback to run when the database is synced. The callbacks are passed the live database (so they should use that instead of relying on the thread locals), and can do pretty much whatever they want. The design documents have already been synchronized. Callbacks are called in the order they are added, but you shouldn’t rely on that.
If you can reliably detect whether it is necessary, this may be a good place to add default data. However, the callbacks could theoretically be run on every request, so it is a bad idea to insert the default data every time.
This method sets up the request/response handlers needed to connect to the database on every request.
This syncs the database for the given app. It will first make sure the database exists, then synchronize all the views and run all the callbacks with the connected database.
It will run any callbacks registered with on_sync, and when the views are being synchronized, if a method called update_design_doc exists on the manager, it will be called before every design document is updated.
Retrieve and return the design document corresponding to this view definition from the given database.
Ensure that the view stored in the database matches the view defined by this instance.
Ensure that the views stored in the database that correspond to a given list of ViewDefinition instances match the code defined in those instances.
This function might update more than one design document. This is done using the CouchDB bulk update feature to ensure atomicity of the operation.
Representation of a row as returned by database views.
This class can be used to represent a single “type” of document. You can use this to more conveniently represent a JSON structure as a Python object in the style of an object-relational mapper.
You populate a class with instances of Field for all the attributes you want to use on the class. In addition, if you set the doc_type attribute on the class, every document will have a doc_type field automatically attached to it with that value. That way, you can tell different document types apart in views.
The document ID
Return the fields as a list of (name, value) tuples.
This method is provided to enable easy conversion to native dictionary objects, for example to allow use of mapping.Document instances with client.Database.update.
>>> class Post(Document):
...     title = TextField()
...     author = TextField()
>>> post = Post(id='foo-bar', title='Foo bar', author='Joe')
>>> sorted(post.items())
[('_id', 'foo-bar'), ('author', u'Joe'), ('title', u'Foo bar')]
This is used to retrieve a specific document from the database. If a database is not given, the thread-local database (g.couch) is used.
For compatibility with code used to the parameter ordering used in the original CouchDB library, the parameters can be given in reverse order.
Execute a CouchDB temporary view.
The document revision.
This saves the document to the database. If a database is not given, the thread-local database (g.couch) is used.
Execute a CouchDB named (permanent) view.
Basic unit for mapping a piece of data between Python and JSON.
Instances of this class can be added to subclasses of Document to describe the mapping of a document.
This implements linked-list pagination. You pass in the view to use, the number of items per page, and the JSON-encoded start value for the page, and it will return a Page instance.
Since this is “linked-list” style pagination, it only allows direct navigation using next and previous links. However, it is also very fast and efficient.
You should probably use the start values as a query parameter (e.g. ?start=whatever).
This represents a single page of items. They are created by the paginate function.
A list of the actual items returned from the view.
The start value for the next page, if there is one. If not, this is None. It is JSON-encoded, but not URL-encoded.
The start value for the previous page, if there is one. If not, this is None.
Mapping field for string values.
Mapping field for integer values.
Mapping field for float values.
Mapping field for long integer values.
Mapping field for decimal values.
Mapping field for boolean values.
Mapping field for storing date/time values.
>>> field = DateTimeField()
>>> field._to_python('2007-04-01T15:30:00Z')
datetime.datetime(2007, 4, 1, 15, 30)
>>> field._to_json(datetime(2007, 4, 1, 15, 30, 0, 9876))
'2007-04-01T15:30:00Z'
>>> field._to_json(date(2007, 4, 1))
'2007-04-01T00:00:00Z'
Mapping field for storing dates.
>>> field = DateField()
>>> field._to_python('2007-04-01')
datetime.date(2007, 4, 1)
>>> field._to_json(date(2007, 4, 1))
'2007-04-01'
>>> field._to_json(datetime(2007, 4, 1, 15, 30))
'2007-04-01'
Mapping field for storing times.
>>> field = TimeField()
>>> field._to_python('15:30:00')
datetime.time(15, 30)
>>> field._to_json(time(15, 30))
'15:30:00'
>>> field._to_json(datetime(2007, 4, 1, 15, 30))
'15:30:00'
Field type for sequences of other fields.
>>> from couchdb import Server
>>> server = Server()
>>> db = server.create('python-tests')
>>> class Post(Document):
...     title = TextField()
...     content = TextField()
...     pubdate = DateTimeField(default=datetime.now)
...     comments = ListField(DictField(Mapping.build(
...         author = TextField(),
...         content = TextField(),
...         time = DateTimeField()
...     )))
>>> post = Post(title='Foo bar')
>>> post.comments.append(author='myself', content='Bla bla',
...                      time=datetime.now())
>>> post.store(db) #doctest: +ELLIPSIS
<Post ...>
>>> post = Post.load(db, post.id)
>>> comment = post.comments[0]
>>> comment['content']
'Bla bla'
>>> comment['time'] #doctest: +ELLIPSIS
'...T...Z'
>>> del server['python-tests']
Field type for nested dictionaries.
>>> from couchdb import Server
>>> server = Server()
>>> db = server.create('python-tests')
>>> class Post(Document):
...     title = TextField()
...     content = TextField()
...     author = DictField(Mapping.build(
...         name = TextField(),
...         email = TextField()
...     ))
...     extra = DictField()
>>> post = Post(
...     title='Foo bar',
...     author=dict(name='John Doe',
...                 email='john@doe.com'),
...     extra=dict(foo='bar')
... )
>>> post.store(db) #doctest: +ELLIPSIS
<Post ...>
>>> post = Post.load(db, post.id)
>>> post.author.email
u'john@doe.com'
>>> post.extra
{'foo': 'bar'}
>>> del server['python-tests']
Backwards Compatibility: Nothing introduced in this release breaks backwards compatibility in itself. However, if you add a doc_type attribute to your class and use it in your views, it won’t update your existing data to match. You will have to add the doc_type field to all the documents already in your database, either by hand or using a script, so they will still show up in your view results.
Using Memory-Mapped Files in .NET 4.0
Introduction
Assume you have the need to manipulate multi-gigabyte files and read and write data to them. One option would be to access the file using a sequential stream, which is fine if you need to access the file from the beginning to the end. However, things get more problematic when you need random access. Seeking the stream is naturally a solution, but unfortunately a slow one.
If you have background in Windows API development, then you might be aware of an old technique called memory-mapped files (sometimes abbreviated MMF). The idea of memory-mapped files or file mapping is to load a file into memory so that it appears as a continuous block in your application's address space. Then, reading and writing to the file is simply a matter of accessing the correct memory location. In fact, when the operating system loader fetches your application's EXE or DLL files to execute their code, file mapping is used behind the scenes.
Memory-mapped files and large files are often associated together in the minds of developers, but there's no practical limit to how large or small the files accessed through memory mapping can be. Although using memory mapping for large files makes programming easier, you might observe even better performance when using smaller files, as they can fit entirely in the file system cache.
The information and the code listings in this article are based on the .NET 4.0 Beta 1 release, available since May 2009. As is the case with pre-release software, technical details, class names and available methods might change once the final RTM version of .NET becomes available. This is worth keeping in mind while studying or developing against any beta library.
The New Namespace and its Classes
For .NET 4.0 developers, the interesting classes that work with memory-mapped files live in the new System.IO.MemoryMappedFiles namespace. Presently, this namespace contains four classes and several enumerations to help you access and secure your file mappings. The actual implementation is inside the assembly System.Core.dll.
The most important class for the developer is the MemoryMappedFile class. This class allows you to create a memory-mapped object, from which you can in turn create a view accessor object. You can then use this accessor to manipulate directly the memory block mapped from the file. Manipulation can be done using the convenient Read and Write methods.
Note that since direct pointers are not considered a sound programming practice in the managed world, such an access object is needed to keep things tidy. In traditional Windows API development in native code, you would simply get a pointer to the beginning of your memory block.
That said, to acquire a memory-mapped file and the necessary accessor object, you need to follow three simple steps. First, you need a file stream object that points to an (existing) file on disk. Second, you create the mapping object from this file, and as a final step, you create the accessor object. Here is a code example in C#:
FileStream file = new FileStream(@"C:\Temp\MyFile.dat", FileMode.Open);
MemoryMappedFile mmf = MemoryMappedFile.CreateFromFile(file);
MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor();
The code first opens a file with the System.IO.FileStream class, and then passes the stream object instance to the static CreateFromFile method of the MemoryMappedFile class. The third step is to call the CreateViewAccessor method of the MemoryMappedFile class.
In the above code, the CreateViewAccessor method is called without any parameters. In this case, the mapping begins at the start of the file (offset zero) and ends at the last byte of the file. You can, however, easily map in any portion of the file. For instance, if your file were one gigabyte in size, you could map a view at the one-megabyte mark with a view size of 10,000 bytes. This could be done with the following call:
MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor(1024 * 1024, 10000);
Later on, you will see more advanced uses for these mapped views. But first, you need to learn about reading from the view.
Source: https://www.developer.com/net/article.php/3828586/Using-Memory-Mapped-Files-in-NET-40.htm
(){ ... }

2) A block of code can also be defined as synchronized on a specified object, to capture a lock on it:

function updateTotal(){
    ...
    synchronized(this){
        ...
    }
}

Thread States or Life Cycle

A thread follows a life cycle, which starts from its creation until it is destroyed.

New: this is the first state, entered when the thread object is created. The thread stays in this state until its run method is called.

Runnable: once the run method is called, the thread moves to the runnable state. While it is runnable, it is eligible to receive processor time to execute its part; the processor keeps distributing time among all the active threads. A thread can also move to a waiting state for two reasons: timed waiting and object waiting. Threads not in the runnable state do not get processor time.

Timed waiting: a thread is forced to wait for some period of time, for example by using the Thread.sleep method. It automatically comes back to the runnable state once the specified time is over.

Object waiting: a thread calls wait on an object's monitor and stays blocked until another thread calls notify on the same object.
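The timed-waiting and object-waiting states described above can be sketched as follows. This is a minimal illustration in Python rather than Java (the Condition object plays the role of a Java object's monitor, and all names here are invented for the example):

```python
import threading
import time

cond = threading.Condition()
ready = False
results = []

def waiter():
    # Timed waiting: give up processor time for a fixed period (the
    # analogue of Thread.sleep); the thread returns to runnable by itself.
    time.sleep(0.05)
    # Object waiting: block on a monitor until another thread calls
    # notify (the analogue of wait()/notify() on a Java object).
    with cond:
        while not ready:
            cond.wait()
        results.append("woken up")

t = threading.Thread(target=waiter)
t.start()

with cond:
    ready = True
    cond.notify()   # move the waiter back toward the runnable state

t.join()
print(results)      # ['woken up']
```

The `while not ready` loop guards against the notify arriving before the waiter reaches `wait()`, the same reason Java code checks a condition flag around `wait()`.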
Live Example:

package multithreading;

import java.util.Random;

public class MultiThread_wait {
    PoolC pool;

    MultiThread_wait() {
        pool = new PoolC();
        pool.addItem(1);
    }

    class PoolC {
        boolean isFull;

        PoolC() {
            isFull = false;
        }

        synchronized void addItem(int temp) {
            System.out.println("Came to check for adding..");
            if (!isFull) {
                System.out.println("adding");
                isFull = true;
                notify();
            } else {
                try {
                    wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        synchronized void printItem() {
            System.out.println("Came to check for printing..");
            if (isFull) {
                System.out.println("printing");
                isFull = false;
                notify();
            } else {
                try {
                    wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    class supplier implements Runnable {
        public void run() {
            while (true) {
                Random rd = new Random();
                int temp = rd.nextInt();
                pool.addItem(temp);
                try {
                    Thread.sleep(4000);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }
    }

    class consumer implements Runnable {
        public void run() {
            System.out.println("Running thread two ..");
            while (true) {
                pool.printItem();
                try {
                    Thread.sleep(100);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        MultiThread_wait mts = new MultiThread_wait();
        supplier th1 = mts.new supplier();
        Thread th11 = new Thread(th1);
        th11.start();
        consumer th2 = mts.new consumer();
        Thread th21 = new Thread(th2);
        th21.start();
    }
}
Multithreading in Java...
Source: http://www.javaworld.com/article/2075487/java-concurrency/multithreading-in-java.html
Find Maximum Perimeter of a Triangle
Reading time: 20 minutes | Coding time: 5 minutes
Given an array of integers, find which three elements form a non-degenerate triangle such that:
- the triangle has maximum perimeter
- if there are two or more combinations with the same maximum perimeter, output the one with the longest side; output -1 if no such triangle is possible
First of all, what’s a non degenerate triangle?
If a, b, and c are the sides of the triangle, and if the following 3 conditions are true, then it is a non-degenerate triangle.
a + b > c
a + c > b
b + c > a
Phew! So many conditions! But honestly, this is really easy to solve.
From all the possible triplets, check for the given conditions and keep track of the maximum ones. At the end, output the one with the longest side. Yes. Correct.
The time complexity of this approach will be O(n^3). We can surely do better.
Here’s the thing: forming and checking all the triplets makes you do a lot of unnecessary work. How can you be sure that, once you’ve checked the triplet for a particular number, you don’t need to evaluate any other triplets containing that number? Try to think about this. Let me give you a hint:
Would sorting help?
Take the following examples along with you:
Read ahead only after you’ve had a thought on this for at least 10 minutes.
Understand the following things:
Suppose that I have a sorted array in decreasing order:
8, 5, 4, 3, 2, 1
consider the triplet 8, 5, 4 as a, b, c
Here a>=b>=c
Therefore a+c would definitely be more than b and a+b would be definitely more than c.
What remains is we need to check if b+c > a. Two cases arise.
- b+c is more than a:
If this is the case, then for sure we have a triplet that satisfies the given conditions. But you tell me, should we search more triplets in order to get the maximum perimeter? Isn’t this already the maximum possible perimeter? Any other triplet will be formed by the numbers after these and would thus obviously form a smaller sum. Thus we start from the maximum number in the sorted array and consider its triplet; if it satisfies the conditions, it’s the maximum perimeter. In our case 5+4 > 8, thus the solution is [8, 5, 4].
- b+c is not more than a:
Consider 8, 5, 3, 3, 2, 1. Here 5+3 is not more than 8. But do you think we should really check for other combinations along with 8? No, right? Because all of the other combinations would be
8, 5, 3
8, 5, 2
8, 5, 1
8, 3, 3
8, 3, 2
8, 3, 1
And some others...
You should notice that, as we have sorted the array in decreasing order, the sum of the other two numbers would be less than or equal to 5+3. Thus even their sum would not be greater than a, so the condition won’t be satisfied for them either, and there is no need to check them. Just move to the next element and check its corresponding triplet. In our case the next would be [5, 3, 3],
which satisfies the condition (3 + 3 > 5).
Algorithm
- Sort the sides array in increasing order.
(You could also sort in decreasing order and accordingly manipulate the for loop and conditions)
- Starting from last element at index i
a. if sides[i-2]+sides[i-1]>sides[i], then
output these three sides as result triplet and
break from the loop.
- If no triplet is found, output -1.
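The steps above can also be sketched compactly. This is an illustrative Python version of the same idea (the article's own Java implementation appears in the next section; the function name is invented here):

```python
def max_perimeter_triangle(sides):
    """Return the max-perimeter non-degenerate triplet, or None for -1."""
    s = sorted(sides)                      # ascending order
    # Walk from the largest element down; the first triplet that satisfies
    # the triangle inequality is guaranteed to have the maximum perimeter.
    for i in range(len(s) - 1, 1, -1):
        if s[i - 2] + s[i - 1] > s[i]:
            return (s[i - 2], s[i - 1], s[i])
    return None

print(max_perimeter_triangle([1, 2, 3, 4, 5, 8]))   # (4, 5, 8)
print(max_perimeter_triangle([8, 5, 3, 3, 2, 1]))   # (3, 3, 5)
print(max_perimeter_triangle([1, 1, 10]))           # None
```

Note that only the adjacent pair below each candidate needs testing, which is exactly the observation that reduces the O(n^3) brute force to a single pass after sorting.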
Simple right? Try to code this. You just have to check for the conditions.
Implementation
Following is the code in Java to this problem:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Arrays;

public class PerimeterTriangle {
    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        br.readLine();
        br.readLine();
        // no. of elements in array
        int n = Integer.parseInt(br.readLine());
        int[] sides = new int[n];
        String[] input = br.readLine().split(" ");
        for (int i = 0; i < n; i++) {
            sides[i] = Integer.parseInt(input[i]);
        }
        Arrays.sort(sides);
        boolean flag = false;
        // starting from the end, because we have sorted in ascending order
        // and we want the max element first; you could also sort in
        // descending order and start from i = 0
        for (int i = n - 1; i >= 2; i--) {
            if (sides[i - 2] + sides[i - 1] > sides[i]) {
                System.out.println(sides[i - 2] + " " + sides[i - 1] + " " + sides[i]);
                flag = true;
                break;
            }
        }
        if (flag == false) {
            System.out.println("-1");
        }
    }
}
Complexity
Time complexity
O(n log n)
where n is the number of elements in the array
Space complexity
O(1)
We are not using any data structure to store anything.
Source: https://iq.opengenus.org/maximum-perimeter-of-triangle/
UWP and Azure Service Bus
Today's Hardware Friday post comes to us via MSDN Magazine and is an "internet of things" kind of project, meshing Azure with a smart thermostat (making it Smart++?).
Here’s a bold prediction: Connected devices are going to be big business, and understanding these devices will be really important for developers not too far down the road. “Obviously,” you say. But I don’t mean the devices on which you might read this article. I mean the ones that will keep you cool this summer, that help you wash your clothes and dishes, that brew your morning coffee or put together other devices on a factory floor.
In the June issue of MSDN Magazine (msdn.microsoft.com/magazine/jj133825), I explained a set of considerations and outlined an architecture for how to manage event and command flows from and to embedded (and mobile) devices using Windows Azure Service Bus. In this article, I’ll take things a step further and look at code that creates and secures those event and command flows. And because a real understanding of embedded devices does require looking at one, I’ll build one and then wire it up to Windows Azure Service Bus so it can send events related to its current state and be remotely controlled by messages via the Windows Azure cloud.
Until just a few years ago, building a small device with a power supply, a microcontroller, and a set of sensors required quite a bit of skill in electronics hardware design as well as in putting it all together, not to mention good command of the soldering iron. I’ll happily admit that I’ve personally been fairly challenged in the hardware department—so much so that a friend of mine once declared if the world were attacked by alien robots he’d send me to the frontline and my mere presence would cause the assault to collapse in a grand firework of electrical shorts. But due to the rise of prototyping platforms such as Arduino/Netduino or .NET Gadgeteer, even folks who might do harm to man and machine swinging a soldering iron can now put together a fully functional small device, leveraging existing programming skills.
To stick with the scenario established in the last issue, I’ll build an “air conditioner” in the form of a thermostat-controlled fan, where the fan is the least interesting part from a wiring perspective. The components for the project are based on the .NET Gadgeteer model, involving a mainboard with a microcontroller, memory and a variety of pluggable modules. The mainboard for the project is a GHI Electronics FEZ Spider board with the following extension modules:
- From GHI Electronics
- Ethernet J11D Module to provide wired networking (a Wi-Fi module exists)
- USB Client DP Module as power supply and USB port for deployment
- Joystick for direct control of the device
- From Seeed Studio
- Temperature and humidity sensor
- Relays to switch the fan on or off
- OLED display to show the current status
Together, these parts cost around $230. That’s obviously more than soldering equivalent components onto a board, but did I mention that would require soldering? Also, this is a market that’s just starting to get going, so expect prices to go down as the base broadens.
...
Thermostat Functionality
Implementing local thermostat functionality for this sample is pretty straightforward. I’ll check temperature and humidity on a schedule using the sensor and switch the fan connected via one of the relay ports off or on when the temperature drops below or rises above a certain threshold. The current status is displayed on the OLED screen and the joystick allows adjusting the target temperature manually.
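The control loop described above reduces to a comparison against a target temperature. Here is a language-neutral sketch in Python (the actual sample is C# on the .NET Micro Framework; the class name, thresholds, and hysteresis band are invented for illustration):

```python
class Thermostat:
    """Minimal thermostat sketch: switch a fan relay on a temperature threshold."""

    def __init__(self, target_celsius, hysteresis=0.5):
        self.target = target_celsius
        self.hysteresis = hysteresis   # avoid rapid relay toggling near the target
        self.fan_on = False

    def update(self, measured_celsius):
        # Switch on above target + hysteresis, off below target - hysteresis;
        # inside the band, keep the current relay state.
        if measured_celsius > self.target + self.hysteresis:
            self.fan_on = True
        elif measured_celsius < self.target - self.hysteresis:
            self.fan_on = False
        return self.fan_on

t = Thermostat(target_celsius=22.0)
print(t.update(23.0))   # True  -- too warm, fan switches on
print(t.update(22.2))   # True  -- inside the hysteresis band, stays on
print(t.update(21.0))   # False -- cooled down, fan off
```

The hysteresis band is the one design choice worth noting: without it, sensor noise around the threshold would toggle the relay on and off rapidly.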
...
Provisioning
In the beginning, the device is in “factory new” state—the device code has been deployed but the device hasn’t yet been initialized and therefore doesn’t have any current settings. You can see this state reflected in the GetSettings method when the settings object is still null and is therefore initialized with default settings.
Because I want to let the device communicate with and through an Internet infrastructure—Windows Azure Service Bus—I need to equip the device with a set of credentials to talk to that infrastructure and also tell it which resources to talk to. That first step of setting up a factory new device with the required network configuration and setting up the matching resources on the server side is known as provisioning; I discussed the basic architectural model for it in the previous article.
...
Sending Events and Receiving Commands
With provisioning completed, the device is now ready to send events to the Windows Azure Service Bus events Topic and to receive commands from its subscription to the devices Topic. But before I go there, I have to discuss a sensitive issue: security.
As I mentioned earlier, SSL/TLS is an expensive protocol suite for small devices. That is to say, some devices won’t ever be able to support SSL/TLS, or they might support it only in a limited fashion because of compute capacity or memory constraints. As a matter of fact, though at the time of this writing the GHI Electronics FEZ Spider mainboard based on the .NET Micro Framework 4.1 I’m using here can nominally speak SSL/TLS and therefore HTTPS, its SSL/TLS firmware apparently can’t deal with the certificate chain presented to it by Windows Azure Service Bus or the Access Control service. As the firmware for these devices gets updated to the new 4.2 version of the .NET Micro Framework, these limitations will go away for this particular device, but the issue that some devices are simply too constrained to deal with SSL/TLS remains true in principle, and there’s active discussion in the embedded device community on appropriate protocol choices that aren’t quite as heavyweight.
...
Wrapping Up
The goal of this series on the Internet of Things is to provide some insight into the kinds of technologies we’re working on here at Microsoft to enable connected-device prototyping and development. We also want to show how cloud technologies such as Windows Azure Service Bus and analytics technologies such as StreamInsight can help you manage the data flow from and to connected devices; how you can create large-scale cloud architectures for handling a great many devices; and how to aggregate and digest information from them.
On the way, I built an embedded device that you can place on any home network and control remotely from any other network, which is pretty cool if you ask me.
I believe we’re in the very early stages of this journey. In talking to Microsoft customers from all over, I’ve seen that a huge wave of connected and custom-built devices is on the way, and this is a great opportunity for .NET developers and innovative companies looking to build cloud offerings to connect to these devices and to combine them with other connected assets in inventive ways. Let’s see what you can do.
As you would expect, the source for the project is available here. Here's a snap of the Solution.
What's the story behind all these Projects? RTFR (Reading The Fine Readme).
Code Artifacts:
The solution contains several directories and sub-projects:
/AzureBackend – This is the Windows Azure deployment project for the cloud backend service.
/BackendRoleTest – This is a very simple one-shot command line test to talk to the provisioning endpoint, configured to talk to the local dev-fabric.
/BackendWebRole – This is the backend web role hosting the Provisioning.svc endpoint as well as the Passthrough.svc custom-gateway endpoint, which implements a simple hash-based model as a replacement for SSL, to be used as an alternative to talking directly to Service Bus.
In the web.config file you will find the following <appSettings>
* sharedSignature – replace the value with a base64-encoded 256-bit (32 byte) random binary key or with the same value as ‘managementKey’ below. This is the shared signing key used for the non-SSL scenario.
* serviceBusNamespace – replace the value with the name of a Service Bus namespace you provisioned. Just use the prefix (i.e. myownnamespace) and not the .servicebus.windows.net suffix. As you create the namespace in the Windows Azure Portal, you should create two Topics, named ‘devices’ and ‘events’, in that namespace.
* managementKey – replace the value with the ‘owner’ key of the Service Bus namespace, which you can also obtain from the portal.
/Microsoft.ServiceBus.AccessControlExtensions – This is a set of utility classes for managing Service Bus Access Control taken from the Windows Azure Service Bus SDK ‘Authorization’ scenario sample
/Microsoft.ServiceBus.Micro – This is a .NET Micro Framework client for Service Bus implementing Send and Receive operations and token acquisition from ACS over HTTPS and the alternative non-SSL model that’s using the Passthrough.svc gateway included in this sample. It also includes a SHA-256 implementation by Elze Kool () and an NTP client written by Michael Schwarz from which is included under its own license as expressed in the file.
/ServiceBusApp – is the actual Air Conditioner code for .NET Gadgeteer, requiring the setup explained in the article. The ‘serverAddress’ in Program.cs must be set to the deployment location of the backend web role. This application will only function when run on the actual hardware.
/ServiceBusMicroTest – is a small test app to run in the .NET Micro Framework emulator to call the provisioning logic and send/receive messages.
Also in the readme are additional URLs if you don't have the .NET Gadgeteer SDK installed or need an Azure account.
If you've been wondering what it would take to create a "smart, internet of things" kind of device, are interested in seeing how a .NET Gadgeteer can talk to Azure, or are interested in building interconnected devices and services using the power of the Service Bus, this article and source is just a quick click away...
Source: https://channel9.msdn.com/coding4fun/blog/How-cool-is-this-bus-Using-the-Service-Bus-to-create-a-cloud-enabled-smart-thermostat
This article demonstrates the form of DNS query messages and how to submit a request to a DNS server and interpret the result. I have wrapped this functionality up into a small easy-to-use C# assembly which you can easily deploy in your own applications to make ANAME, MX, NS and SOA queries. The assembly is entirely safe managed code, and it works just as well on Linux with the Mono CLR as it does on Windows. This article and the code supplied with it has been built from the information in RFC1035 - Domain names - implementation and specification.
Until recently, spam was something which annoyed other people; I never got any. Somehow though, my email address has ended up on a mailing list and spam started showing up in my inbox. At first this was just a minor nuisance, but I expect my email address was sold on with thousands of others to other spammers and now I receive about 100 junk emails a day - a major annoyance. Why are they so convinced I need Viagra? I'm not that old...
Outlook to its credit has proven effective at detecting and destroying this junk, but I still didn't like the fact that it had to be picked up only to be discarded. The slightly protracted Send and Receive that normally indicates the arrival of a new attention-worthy email became more and more disappointing, and I wondered what someone would do if they were still stuck with a dial-up connection. It would be intolerable.
My solution was to create my own SMTP/POP3 server combination which could detect junk and destroy it, long before it could trouble Outlook, and this would present a good chance for me to get my .NET socket programming up to speed having used it very little in the past. I could use one of my Linux servers in Red Bus in London which I use for online game-hosting to host the mail server (via the glorious gift of Mono), but to do this would mean having to use completely managed C# without any P/Invoke calls. I got to work.
SMTP and POP3 are pretty simple text based standards to implement, but I hit a problem. In order for my SMTP server to relay my messages to other servers, an MX lookup would be required. An MX lookup is the act of retrieving the hostname(s) of one or more mail servers which handle a domain's email. A quick look at System.Net.Dns showed that the framework didn't support this, and as already mentioned I couldn't go down the Interop path. The only other alternatives were expensive commercial components which were out of the question because I certainly wasn't going to end up out of pocket on account of some spammers. My project had just got bigger.
On the surface of it, DNS seems pretty straightforward, simply converting names to numbers, or retrieving other information about a domain. It's actually a huge and complicated subject and books on it tend to be quite wide. Thankfully, for the purpose of what we're doing we don't need to understand very much at all - just how to create a query, send it to a server, and interpret the response. The most common query that a DNS server deals with is the ANAME query, which maps domain names to IP addresses (codeproject.com to 209.171.52.99, for example). System.Net.Dns.GetHostByName performs ANAME lookups. Probably the next most common type of query is the MX query.
Unlike many of the internet protocols which are text based, DNS is a binary protocol. DNS servers are some of the busiest computers on the internet, and the overhead of string-parsing would make such a protocol prohibitive. To keep things fast and lean, UDP is the transport of choice being lightweight, connectionless and fast in comparison to TCP. To communicate with a DNS server, you simply throw a single UDP packet at it and it throws one back. Oh, and these packets cannot exceed 512 bytes in length. (Incidentally, many firewalls block UDP packets larger than 512 bytes in length.)
The diagram below shows the binary request I sent to my DNS server to look up the MX records for the domain microsoft.com and the corresponding response I received. To do this, I sent a 31 byte UDP packet to port 53 of my DNS server as shown below. It replied with a 97 byte response again on UDP port 53.
Both request and response share the same format, which starts with a 12 byte header block. This starts with a two byte message identifier. This can be any 16 bit value and is echoed in the first two bytes of the response, and is useful as it allows us to match up requests and responses as UDP makes no guarantees about the order in which things arrive. After that follows a two byte status field which in our request has just one single bit set, the recursion desired bit. Next comes a two byte value denoting how many questions there are in the request, in this case just 1. There then follows three more two byte values denoting the number of answers, name server records and additional records. As this is a request, all these are zero.
The rest of the request is our single question. A question consists of a variable length domain name, a two byte QTYPE and a two byte QCLASS in that order. Domain names are treated as a series of labels, labels being the words between dots. In our example microsoft.com consists of two labels, microsoft and com. Each label is preceded by a single byte specifying its length. The QTYPE denotes the type of record to retrieve, in this example, MX. QCLASS is Internet.
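As an illustration of the wire format just described (not of the article's C# component), the 31-byte MX request for microsoft.com can be assembled with Python's struct module; QTYPE 15 is MX and QCLASS 1 is Internet, per RFC 1035:

```python
import struct

def build_query(domain, qtype=15, qclass=1, msg_id=0x0001):
    """Build a single-question DNS query packet (RFC 1035, section 4)."""
    # 12-byte header: id, flags (recursion desired bit), then the
    # question/answer/authority/additional record counts.
    header = struct.pack(">HHHHHH", msg_id, 0x0100, 1, 0, 0, 0)
    # The question name: each label preceded by its length, then a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

query = build_query("microsoft.com")
print(len(query))        # 31, matching the request packet described above
```

Sending this buffer over UDP to port 53 of a DNS server and reading one datagram back is all the transport the protocol needs.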
The response we get back tells us that there are three inbound mail servers for the domain microsoft.com, maila.microsoft.com, mailb.microsoft.com and mailc.microsoft.com. All three have the same preference of 10. When sending mail to a domain, the mail server with the lowest preference should be tried first, then the next lowest etc. In this case, there is no preference difference and any of the three may be used. Let's look a bit more closely at the response.
You may have noticed that the first 31 bytes of the response are very similar to the request, the only difference being in the status field (bytes 2 & 3) and the answer count (bytes 6 & 7). The answer count tells us that three answers follow in the response. I refer those who are interested in the make up of the status field to the above RFC section 4.1.1, as I will not cover that here. You'll also notice that the question is echoed in the response, something which seems rather inefficient to me, but that's the standard. The first answer starts at byte 31 (0x1F).
The first part of any answer embeds the question in it so if you ask more than one question you know to which question the answer refers. A shortened form is used - rather than repeating the domain microsoft.com explicitly here which is wasteful when we've only got 512 bytes to play with. We reference the existing domain definition at byte 12 (0x0C). This requires just two bytes instead of 15 in our example. When examining the label length byte which precedes a label, if the two most significant bits are set, this denotes a reference to a previously defined domain name and the label does not follow here. The next byte tells you the position in the message of the existing domain name. Again the QTYPE and QCLASS follow, and then we start to see the part which is the answer.
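The compression scheme can be sketched with a small, hypothetical reader that follows pointers (a length byte with its two most significant bits set, i.e. 0xC0, signals a two-byte pointer whose low 14 bits give the offset of the existing name):

```python
def read_name(message, offset):
    """Decode a (possibly compressed) domain name from a DNS message."""
    labels = []
    while True:
        length = message[offset]
        if length == 0:                    # zero byte terminates the name
            break
        if length & 0xC0 == 0xC0:          # compression pointer
            # 14-bit offset into the message where the name continues.
            target = ((length & 0x3F) << 8) | message[offset + 1]
            labels.append(read_name(message, target))
            break
        offset += 1
        labels.append(message[offset:offset + length].decode("ascii"))
        offset += length
    return ".".join(labels)

# A toy message: "microsoft.com" at offset 0, then "mailc" + pointer to 0,
# mirroring the mailc.microsoft.com answer described above.
msg = b"\x09microsoft\x03com\x00" + b"\x05mailc\xc0\x00"
print(read_name(msg, 15))    # mailc.microsoft.com
```

This is how the two-byte reference at byte 12 of the response expands mailc back into the full mailc.microsoft.com.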
The next four bytes represent the TTL (time to live) of the record. When a DNS server can't answer a question explicitly, it knows (or can find out) another server which can and asks that. It will then cache this answer for a certain period to improve efficiency. Every record in the cache has a TTL after which it will be destroyed and re-fetched from elsewhere if needed.
The next two bytes tell the size of the record, the next two the MX preference, and then follows the variable length domain name. Here we only specify the mailc part of the domain, and then again reference the rest of the domain name at byte 12 (to produce mailc.microsoft.com). Two almost identical records follow for maila.microsoft.com and mailb.microsoft.com.
Now that everything is as clear as opaque crystal, let's look at the way you can use the supplied component to perform the domain look up for you. You will need to reference the assembly and import the Bdev.Net.Dns namespace. The following code illustrates the example above:
// Shameful hardcoding of my DNS server
IPAddress dnsServerAddress = IPAddress.Parse("194.74.65.68");
// Retrieve the MX records for the domain microsoft.com
MXRecord[] records = Resolver.MXLookup("microsoft.com",
dnsServerAddress);
// iterate through all the records and display the output
foreach (MXRecord record in records)
{
Console.WriteLine("{0}, preference {1}",
record.HostName, record.Preference);
}
This uses a simplified form of the interface which is for MX records. You could also do the same query, with the code which follows:
// Further shameful hardcoding of my DNS server
IPAddress dnsServerAddress = IPAddress.Parse("194.74.65.68");
// create a request
Request request = new Request();
// add the question
request.AddQuestion(new Question("microsoft.com",
DnsType.MX, DnsClass.IN));
// send the query and collect the response
Response response = Resolver.Lookup(request, dnsServerAddress);
// iterate through all the answers and display the output
foreach (Answer answer in response.Answers)
{
MXRecord record = (MXRecord)answer.Record;
Console.WriteLine("{0}, preference {1}",
record.HostName, record.Preference);
}
One annoyance with this component is that you have to explicitly tell the resolver the IP address of the DNS server to query, every time you do a look up. Ideally, it would use one of the default DNS servers so that you could leave this parameter out, but I have been unable to find a way of getting this information programmatically. If you know a way, then please let me know. It's haunting me.
Two things to note about DNS servers. Firstly, some support a TCP connection on port 53 in addition to UDP, and this can be used to get round the 512 byte limitation. Many do not, and I have not provided a TCP implementation. Secondly, although the protocol allows more than one question per request, many servers don't, probably to try and keep things within the 512 byte limit. I recommend that you only ever use a single question per request. If the response doesn't fit into 512 bytes, there is a truncation bit which is set in the status field.
Please report back to me any bugs/enhancements.
Initial release.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
if (domain.Length ==0 || domain.Length>255 || !Regex.IsMatch(domain, @"^[a-z|A-Z|0-9|-|_]{1,63}(\.[a-z|A-Z|0-9|-|_]{1,63})+$"))
{
// domain names can't be bigger than 255 chars, and individual labels can't be bigger than 63 chars
throw new ArgumentException("The supplied domain name was not in the correct form", "domain");
}
// do a sanity check on the domain name to make sure it's legal
if (domain.Length ==0 || domain.Length>255)
{
// domain names can't be bigger than 255 chars, and individual labels can't be bigger than 63 chars
throw new ArgumentException("The supplied domain name was not in the correct form", "domain");
}
try
{
System.Net.Dns.GetHostAddresses(domain);
}
catch {
throw new ArgumentException("The supplied domain name was not in the correct form", "domain");
}
Public Shared Function MXRecordLookup(ByVal eMailAddress As String) As List(Of String)
Dim lstServers As New List(Of String)
' Try each server in order until we get hits...
For Each dnsServerAddress As IPAddress In ListMyDNS()
Dim request As Request = New Request()
' add the question
request.AddQuestion(New Question(GetMailDomain(eMailAddress), DnsType.MX, DnsClass.IN))
' send the query and collect the response
Dim response As Response = Resolver.Lookup(request, dnsServerAddress)
' iterate through all the answers and add the servers to the list
For Each answer As Answer In response.Answers
    Dim record As MXRecord = DirectCast(answer.Record, MXRecord)
    lstServers.Add(record.DomainName)
Next
' Once we have servers, stop trying further DNS servers
If lstServers.Count > 0 Then Exit For
Next
Return lstServers
End Function
internal ANameRecord(Pointer pointer)
{
byte[] b = new byte[4];
for(int i = 0; i<4; i++)
b[i] = pointer.ReadByte();
_ipAddress = new IPAddress(b);
}
if (domain.Length ==0 || domain.Length>255 || !Regex.IsMatch(domain, @"^[a-z|A-Z|0-9|-|_]{1,63}(\.[a-z|A-Z|0-9|-|_]{1,63})+$"))
if (domain.Length ==0 || domain.Length>255 || !Regex.IsMatch(domain, @"^[a-zA-Z0-9\-_]{1,63}(\.[a-zA-Z0-9\-_]{1,63})+$"))
Dim strQuery As String = _
"SELECT * FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = True"
Dim query As ManagementObjectSearcher = New ManagementObjectSearcher(strQuery)
Dim queryCollection As ManagementObjectCollection = query.Get()
Dim dnsservers As String() = Nothing
For Each mo As ManagementObject In queryCollection
    ' Option Strict requires an explicit cast from Object to String()
    dnsservers = CType(mo("DNSServerSearchOrder"), String())
    Exit For
Next
http://www.codeproject.com/Articles/12072/C-NET-DNS-query-component?msg=2625146
Okay, here's the program I'm supposed to write... it involves looping and branching (I think :confused: )
Code:
#include <iostream>
#include <string>
int main()
{
using namespace std;
string fullname,Student_No,Phone;
int option;
char Sex;
int Age;
cout << "1)... Add Student Information" << endl;
cout << "2)... View Last Students Information" << endl;
cout << "3)... Quit" << endl;
cin >> option;
if (option == 1)
{
cout << "Name:" << endl;
cin >>fullname;
cout << "Sex" << endl;
cin >>Sex;
cout << "Please enter age of Student:" << endl;
cin >>Age;
cout << "Please enter the Student's Number:" << endl;
cin >>Student_No;
cout << "Please enter the Students phone number"<< endl;
cin >>Phone;
}
if (option == 2)
{}
return 0;
}
What I need help with is:
You see option 2... what do I type in order to view the last student's info?
AND
when entering option 3, how do I quit the program? I am sorry if my English isn't the best..
MANY MANY thanks in advance to anyone that can help me :D
https://cboard.cprogramming.com/cplusplus-programming/76719-school-assignment-need-help-printable-thread.html
hi christian,
On 10/6/06, UpAndGone <upandgone@web.de> wrote:
> Hi all,
>
> I would like to use an existing storage server that has a REST interface. So
> storing files is basically writing its contents to a HTTP stream. Would it
> be possible to implement this form of a file system for jackrabbit?
theoretically, yes. however, IMO it wouldn't make sense. jackrabbit uses
the FileSystem abstraction for persisting internal repository state (e.g.
custom node type definitions, registered namespaces etc.). some
persistence managers use the FileSystem for persisting serialized item
state (ObjectPersistenceManager, XMLPersistenceManager).
see
if you'd use such an http based FileSystem with one of these persistence
managers, performance would be painfully slow, at best.
>
> I briefly looked at the FileSystem interface. Problems I see with
> getRandomAccessOutputStream() and length() on remote files. Is implementing
> this interface the only thing I need to worry about? Also I remember I read
> somewhere on the jackrabbit website that writing a PM for an existing data
> store is not encouraged, is it?
correct, see
cheers
stefan
>
> Many Thanks
> Christian M.
>
>
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200610.mbox/%3C90a8d1c00610090244n4b4e48a3te66c8570ba5eec4c@mail.gmail.com%3E
15 September 2009 06:46 [Source: ICIS news]
By Bohan Loh and Jeremiah Chan
SINGAPORE (ICIS news)--Spot prices of fibre intermediates paraxylene (PX) and purified terephthalic acid (PTA) in Asia could extend their slide if producers do not cut output amid weak demand, market sources said on Tuesday.
PX was in danger of breaking $900/tonne (€612/tonne) CFR (cost and freight).
A Japanese trader said current spot PX discussions have been below this level.
“The key issue here is that there has been no real buying interest from the PTA makers,” said a Korean PX producer.
“If regional producers do not reduce operating rates soon, prices may continue to fall further,” he added.
Spot PX values fell a hefty $25-35/tonne on Monday afternoon to close at a six-month low of $900-920/tonne CFR Taiwan, according to global chemical market intelligence service, ICIS pricing.
“My customers were asking for cargoes below $890/tonne on a CFR basis,” said another trader who deals on aromatics in southeast Asia.
Heading in the same southward path was PTA, which could fall below its key support of $800/tonne CFR China Main Port (CMP) soon as sentiment on
This price was not seen over the past four months.
On Monday, trading of PTA futures came to the verge of being halted after falling as much as 3.65% on the Zhengzhou Commodity Exchange (ZCE), which allows maximum 4% price swings for commodity futures contracts.
This may have caused the sharp downward spiral of PX values on the same day, market sources said.
PTA spot market players have been increasingly tracking movements on the ZCE futures contracts to gauge market sentiment.
For PX, there was an abundance of prompt delivery cargoes available that allowed end-users and buyers to scout around for the best bargains, market sources said.
A Chinese end-user decided to hold back its purchases even after receiving multiple PX offers for delivery in the second half of September, traders said.
Demand for the fibre intermediates may not pick up after the week-long Chinese National Day holidays from 1-8 October, market sources said.
“Market players [in
In the fourth quarter of last year, PTA makers incurred heavy losses as the commodity nearly shed 30% of its value over a three-month period, based on ICIS pricing data.
Declines in crude and naphtha values further compound the price pressures on PX and PTA, market sources said.
http://www.icis.com/Articles/2009/09/15/9247343/asia-px-pta-may-break-key-price-supports-on-weak-demand.html
Dear All,
I am trying to translate C code to inline assembly to check whether I get a (timing) performance improvement.
However, I am having problems using my variables from C code in inline assembly code.
For example, I have:
u8x8_out = vqmovn_u16(u16x8_tmp);
and the following assembly code:
And the error message I get is:
Error: Neon quad precision register expected -- `vqmovn.u16 d22,d22'
I have seen other examples but they always seem to show "r" inputs/outputs. Is it that I always need to load data via registers? Or through an intermediate quadword/doubleword?
Thanks in advance for the help.
Sorry for the late response.
I tried the example you used and I get vqmovn generated without errors. What is the type of the input arg (u16x8_tmp)? Is it defined as uint16x8_t?
#include "arm_neon.h"
uint8x8_t foo(uint16x8_t u16x8_tmp)
{
uint8x8_t u8x8_out;
asm(
"VQMOVN.U16 %0, %1;\n"
:"=w"(u8x8_out)
:"w"(u16x8_tmp)
:
);
return u8x8_out;
}
clang -mfloat-abi=softfp -mfpu=neon -ccc-gcc-name -mcpu=krait2 -S asm.c
foo:
@ BB#0:
vmov d17, r2, r3
vmov d16, r0, r1
@APP
VQMOVN.U16 d16, q8;
.code 16
@NO_APP
vmov r0, r1, d16
bx lr
.Ltmp0:
.size foo, .Ltmp0-foo
Hi Raja,
Thanks for your answer. I was using gcc to compile this code and for some reason it complained about this snippet. This specific one I tried from a bug report I found on the web. I did manage to get it to work and switched to the real snippet I intended to use (UDIV instruction).
For the sake of completeness:
What is the type of the input arg (u16x8_tmp). Is it defined as uint16x8_t? Yes it is defined as that type.
I also put here the code snippet, for whoever may need it:
I compiled with the suggested flags and it worked. ;)
Francisco
https://developer.qualcomm.com/forum/qdn-forums/increase-app-performance/snapdragon-llvm-compiler-android/27391
|
Trip report: Evolution Working Group at the Summer ISO C++ standards meeting (Toronto)
Andrew
The Summer 2017 ISO C++ standards meeting was held July 10-15 at the University of Toronto. Many thanks to Google, Codeplay, and IBM for sponsoring the event, as well as to folks from Mozilla, Collège Lionel-Groulx, Christie Digital Systems, and Apple for helping to organize. And, of course, we very much appreciate Waterfront International for sponsoring a banquet at the CN Tower.
We had a productive and rather harmonious Evolution Working Group (EWG) session this year in Toronto. There were 45 proposals discussed over five days and three evening sessions: a Tuesday night session on Concepts and a Thursday night joint session with SG7, the Reflection and Metaprogramming Study Group. Most of us also participated in the Monday night session on P0684R0, C++ Stability, Velocity, and Deployment Plans.
C++ Standards Committee meetings are a lot of hard work: four-hour sessions spent in smaller working groups like EWG every morning and afternoon, and a few hours spent on a topic in-depth most evenings. And on Saturday there’s a closing Plenary session with the whole group of roughly 120 experts who came from around the world to attend the meeting. But it all goes smoothly because there’s a lot of work done between meetings by the officers of WG21, the subgroup chairs, and of course the paper authors and all the attendees who (should have) read most of the papers they’ll be discussing before the presentations. There’s more work done between meetings to improve proposals: very few significant proposals are accepted on their first presentation. As Herb Sutter, Convener of WG21 says, “smooth never happens by accident.” You’ve got to be prepared if you want things to go smoothly.
There are many trip reports available online written by everyone from experienced participants to first-timers. This report is intentionally narrow. It focuses on the Evolution Working Group, where I spend my time as the working group’s scribe. This report is meant to be a summary of EWG’s work in Toronto rather than an explanation of the whole C++ standards working group’s (WG21) progress.
Links are provided for all papers. The linking service should automatically forward to the latest revision of the paper, not necessarily the version discussed in Toronto. If the paper you view has a larger revision number (e.g., PxxxxR1 instead of PxxxxR0) it should incorporate feedback from the Toronto discussions.
Concepts Technical Specification merged into the draft Standard
The biggest news of the Toronto meeting is that we merged the Concepts TS into the C++ draft standard for C++20. The presentations capped off with a marathon evening session regarding removal of the template introducer syntax and the “natural syntax”. The stated goal of this proposal, P0696R0, is to remove contentious parts of the Concepts syntax so that we could successfully merge the TS into the draft Standard.
The main argument raised in favor of the natural syntax (also called “abbreviated” or “terse” syntax) is that it supports generic programming, specifically Stepanov-style Generic Programming. The emphasis is on the usage, not the language itself. Simplifying the usage of the language promotes sound programming styles and paradigms.
After much discussion, the group voted to remove these two syntaxes, noting that we can add the natural syntax in later. Examples raised were the fact that we didn’t include generic lambdas when we introduced lambdas, and
constexpr expanded greatly in its capabilities from its introduction in C++11. EWG is committed to bringing back an abbreviated syntax in future meetings, ideally before C++20 is finished.
We had six discussions on the topic of Concepts. The discussions are listed in chronological order. Later discussions can partially override decisions of earlier discussions.
- Richard Smith, Project Editor for the working draft, and Andrew Sutton, Project Editor for the Concepts TS, presented two papers, each of which received strong support.
- P0717R0: This proposal simplified the rules for determining if two constraints are equivalent. Previously, implementations had to compare concepts for equivalency token-by-token.
- P0716R0: Before the February 2017 meeting we had two ways of writing concepts: one as a function, one as a variable. This proposal unified the concept definition syntax. Specifically, it removed the keyword
bool and other complexities of the variable declaration syntax.
- P0691R0 lists current issues with the Concepts TS. We addressed only Issue 21: require parentheses around requires clauses to help with parsing: requires(bool(T)).
- P0694R0: This paper accompanies a presentation from Bjarne Stroustrup on a “natural” syntax for function declarations using concepts.
- P0696R0: The Tuesday night discussion concerned this proposal–see above for a summary.
- The last discussion, on Wednesday afternoon, was to clarify to the Core Working Group (Core) that an
auto in a template argument of a type of a variable or a parameter declaration or a return type should not be valid. The discussion was meant to tie up some loose ends from Tuesday night’s decisions.
Modules Technical Specification sent out for PDTS
The big news in EWG would have been the progress we made on the Modules TS if Concepts hadn’t stolen the show. Representatives from Google and Microsoft talked about their experience adopting modules and compiler implementers proposed clarifications and modifications to the TS wording. At the closing Plenary meeting we sent the Modules TS out for its comment and approval ballot, known as PDTS. Going to PDTS early in the C++20 cycle increases the chances of polishing C++ Modules in time for inclusion in C++20.
We had eight discussions on Modules:
- P0629R0: The paper proposes a syntax,
export module M; to distinguish interfaces from implementations. Currently the only way a compiler knows if it’s compiling an interface or an implementation is a command-line option or a file suffix. We approved this proposal and sent Nathan Sidwell (Facebook), implementer for GCC’s modules, off to Core.
- P0584R0: We did not reach consensus on module interface partitions—being able to split interfaces across multiple files. It’s clear some developers want partitions but it wasn’t clear to EWG members what changes should be made.
- Nathan Sidwell (Facebook) also presented about some ambiguous wording in the Modules TS. Gabriel Dos Reis, editor of the Modules TS, captured these on the Modules TS Issues list.
- P0721R0: Regarding ambiguity on the export of using declarations. We identified that the wording is ambiguous but did not reach a plan of action in the meeting. We left this for Nathan and Gabriel to finalize.
- P0714R0: Regarding exporting entities with identical names in and out of a namespace scope.
- Representatives from Bloomberg presented P0678R0, listing a set of three business requirements for modules. We agreed that the Modules TS as written satisfied these requirements.
- Modules must be additive, not invasive, such that a library can be exposed either through header files or modules to different consumers.
- Modules can support library interfaces at a higher level of abstraction.
- Modules don’t allow fragile transitive inclusions.
- Chandler Carruth from Google presented build throughput gains from their experience modifying their build system to automatically convert some common header files to be consumed as Clang modules.
- Gabriel Dos Reis from Microsoft presented about his company’s experience and expectations about using modules at scale in the huge Windows codebase and build system.
- P0713R0: Daveed Vandevoorde, an implementer of the EDG compiler, proposed that we mark the global module declaration at the top of the file. This allows a compiler parsing a module unit source file to know it’s a module when parsing the top of the file without having to be passed context from the build system, compiler switches, or filename extensions. We’ll adopt this change after the Modules PDTS is published.
Coroutines Technical Specification (and two more!)
And if moving Concepts into the Standard and moving Modules to PDTS wasn’t enough, the larger WG21 group also completed our review of the Coroutines TS, the Networking TS, and the Ranges TS. EWG’s part was to clarify that a couple of issues on the Coroutines TS (CH001 and US013) are not defects that should prevent merging the Coroutines TS into the draft Standard. See P0664R0 for more details.
C++20 is shaping up to be an exciting release!
Other evening sessions
In addition to the evening session on Concepts, we also had evening sessions with SG7, the Reflection and Metaprogramming Study Group, and a session on C++ Stability, Velocity, and Deployment Plans (P0684R0).
Many papers were discussed at Thursday’s SG7 meeting, including P0670R0, P0425R0, P0707R0, and P0712R0. P0327R2 was handled directly by EWG in a daytime session. You can read more about the metaprogramming papers in Herb Sutter’s post: Metaclasses: Thoughts on generative C++.
One topic at Monday’s evening session on the future of C++ was about whether we can actually break code by removing deprecated features from the Standard. P0619R1, heard in EWG a couple of days later highlighted many deprecated features that could potentially be removed. After discussing three of these that concerned the core language (as opposed to library changes) we decided the only one that could be removed was
throw(), which has been deprecated for three standards.
Proposals sent to Core
Four proposals were sent to Core during this meeting. When a proposal is forwarded to Core it means that EWG has approved the design and requests that Core review wording to include this proposal in the draft Standard. It might appear that a proposal is done at this point, but it’s really only about halfway done. From the EWG perspective this is the end of the journey but it’s a long way to being part of a published Standard.
The following proposals were forwarded to Core:
- P0683R0: We previously decided we want a syntax for bitfield default member initialization. This proposal narrowed down the syntax choice.
- P0641R0: This paper concerned Issue 1331 raised by Core. The issue surfaced with wrapper types where a constructor with a parameter that is a reference to non-const can conflict with the defaulted copy constructor.
- P0634R0 proposed that the typename keyword be optional, e.g., template<class T> struct D: T::B { // no typename required here
- P0614R0: This proposed a new range-based for (init; decl : expr) that allows initialization statements in the for statement itself rather than requiring that the initialization statement precede the for statement.
A few other proposals were approved by EWG but not sent immediately to Core. Some were sent to the Library Evolution Working Group (LEWG) for more work from a different perspective. Others were approved to go to Core, but not until the November meeting in Albuquerque. See below for a little more information on these, as well as some that were rejected by EWG.
Other proposals in design
WG21 is primarily a design group, and EWG’s main activity is discussing how the language should evolve. We entertained, advanced, considered, and rejected many other proposals. Here’s a list of everything else we discussed, sorted loosely into a few general topics.
Feature test macros
We had three presentations on the future of feature test macros: P0697R0, P0723R0, and a presentation called “Feature Test Macros Considered Harmful”. After much debate we decided on a small change from status quo: the document concerning feature test macros, SD-6, will remain a WG21-authored specification but we will plan to have it formally approved by WG21 as a Standing Document in a group-wide Plenary session.
Structured bindings
P0609R0: This proposal allowed for attributes such as
[[maybe_unused]] on the members of structured bindings.
Memory
- P0132R0: Explores non-throwing containers for memory-constrained environments.
- P0639R0: In past meetings we’ve talked about constexpr_vector and constexpr strings. The options considered were allocators that work in a constexpr context, or having new and delete work in constexpr contexts. This proposal received strong support and will return in a future meeting.
- P0722R0 proposes another form of operator delete() for variable-sized classes. The discussion opened up a lot of questions that need to be answered before the proposal moves forward.
Argument deduction, lookup, type detection, specialization
- P0702R0: This paper addresses design clarifications for class template argument deduction. It advances ideas proposed before to EWG.
- P0389R0: This paper proposed wording clarifications to help with argument-dependent lookup for some calls to function templates. We realized during discussion that we could in fact remove the
template keyword in these calls altogether. A new paper is forthcoming.
- P0672R0: Proposes syntax to allow type detection for proxies and expression templates. It also proposes a noeval() to disable implicit evaluation but still allow automatic type deduction.
- P0665R0: Allows specializing class templates in a different namespace using fully qualified names. This helps to preserve code locality.
Lambdas
- P0624R0: This proposes default constructible and assignable stateless lambdas, allowing them to be used where function objects are today. Programmers—or meta-programmers—could create in-line a piece of code that can be stored and retrieved from the type system.
- P0238R1: This proposal aims to make lambdas more useful with constrained libraries. It received strong support as well as encouragement to work on a terser lambda syntax.
Indexing into bitfields and tuple-like types
- P0573R1: We encouraged the bit_sizeof and bit_offset operators to wait for the Reflection study group to make progress that can enable these operators.
- P0327R2 concerns std::product_type. We don’t yet have a syntax to propose product type operators to get the size and nth element. Expect this to return to EWG.
Precise assertions & marking unreachable code
- P0681R0: Lisa Lippincott continued examining the precise semantics of assertions. At the end of this presentation we identified three proposals we’d like to see explored further: two in EWG in conjunction with Contracts, and one, std::unreachable, in LEWG.
- P0627R2: A std::unreachable type was endorsed and forwarded to LEWG for further discussion.
- P0627R1: This proposal suggests an attribute to mark unreachable code, similar to __builtin_unreachable() or __assume(false).
Proposals that we discouraged
Some proposals, no matter how well-reasoned and insightful they may be, are just not seen to be a good fit for the language at this time. Some proposals seem like they would introduce too much complexity if adopted. Others are just good ideas that won’t fit in the language. EWG discouraged further work on the following proposals unless there are fundamental changes to the approach that would make them more palatable to the group.
- P0312R1: This paper proposed making pointers to members callable for the benefit of generic code. It had neither strong support nor opposition amongst the group, but faces strong National Body opposition. Because a draft Standard cannot be approved without National Body consensus it’s incumbent upon the author to work to achieve this consensus before we can move forward.
- P0671R0: Named function parameters—or “parametric functions” are a common feature in other languages. They have been repeatedly suggested for C++ in different forms, but the syntactic implications are difficult to work through.
- P0654R0: Add explicit to a struct to require that all members be initialized. This proposal is interesting, but as compilers can verify that all members are initialized, possibly we’d want the opposite approach, to suppress the compiler’s verification on a struct.
- P0637R0: Allow the lambda by-value capture of *this to rebind this to arbitrary objects. In a lambda, *this can only be captured by name, not by initializer. This proposal is for an init-capture of *this.
In closing
It was a great meeting and, as always, a ton of work. It’s amazing to think that a group of 120-ish people can meet and decide on anything, but we accomplished quite a bit at the Toronto meeting. I’m personally looking forward to our meeting in Albuquerque this November where we can keep building an amazing C++20 release!
And as always, thank you to the hundreds of people who provide feedback and help us improve the C++ experience in Visual Studio. If you have any feedback or suggestions for our team, please).
https://devblogs.microsoft.com/cppblog/trip-report-evolution-working-group-at-the-summer-iso-c-standards-meeting-toronto/
Yesterday,.
If you want to define a new type that accepts dynamic operations, then it's easy to do so, and you don't even need C# 4. You just implement IDynamicMetaObjectProvider, which is the interface that tells the DLR, "I know how to dispatch operations on myself." It's a simple interface, in the spirit of IQueryable, with a single method that returns a "DynamicMetaObject" to do the real heavy lifting.
So, here's an implementation:
public class MyDynamicObject : IDynamicMetaObjectProvider
{
public DynamicMetaObject GetMetaObject(Expression parameter)
{
return new MyMetaObject(parameter, this);
}
}
Simple enough! Here's the MyMetaObject definition. The DynamicMetaObject "knows how" to respond to a variety of actions including method calls, property sets/gets, etc. I'll just handle those (no best practices here; this is a minimal implementation):
public class Program
{
static void Main(string[] args)
{
dynamic d = new MyDynamicObject();
d.P3 = d.M1(d.P1, d.M2(d.P2));
}
}
So I take my MyDynamicObject that I defined above, and then I get a few properties, call a few methods, and set a property for good measure. If you compile and run this, you get the following output:
GetMember of property P1
GetMember of property P2
Call of method M2
Call of method M1
SetMember of property P3
I think that's pretty cool.
Previous posts in this series: C# "dynamic"
Hey, Chris — do you think you could explain the parameter argument to GetMetaObject, and how/why it’s supposed to be used?
Hi Keith,
The DLR uses expression trees internally to communicate rules and actions that are the result of binding particular operations. The parameter parameter is a ParameterExpression that represents the target object.
I would prefer to spend my time talking about C#, so I’m not going to pursue the details of the DLR implementation much deeper than that. Unfortunately I also don’t have any PDC-era pointers for you, although Martin Maly’s blog is a great resource. You should install the CTP and debug around. The code I’ve posted compiles and works with it.
chris
Unfortunately, Martin’s blog is absent this info. I’ll see if I can prod him or his cohorts about it if I remember. It’s been a while since I was active in DLR-land (ie, shortly after I joined MSFT).
Back to C# itself, part of why I ask, actually, is in response to the known limitation of dynamic method binding in C# not being able to bind extension methods. I’m pondering what sort of workarounds would be possible, if any.
For that matter, I wonder why it’s even a limitation? The compiler could embed information about what extension methods would have been in-scope at that location during the original compile, and pass that to the late-binding algorithm. Or is this the DLR’s limitation? (I remember looking into it a while ago, and almost had it hacked into submission)
Or is it just below the feature bar, like yield foreach? 🙂
Here are a few good resources that you ought to look at for information about dynamic from PDC: Anders
@Keith
My guess is that the reason for not being able to bind extension methods is that the information about whether an extension method is in scope is not available to the binders at runtime. The default binder for CLR objects can only find out what members a type has, and extension methods are not a part of the type to be extended; there isn’t quite a way to pass this piece of info into the DLR in the way it is designed now.
Well, someone might be able to get a smarter binder to do this, of course. If a binder’s constructor can take a list of possibly applicable extension methods’ MethodInfo (which the C# compiler would know at compile time), then it’s possible for the binder to get extension methods working.
@RednaxelaFX
Your second paragraph was the point I was trying to make. Having spelunked through earlier releases about a year ago, I know roughly where the DLR would need that information. It’s just a matter of putting it there and, of course, testing it to within an inch of someone else’s life.
That, unfortunately, I think would be the biggest blocker in having this capability in C#4.
@Keith
Well, I’d like to put it this way:
Think about it: in IronPython, Python types that map onto plain CLR types can have extensions over them; and the same is true of IronRuby’s Ruby types and all other languages based on the DLR, except…C# and VB I guess. The reason for this is that C# and the DLR make extension types in different ways. C# extension methods are just syntactic sugar resolved at compile time and don’t really carry into runtime, whereas IronPython extension methods for Python types register themselves to the binder with attributes. Thus C# extension methods are only available in a certain scope at compile time (which is somehow a good thing, less confusion), while Python extensions affect the whole Python part of the program.
If one wants the same semantics as what IronPython extensions give, he/she can simply implement extension methods similar to the way IronPython does today — with the restriction that the target type to be extended has to implement IDynamicObject for custom dynamic lookups, which probably turns into inheriting from System.Dynamic.DynamicObject, which might not be feasible.
The other way around, the C# compiler will have to generate a list of possibly applicable extension methods in scope, and pass that list into the call payload at the call site, so that C# binder has a chance of picking it up later at runtime. Haven’t thought about the implications of this; it might imply further complicated scoping rules, and that’d be a problem.
@Chris
I’d like to ask a question: where’s that System.Dynamic.DynamicObject class in Anders’ and Jim’s talk? Didn’t find it in System.Core.dll in the CTP. Tried the IDO impl sample from C# Future and it worked, though.
Re: the lack of support for extension methods. You’re both on the right track. Extension methods are a compile-time feature that would have required us to push some context into the C# Runtime Binder. It wouldn’t have been impossible, but consider that extension methods are often used in LINQ scenarios and that there, we have a bigger problem of converting lambdas to "dynamic." So we decided to punt on this for C# 4. All decisions like this are difficult and complicated.
RednaxelaFX, DynamicObject is not in the CTP. I mentioned that in Part III. Sorry!
Hey RednaxelaFX – if you want to look at a more detailed example on how to implement IDynamicObject take a look at my post – I don’t go into details, but it might help you figure the parameters out.
Cheers,
Tobi
@Tobi
Thanks for the link. 🙂
I didn’t have any problems with understanding IDynamicObject and implementing my own, though…in fact I implement it a few times already. I implemented my first one back in May for my bachelor DP. Later on, the whole IDynamicObject interface changed and got deprecated as the IOldDynamicObject, and a newer version of IDynamicObject appeared, with a clearer MetaObject Protocol.
I’m writing a converter in a different perspective from yours, and I’ll see if I can clean the code up a little so that I can write a post about it. Thanks again for the link, didn’t think of using the Castle library before.
Welcome to the 47th Community Convergence. We had a very successful trip to PDC this year. In this post
@RednaxelaFX
Sorry, totally phased out and copied the wrong name. 😀 I guess I was thinking about Keith J. Farmer. Sorry. 😀
Anyway, I am looking forward to your post about the converter. I would be glad, if you could drop me a link!
Cheers,
Tobi
Today, let’s geek out about the language design regarding the dynamic type. Type in the language vs.
I’m messing around with the VS 2010 CTP but I can’t find the System.Dynamic namespace. What assembly is this in?
@KeithH,
It’s not there in the CTP. But you may want to check out the lastest version of DLR, which has this System.Dynamic namespace. You can find it in the source drops on IronPython’s CodePlex site.
Let’s look at this: dynamic d = null ; object o = d; // not an implicit conversion Last time , I said
I’m having a difficult time grocking MetaObject Call, and getting it to actually return an object.,
|
https://blogs.msdn.microsoft.com/cburrows/2008/10/28/c-dynamic-part-ii/
|
CC-MAIN-2017-39
|
refinedweb
| 1,459
| 61.16
|
TIP: Round a Decimal to an Integer
How will your code round decimals to integers? Most people will focus the decimal fraction on the round off topic, so that they will separate the integer and the decimal fraction at first, and round the integer if the decimal fraction is equal to or lager than 0.5. So, here is a common code:
// Original round off function. template <class T> int Round(T &value) { int nInt = (int) value; T tDecimal = value - nInt; if( 0 <= value) { if( tDecimal >= 0.5 ) nInt += 1; } else { if( tDecimal <= -0.5 ) nInt -= 1; } return nInt; }
However, you don't need to determine so many things. The system will copy only the integer number if you cast the value to be the type of the integer. See the disassembly code of int nInt = (int) value, as shown below:
0041206E mov eax,dword ptr [value] 00412071 fld qword ptr [eax] 00412073 call @ILT+220(__ftol2_sse) (4110E1h) 00412078 mov dword ptr [nInt],eax
So, here is my solution:
// New round off function. template <class T> int Round(T &value) { if( 0 <= value) return (int) (value + 0.5); else return (int) (value - 0.5); }
The value will round off after adding 0.5 if its decimal fraction is larger than 0.5.
Furthermore, this solution also will save operations, stacks, so as time, that is, the new round off solution is simpler and more efficient. You can see both round off functions in the disassembly to compare. You will feel the benefit is obvious, especially when running a game engine.
Let me test by using the following code:
int _tmain(int argc, _TCHAR* argv[]) { double d = 0; int n = 0; d = 123.55; n = Round(d); printf("Round(%f) is %d\n", d, n); d = 123.05; n = Round(d); printf("Round(%f) is %d\n", d, n); d = -123.55; n = Round(d); printf("Round(%f) is %d\n", d, n); d = -123.05; n = Round(d); printf("Round(%f) is %d\n", d, n); return 0; }
and its output is:
Round(123.550000) is 124 Round(123.050000) is 123 Round(-123.550000) is -124 Round(-123.050000) is -123
|
https://www.codeguru.com/cpp/cpp/algorithms/math/article.php/c15071/TIP-Round-a-Decimal-to-an-Integer.htm
|
CC-MAIN-2020-10
|
refinedweb
| 362
| 74.9
|
new coming antique kids outdoor sports playground equipment
US $1000-5000 / Set
1 Set (Min. Order)
Antique Paradise Playground Equipment,Indoor Sports Games For Kids
US $80-150 / Square Meter
20 Square Meters (Min. Order)
sport Fitness Equipment For Adults ,building machines
US $600-2500 / Set
5 Sets (Min. Order)
import sports equipment/ Seated Chest Press TZ-6005
US $300-650 / Piece
1 Piece )
LJ-5531 Multi-adjustable bench antique sports equipment
US $200-400 / Piece
1 Piece (Min. Order)
ZC-2000D antique sports equipment
US $219-259 / Piece
50 Pieces (Min. Order)
2016 newest design style strength machine Antique sports equipment DFT-819 Rotary Torso/DEZHOU in shandong
US $360-500 / Piece
1 Piece (Min. Order)
antique sports equipment(TM-3000DS)
US $200-300 / Box
50 Pieces )
2015 new cheapest sport equipment,Elliptical bike,Air bike
US $1-100 / Set
200 Sets (Min. Order)
Mulifuctional sports equipment / Seated Should PressXH01
US $1200.0-1200.0 / Piece | Buy Now
1 Piece (Min. Order)
Sports equipment for exercise dumbbell
US $0.9-8.9 / Pair
1000 Pairs (Min. Order)
Factory Direct Sale AB Chair Gym Sport Equipment
US $20-30 / Set
1944 Sets (Min. Order)
carbon fiber sports equipment
US $10-100 / Piece
5 Pieces (Min. Order)
2016 new Family treadmills sports fitness equipment
US $230-299 / Set
1 Set (Min. Order)
basketball sports equipment
US $2.80-10 / Piece
10 Pieces )
fitness bike magnetic exercise bike elderly care products import sports equipment
US $51-53 / Piece
300 Pieces (Min. Order)
Fashion Design Sports Equipment And Frame Made It
US $0.99-100 / Set
1 Set (Min. Order)
commercial fitness equipment Type Sports equipment
US $400-500 / Set
3 Sets (Min. Order)
China kung fu Taiji cloud hands sport equipment suppliers
US $1000-1046 / Set
1 Set (Min. Order)
Abdominal Machine, commercial sports equipment
US $422-552 / Set
10 Sets (Min. Order)
electrical equipment / sports equipment
300 Pieces (Min. Order)
Air Walker (Three-unit) Outdoor Sports Equipment
US $295-395 / Set
1 Set (Min. Order)
Popular and multi-function outdoor fitness equipment
US $50-500 / Set
1 Set (Min. Order)
electric massage chair sports fitness equipment AMA-996B
US $198-259 / Piece
Trade tools
|
http://www.alibaba.com/showroom/antique-sports-equipment.html
|
CC-MAIN-2016-44
|
refinedweb
| 363
| 75.4
|
Introduction to Palindrome in C++
A palindrome is a number, sequence or a word that reads the same backward as forwards. Madam In Eden, I’m Adam is one of the best examples of palindrome words that sounds the same after reversing. This is where palindrome makes things interesting they act as mirrors. The name ‘palindrome’ actually means running back again according to Greek etymology. In C++ palindrome number is a number that remains the same after reverse. But how is this possible? How will we check if a number is too big and complex? Always keep in mind this small algorithm to check if a number is a palindrome or not.
- Get the input number from the user.
- Hold it in a temporary variable.
- Reverse the number.
- After reversing compare it with a temporary variable.
- If same then the number is a palindrome.
Don’t worry here is an example suppose we have to print palindromes between the given range of numbers. For example range is {10,122} then output should be {11, 22, 33, 44, 55, 66, 77, 88, 99, 101, 111, 121}
C++ program to Implement Palindrome
#include<iostream>
using namespace std;
// Function to check if a number is a palindrome or not.
int Palindrome(int n)
{
// Find reverse of n
int reverse = 0;
for (int i = n; i > 0; i /= 10)
reverse = reverse*10 + i%10;
// To check if they are same
return (n==reverse);
}
//function to prints palindrome between a minimum and maximum number
void countPalindrome(int minimum, int maximum)
{
for (int i = minimum ; i <= maximum; i++)
if (Palindrome(i))
cout << i << " ";
}
// program to test above functionality
int main()
{
countPalindrome(100,2000);
return 0;
}
Output:
Let’s take one more example specifically using n,sum=0,temp,reverse;
cout<<"Please enter the Number=";
cin>>n;
temp=n;
while(n>0)
{
reverse=n%10;
sum=(sum*10)+reverse;
n=n/10;
}
if(temp==sum)
cout<<"The number is Palindrome.";
else
cout<<"The number is not Palindrome.";
return 0;
}
Output:
The above code will take a number as an input from the user and put it into a temporary variable as you can see that sum is already 0 it will use a while loop until the number becomes 0 and as the code is written it will perform the operation as written after while loop. If the number becomes 0 then it will check if the temporary variable is equal to the sum or not. If condition satisfies then it will print that the number is palindrome otherwise if condition fails it will go to else part and will print that the number is not a palindrome.
One more example using a x, number, reverse = 0, temp ;
cout << "Please enter a number here: ";
cin >> number;
x = number;
do
{
temp = number % 10;
reverse = (reverse * 10) + temp;
number = number / 10;
} while (number != 0);
cout << " The reverse of the number is: " << reverse << endl;
if (x == reverse)
cout << " Entered number is a Palindrome.";
else
cout << " Entered number is not a Palindrome.";
return 0;
}
Output:
Advantages
- Suppose that in your project you want to match first string/element with the last one then second element/string to second last one and so on and the string will be palindrome if you reach to the middle. By just using for loop you can perform all the operations and it saves a large amount of time and space when it comes to programming because in this case, you neither have to modify the existing string nor write another variable to memory. Also, the matches required in completely equal to half of the string length.
- If you are working on a programming language where string reversal is easy but it will require an extra amount of space to store that reverse string in another way such as recursion require more stack frame. There is one more way rather than recursion and that is writing a loop in the middle of the string to check if the corresponding letter at each end is the same or not. If unequal then break the pair early and declare the string as not a palindrome.
- The above approach has the advantage of not wasting any computational resources such as recursion, without needing extra stack frames, but it’s also not simple as just reversing the string and checking the equality between them. It does take effort but it will always be less than other algorithms because that is the simplest way to find a palindrome.
- Each technique has its benefits in programming and there are thousands of other ways of doing the same task but in an efficient way. It completely depends upon your current project you are working on. You only have to decide according to your situation that which technique will help you give the best benefits irrespective of the drawbacks.
- In a real project, you need to perform n numbers of palindrome checks on a frequent basis in a short span of time then you should implement the above algorithm in the first place until and unless you require a more optimistic solution for current technical constraints.
Conclusion
By using a palindrome algorithm you can make your search more efficient and faster in finding palindromes irrespective of data types such as string character or integer. For projects that have multiple data in the different systems, these algorithms can be used to make overall performance much faster.
Recommended Articles
This is a guide to Palindrome in C++. Here we discuss the C++ program to check and implement the Palindrome with the Advantages. You may also look at the following article to learn more –
- Palindrome Program in C++
- Best C++ Compiler
- Fibonacci Series in C++
- Overloading in C++
- Overloading in Java
- C++ Data Types
- Python Overloading
- Top 11 Features and Advantages of C++
- Fibonacci Series In JavaScript with Examples
- Different Methods of Reverse String in C
- Complete Guide to Reverse String in Java
- Various Loops in Reverse String in PHP
|
https://www.educba.com/palindrome-in-c-plus-plus/
|
CC-MAIN-2020-29
|
refinedweb
| 993
| 64.14
|
High-level logic is typically full of indirection, so performance suffers when it’s used heavily. This is the case in game AI when you start adding more behaviors; a constant overhead for virtual calls quickly takes its toll.
So what do you do to optimize your engine, once the typical bottlenecks (i.e. collision queries, animation, path-finding) are taken care of? The best thing you can do is optimize your logic framework… and virtual functions are the primary target!
1) Prepare for Platform-Specific Optimizations
The first strategy is to abstract away frequent calls to virtual functions. (Yes, this means hiding slow indirection behind fast indirection!) You’ll have to do lots of experimenting to optimize for each platform, so it’s best to keep the performance sensitive code localized.
Make sure your most common observer dispatches, uses of the strategy patterns, or frequent virtual calls in general are hidden behind macros or ideally inline functions (these have no overhead in Release builds). You want to to be able make changes in very few places when optimizing.
2) Call Fewer Virtual Functions
If you have a large API, many virtual methods might use the empty default implementation, or may be called for very small operations. Even if it’s a nice modular interface, this isn’t ideal for speed.
Consider simplifying your API so all function implementations are useful and mandatory. If necessary, consolidate multiple virtual functions into one big function. (Keep the API the same, just move the point of indirection.)
In some cases where functions are called infrequently, it can be faster to have a conditional check of an empty pointer, then double dereferencing. (In the best case, this gives you a 200% speed increase.)
This last tricks strategically increases the level of indirection for performance, but it should be avoided for frequent calls.
MyObject& obj; MyObject* ptr; // Profile this: obj.doOptionalStuff(); // Against this: if (ptr != NULL) { ptr->doStuff(); }
3) Consider Memory Allocation
The performance hit for indirection is much lower when your objects are in the cache. You want to be allocating memory as wisely as possible. Objects that are called nearby in the code should be together in memory.
Preventing excessive cache misses isn’t always straightforward for logic built as a tree or graph. You’ll need to experiment with different strategies, but just having your own allocators for small virtual objects is a good start. Specifically, keep all the logic for actor AI in a small and well identified block of memory.
4) Use Compile-Time Indirection
In many cases, indirection is only necessary for ease of development. If that’s the case, you probably don’t need runtime indirection, and you can optimize it out using C++ tricks.
Consider using a template method pattern, where the implementation of the algorithm is a C++ template. This allows the compiler to optimize calls to functions. On the down side, it can bloat the code and it’s more awkward to write.
template
class MyAlgorithm : public BASE { void process() { // Non-virtual call can be optimized. while (BASE::step()) { } } }
Alternatively, consider helper classes in the form of the curiously recurring template pattern (CRTP). This the base class to access member functions of the derived, which helps optimizations.
5) Reduce Indirection
For certain functions types, it may also be faster to store a fast delegate. If a delegate points to a non-virtual class, this can be optimized better by certain compilers. Sadly, however, this varies greatly from one compiler to another.
// Self-contained member function pointer. using namespace fastdelegate; typedef FastDelegate
()> ProcessStuff;
Summary
Now I’m supposed to say that premature optimization is the root of all evil. That may well be true, but if you don’t establish an efficient framework upfront, once you’re in production it’s usually too late to make drastic changes.
Professional AI developers take a few years and multiple attempts to get it right, just like dynamic programming mature and become more efficient… so do your homework and stay ahead of the pack!
Do you have any tricks to optimize the indirections in your AI logic?
Although I agree with the moto "early optimization is the root of all evil" I will also say that it's important to help the compiler understand what you want, so that he will the one to do the evil part ;) (or at least some of it)
Anyway, here are some tricks I find useful :
- avoid pointer to functions. They cannot be optimized so it's better to call directly the function than passing pointers to function when possible.
(stl::for_each is a good exemple of things you should try to avoid since it uses a functor, and most of the time it's easier to loop on each object)
- use the const keyword. Some compilers (on some architectures) treat variable differently if they are const or not, thus it helps them optimizing register use.
- use the const keyword on functions. It helps the compiler detect if an object is modified or not, and it may be able to optimize generated code.
- prefer :
unsigned int const vect_size = myvect.size();
for (unsigned int index = 0; index < vect_size; ++index)
instead of :
for (unsigned int index = 0; index < myvect.size(); ++index)
which can generate calls to size at each iteration (in case vect has been modified during the loop)
- do not mix inline a virtual calls
- try to avoid virtual methods in little objects, since there is an overhead for the vtable.
From my experience, a lot of interfaces are created because of cross platform objectives.
As an example: you have a FileSystem class that will abstract platform specific operations, and two platforms Win32 and Linux. No need to create an interface and make all methods go through a vtable indirection.
From my point of view interfaces (virtual methods and such) are ONLY needed for run-time polymorphism. If during your execution all pointer to some interface points to the same kind of objects you probably do not want to use an interface.
First check if policies are a good solution (see "Modern C++ design" by Andrei Alexandrescu), which is Alex's fourth point.
Policies offers something great: users are able to change the implementation.
Sometimes this is not what you want.
For example, you may want to totally hide the internals from your users.
This means they do not provide any template parameter( and do not need to rebuild if they use a provided typedef and you changed the implementation).
This is where the pimpl idiom comes into play.
By using a class wrapping an implementation pointer, you can provide your user a fully static interface with no virtual methods, and most of the time methods prone to inlining (thus no indirection at all).
Also this can help reducing build time, since nobody except the wrapper class need to know the implementation : you do not need to rebuild every source using your implementation anymore.
Anyway, here are another two:
[*]Use simple data-types as return values if necessary (like integers, booleans, floats, etc.)
[*]Keep the number of input parameters short (1 or 2 maximum) and keep them simple, in the same way as above.
These tricks allow the compiler to use registers instead of the stack to pass around values, so it reduces the overhead of any function call.
Alex
There is no difference in calling a virtual function than calling a function through the pimpl idiom. They both require an extra level of indirection.
how compiler will decide that the object of this class will not be created in terms of memory
Cheers,
Bjoern
|
http://aigamedev.com/open/article/optimize-virtual-functions/
|
crawl-002
|
refinedweb
| 1,271
| 53.51
|
New deadline for employers to issue Form 16 to employees is July 31, 2021 and the individual taxpayer can file the Income Tax Return till September 30, 2021.
The last date to file income tax return (ITR) for the financial year 2020-21 has been extended till September 30,2021. The ITR filing last date is generally July 31 of the year till which the returns can be filed for the relevant assessment year. “As per Income Tax Act, Due date for filing Income Tax Return of Individual Taxpayers is 31st July. For FY 20-21, CBDT has extended the due date to file Income Tax Return of Individual Taxpayer to September 30, 2021 and now the Individual Taxpayer can file his Income Tax Return till 30th September,” says Sujit Bangar, Founder Taxbuddy.com.
Salaried employees get Form 16 from their employers which help them in filing the ITR. However, the last date by which the employers can provide Form 16 to their employees has also been extended. “As per the Income Tax Act, employers should issue Form-16 to the employee before 15th June every year after the end of the Financial Year. For FY 20-21, CBDT has extended the due date to issue Form 16. New deadline for employer to issue Form 16 to employee is July 31, 2021,” informs Bangar.
The tax at source is already deducted by the employer before crediting the bank account of the employee with the monthly salary. Form 16 is a summary of the total amount paid and is basically the TDS certificate for the employee. Form 16 includes Income chargeable under the head ‘Salaries’, any other income reported by employee, the various deductions under Chapter VI-A such as section 80C, Section 80D etc.
But, what if someone wants to file the ITR for assessment year 2021-22 without having received the Form 16? “If the employer fails to provide form 16 to the employee and the employee wants to file his income tax return, then the taxpayer can use his monthly payslip for calculating his salary income for the financial year. Taxpayers can take all 12 months salary slip as the basis for the calculation of his Salary Income,” says Bangar.
Also, many taxpayers want to file ITR especially those who have TDS to show and no other income. “If your total income is less than basic exemption limit i.e. Rs 2.5 Lakh and TDS has been done, you shall file income tax return. Filing ITR will ensure receipt of refund of TDS done. Remember , for filing ITR, Form 16 is not mandatory. You can find your TDS details in 26AS. In the new income tax portal, TDS details are easily available on the dashboard,” informs Bangar.
And, when it comes to opting for the new tax regime or continuing with the old tax regime, your Form 16 will show the option chosen. This has been provided by bringing a change in the format of Form 16. “As per the notification only one change is notified in form 16 which is in Part B of Form 16. As per the new change, employers need to report whether the employee opted for the New Tax Regime,” informs Bang.
|
https://www.financialexpress.com/money/income-tax/last-date-for-issuing-form-16-extended-till-july-31-heres-how-you-may-still-file-itr-for-ay-2021-22/2284336/
|
CC-MAIN-2021-31
|
refinedweb
| 540
| 60.04
|
we will learn how to set working directory in python. We will also learn to get the current working directory in Python. Let’s discuss how to get and set working directory in python with an example for each.
Get Current Working directory in python:
import os os.getcwd()
The above code gets the current working directory so the output will be
D:\Public\R SAS PGSQL\Python\Python Tutorial\
Set working directory in python:
import os os.chdir("D:\Public\Python\")
or
import os os.chdir("D:/Public/Python/")
So the working directory will be set to the above mentioned path.
|
https://www.datasciencemadesimple.com/get-set-working-directory-python-2/
|
CC-MAIN-2021-17
|
refinedweb
| 102
| 58.69
|
Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell 3.0 and the CIM cmdlets to explore WMI classes.
Microsoft Scripting Guy, Ed Wilson, is here. Today, the Scripting Wife and I are on the train from Dortmund to Geneva. Teresa was really smart and found tickets for us to tour the Particle Accelerator at Cern. This is a nice train ride, and in addition to being a great place to work, the scenery outside is wonderful as well.
While in Dortmund, we had time to spend the day with Windows PowerShell Guru Klaus Schulte. In addition to talking about Windows PowerShell, we walked around Dortmund taking pictures. Here is a photo of the Scripting Wife, Klaus, and his wife, Susanne, during our walk.
One of the cool things about Dortmund is they have one of the nicest burger places I have ever seen; not that we ate there, but it just looked cool, as shown here.
One of the problems with traveling on a train is how expensive wireless Internet is, and I just do not want to pay that much. This means I am limited to resources on my laptop. In the old days, I used to install the WMI SDK on my laptop so that I would have ready information about the classes. These days the WMI SDK is incorporated with the Windows Platform SDK (or maybe we call it something else), and this takes a whole lot of disk space that I really do not have. Luckily, I do not need to invest this much disk space to just find out things like writable properties or implemented methods of WMI classes. The Get-CIMClass cmdlet does an excellent job of exposing just the information I need.
Many WMI classes have writable properties. To see these, I need to look for the write qualifier. Well, I do not exactly have to do this—I can use the WbemTest utility to find this information. So, I type WbemTest in my Windows PowerShell console. When WbemTest opens, I have to click connect to connect to the root/cimv2 WMI namespace. Once this connects, I can now click Open Class and type Win32_Computersystem, for example. Now I have the Object Editor for Win32_ComputerSystem. I have to look at every single property individually to see the qualifiers. If a property does not possess the write qualifier, it is read-only, as shown here.
Using the old Get-WmiObject cmdlets and piping the results to Get-Member is misleading because it reports all properties as Get;Set (but I know from WbemTest that the name property is read-only). This is shown here.
PS C:\> Get-WmiObject win32_computersystem | get-member -Name name
TypeName: System.Management.ManagementObject#root\cimv2\Win32_ComputerSystem
Name MemberType Definition
---- ---------- ----------
Name Property string Name {get;set;}
The easy way to find WMI class information is to use the Get-CimClass cmdlet. For example, to see all of the writable properties from a WMI class, such as the Win32_ComputerSystem WMI class, I can use the following code.
Get-CimClass win32_computersystem | select -ExpandProperty cimclassproperties |
where qualifiers -match 'write' | Format-table name, qualifiers
The command and associated output is shown in the following image.
In the same way that I can find writable WMI properties, I can also find implemented WMI methods. A WMI class may have five methods and only three of them might have the implemented qualifier. This is because all WMI classes inherit from other WMI classes that inherit from other WMI classes (and so on and so on). Therefore, an abstract may have a SetPowerState method, but the dynamic WMI class I want to use may not implement the method (indeed, I know of no WMI class that implements the SetPowerState method).
So, in the same way I looked for the write qualifier for cimclassproperties, I look for the implemented qualifier for the cimclassmethods property, as shown here.
Get-CimClass win32_computersystem | select -ExpandProperty cimclassmethods |
where qualifiers -match 'implemented' | Format-Table name, qualifiers
The code to look at the implemented methods of the Win32_ComputerSystem WMI class (along with the associated results) is shown here.
Well, the Scripting Wife just came back to our seats with tea and chocolate (I did not ask her to find a snack, she just did it because she is nice). I guess I need to put the laptop up for a bit. Join me tomorrow when I will talk about using Windows PowerShell and WMI to view or to set power plans. It resource.
@Serhad MAKBULOGLU thank you. This is one of the reasons that I write the articles, to make the information available to everyone. Indeed, sometimes I put stuff in the articles I know I will need to have later :-) So I will know where to find things.
|
http://blogs.technet.com/b/heyscriptingguy/archive/2012/11/26/use-powershell-3-0-and-the-cim-cmdlets-to-explore-wmi-classes.aspx
|
CC-MAIN-2013-48
|
refinedweb
| 797
| 61.67
|
This page describes useful app settings in the module-level
build.gradle file.
In addition to giving an overview of
important properties set in the
build.gradle file, learn how to:
- Change the application ID for different build configurations.
- Safely adjust the namespace independent of the application ID.
Set the application ID
Every Android app has a unique application ID that looks like a Java or Kotlin package name, such as com.example.myapp. This ID uniquely identifies your app on the device and in the Google Play Store.
Your application ID is defined" } ... }
Although the application ID looks like a traditional Java or Kotlin package name, package name you chose during setup. You
can technically toggle the two properties independently from then on, but it
is not recommended.
It is recommended that you do the following when setting the application ID:
- Keep the application ID the same as the namespace. The distinction between the the two properties can be a bit confusing, but if you keep them same, you have nothing to worry about.
- Don't change the application ID after you publish your app. If you change it, Google Play Store treats the subsequent upload as a new app.
- Explicitly define the application ID. If the application ID is not explicitly defined using the
applicationIdproperty, it automatically takes on the same value as the namespace. This means that changing the namespace changes the application ID, which is usually not what you want..
Set the namespace
Every Android module has a namespace, which is used as the Java or Kotlin
package name for
its generated
R and
BuildConfig classes.
Your namespace is defined by the
namespace property in your module's
build.gradle file, as shown in the following code snippet. The
namespace is
initially set to the package name you choose when you create your
project.
android { namespace 'com.example.myapp' ... }
While building your app into the final application package (APK), the Android
build tools use the namespace as the namespace for your app's generated
R
class, which is used to access your
app resources.
For example, in the preceding build file, the
R class is created at
com.example.myapp.R.
The name you set for the
build.gradle file's
namespace property
should always match your project's base package name, where you keep your
activities and other app code. You can have other sub-packages in
your project, but those files must import the
R class using the
namespace from the
namespace property.
For a simpler workflow, keep your namespace the same as your application ID, as they are by default.
Change the namespace
In most cases, you should keep the namespace and application ID the same, as they are by default. However, you may need to change the namespace at some point if you're reorganizing your code or to avoid namespace collisions.
In these cases,, the build tools copy
the application ID into your app's final manifest file at the end of the build.
So if you inspect your
AndroidManifest.xml file after a build,
the
package attribute is set to the
application ID. The merged manifest's
package attribute is where the Google Play Store and the Android platform actually look to identify your app.
Caution: Don't set
testNamespace and
namespace to the same value, otherwise namespace
collisions occur.
To learn more about testing, see Test apps on Android.
|
https://developer.android.com/static/studio/build/configure-app-module?hl=fa
|
CC-MAIN-2022-40
|
refinedweb
| 556
| 64
|
HTTP client¶
The first code example is the simplest thing you can do with the
cpp-netlib. The application is a simple HTTP client, which can
be found in the subdirectory
libs/network/example/http_client.cpp.
All this example does is create and send an HTTP request to a server
and print the response body.
The code¶
Without further ado, the code to do this is as follows:
#include <boost/network/protocol/http/client.hpp>
#include <iostream>

int main(int argc, char *argv[]) {
    using namespace boost::network;

    if (argc != 2) {
        std::cout << "Usage: " << argv[0] << " [url]" << std::endl;
        return 1;
    }

    http::client client;
    http::client::request request(argv[1]);
    request << header("Connection", "close");
    http::client::response response = client.get(request);
    std::cout << body(response) << std::endl;

    return 0;
}
Running the example¶
You can then run this to get the Boost website:
$ cd ~/cpp-netlib-build
$ make http_client
$ ./example/http_client
Note
The instructions for all these examples assume that
cpp-netlib is built outside the source tree,
according to CMake conventions. For the sake of
consistency we assume that this is in the
~/cpp-netlib-build directory.
Diving into the code¶
Since this is the first example, each line will be presented and explained in detail.
#include <boost/network/protocol/http/client.hpp>
All the code needed for the HTTP client resides in this header.
http::client client;
First we create a
client object. The
client abstracts all the
connection and protocol logic. The default HTTP client is version
1.1, as specified in RFC 2616.
http::client::request request(argv[1]);
Next, we create a
request object, with a URI string passed as a
constructor argument.
request << header("Connection", "close");
cpp-netlib makes use of stream syntax and directives to allow
developers to build complex message structures with greater
flexibility and clarity. Here, we add the HTTP header “Connection:
close” to the request in order to signal that the connection will be
closed after the request has completed.
http::client::response response = client.get(request);
Once we’ve built the request, we then make an HTTP GET request
through the
http::client from which an
http::response is
returned.
http::client supports all common HTTP methods: GET,
POST, HEAD, DELETE.
std::cout << body(response) << std::endl;
Finally, though we don’t do any error checking, the response body is
printed to the console using the
body directive.
That’s all there is to the HTTP client. In fact, it’s possible to compress this to a single line:
std::cout << body(http::client().get(http::request("")));
The next example will introduce the
uri class.
|
https://cpp-netlib.org/0.12.0/examples/http/http_client.html
|
CC-MAIN-2019-09
|
refinedweb
| 434
| 56.55
|
/**
 * Implementation of FileNameMapper that always returns the source
 * file name without any leading directory information.
 *
 * <p>This is the default FileNameMapper for the copy and move
 * tasks if the flatten attribute has been set.</p>
 */
public class FlatFileNameMapper implements FileNameMapper {

    /**
     * Ignored.
     * @param from ignored.
     */
    public void setFrom(String from) {
    }

    /**
     * Ignored.
     * @param to ignored.
     */
    public void setTo(String to) {
    }

    /**
     * Returns a one-element array containing the source file name
     * without any leading directory information.
     * @param sourceFileName the name to map.
     * @return the file name in a one-element array.
     */
    public String[] mapFileName(String sourceFileName) {
        return new String[] {new java.io.File(sourceFileName).getName()};
    }
}
|
http://kickjava.com/src/org/apache/tools/ant/util/FlatFileNameMapper.java.htm
|
CC-MAIN-2016-44
|
refinedweb
| 150
| 68.97
|
Situation:
Exchange 2010 & 2016 coexistence. Outlook Anywhere is enabled on both with NTLM
webmail.domain.com is configured as the CAS namespace (for all virtual directories on 2016)
autodiscover.domain.local/autodiscover/autodiscover.xml is configured as the SCP (external autodiscover is not used)
DNS points for both of the above URLs to Exchange 2016
When moving mailboxes from 2010 to 2016 the following happens:
For an Outlook 2010 user:
User gets a popup saying the administrator has made a change and Outlook needs to be restarted
User restarts Outlook (and sometimes gets the same popup again and then restarts Outlook again)
User gets a credential popup. If you click cancel, another popup appears:
"Allow this website to configure test2010@domain.com server settings?"
autodiscover.domain.com/autodiscover/autodiscover.xml
As you can see, Outlook 2010 is looking for autodiscover.domain.com and that's probably because SCP lookup fails.
If I do an Outlook Autoconfiguration test, it fails. The SCP lookup keeps giving a 302 redirect.
This seems to be the issue described here:
I can do a recycle of those AppPools and then it does seem to work for some Outlook 2010 clients BUT almost every other Outlook 2010 client in the company (even from people who are not moved yet) will give a popup that the administrator has made a change and that Outlook needs to be restarted.
Then, for Outlook 2013 clients, recycling the AppPool doesn't work at all. They keep getting the credential popup. I can create a new profile for them and then Outlook connects but if I then restart Outlook again, there is again a popup for credentials.
Also, Test E-mail autoconfiguration works fine for them.
For those Outlook 2013 users, I tried adding the following registry key "MapiHttpDisabled" and then both their old and their new Outlook profile work without any popups! However, when I check the connection status, it still shows HTTP for the protocol, which means to me that they are still using MAPI over HTTP protocol, right?
Also, when I check the IIS Logs, I only see calls on MAPI protocol, even for those users where I added the registry key:
2017-06-08 09:27:25 10.132.33.12 POST /mapi/emsmdb/
For now we are adding the registry key for all the 2013 users as it seems to be a workaround but it doesn't feel like a good solution:
MapiHttp is the protocol of the future so I don't want to disable it
The registry doesn't seem to completely disable it because users still seem to be connected via MapiHttp (according to connection status and IIS logs)??? Does this key do something else as well?
It doesn't solve the problem of having to restart the AppPools for 2010 users.
The following are things I tried already:
Check OAB settings on the database: they are correctly configured
In IIS change Windows Authentication providers of the Autodiscover, EWS and OAB virtual directory to only NTLM (I also simply tried moving NTLM to the top and leaving Negotiate). I tried this on both servers
I enabled Kernel-mode authentication on Autodiscover virtual directory for Windows Authentication
I added the Negotiate:Kerberos provider to the mapi virtual directory on Exchange 2016
I added the autodiscover and webmail URL to the trusted sites in Internet Explorer
There is no proxy server enabled
None of these seems to make a difference. Sometimes, changing these settings gave popups for users that didn't have problems.
Any other suggestions?
|
https://serverfault.com/questions/854631/credential-pop-ups-after-moving-mailbox-from-2010-to-2016/854651
|
CC-MAIN-2019-22
|
refinedweb
| 594
| 57.2
|
PyFunge can be extended with various means. In particular, since PyFunge comes as a library you can experiment with them. (See Internals for extensive API documentation.)
You can write your own Funge-98 fingerprints and use them in PyFunge. Thanks to Python’s dynamic nature, PyFunge can directly load your fingerprint; even fingerprints shipped with PyFunge are dynamically loaded.
The typical Funge-98 fingerprint looks like this:
from funge.fingerprint import Fingerprint

class HELO(Fingerprint):
    'Prints "Hello, world!"'

    API = 'PyFunge v2'
    ID = 0x48454c4f

    @Fingerprint.register('P')
    def print_hello(self, ip):
        self.platform.putstr('Hello, world!\n')

    @Fingerprint.register('S')
    def store_hello(self, ip):
        ip.push_string('Hello, world!\n')
You can save this Python code as fp_HELO.py and load it with -f:
$ pyfunge -f HELO -v98 -
"OLEH"4(PS>:#,_@
(EOF)
Hello, world!
Hello, world!
The name of the Python file is not important, so you can change it to anything like fp_abracadabra.py and use the corresponding option (in this case -f abracadabra). One module can even contain several fingerprints. By convention, though, the module uses the same name as the ASCII representation of the fingerprint ID and contains only one fingerprint.
From now on this document assumes that you are familiar with Python, or at least have written some programs in it.
Every fingerprint has a common base class: funge.fingerprint.Fingerprint. This class provides almost nothing besides the register decorator used to register new commands. After the fingerprint module is imported, PyFunge scans for a subclass of Fingerprint and instantiates it when ( is executed.
Let’s analyze the HELO fingerprint above. It needs two attributes to function correctly.
In addition you can give a docstring to describe this fingerprint shortly. Of course that is optional, and will only be seen with --list-fprints.
Then it registers two commands: P and S. A command callback receives one parameter (not counting self), ip. This object, an instance of the funge.ip.IP class, exposes many methods and attributes:
Command callbacks are ordinary methods in the fingerprint class; the decorator, i.e. @Fingerprint.register(...), registers those methods for later use. The command string can contain two or more characters, in which case the same callback is registered for each of them:
@Fingerprint.register('0123456789')
def push_number(self, ip):
    ip.push(ip.space.get(ip.position) - ord('0'))
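As a toy sketch of how such a registration decorator can work (this is not PyFunge's actual implementation — the real one lives in funge.fingerprint — and the callbacks below take no ip parameter for brevity):

```python
class Fingerprint:
    @classmethod
    def register(cls, chars):
        # Decorator: tag the method with the command characters it handles.
        def deco(fn):
            fn._commands = chars
            return fn
        return deco

    def commands(self):
        # Collect every tagged method into a {character: bound_method} table.
        table = {}
        for name in dir(self):
            fn = getattr(self, name)
            for ch in getattr(fn, "_commands", ""):
                table[ch] = fn
        return table


class HELO(Fingerprint):
    @Fingerprint.register('P')
    def print_hello(self):
        return "Hello, world!"

    @Fingerprint.register('0123456789')
    def digit(self):
        return "digit"


table = HELO().commands()
print(table['P']())        # Hello, world!
print(sorted(table))       # the ten digits plus 'P'
```

A multi-character registration simply points every listed character at the same bound method, which matches the push_number example above.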
The Fingerprint class itself gains many methods from the underlying semantics. For example, self.reflect(ip) will reflect the IP. (The actual method is in funge.languages.funge98.Unefunge98 — check it!) You can also walk to the next instruction using self.walk(ip).
One last thing to note is the Vector class, since all coordinates in PyFunge are vectors. For example, you can change the delta of an IP to a non-cardinal one:
@Fingerprint.register('K')
def knight_walk(self, ip):
    import random
    if random.randint(0, 1):
        x, y = 1, 2
    else:
        x, y = 2, 1
    if random.randint(0, 1):
        x = -x
    if random.randint(0, 1):
        y = -y
    ip.delta = Vector.zero(ip.dimension).replace(_0=x, _1=y)
Since we deal not only with Befunge but also Trefunge, we should build a generic vector. This won't work in Unefunge, but you can add a sanity check for it:
@Fingerprint.register('K')
def knight_walk(self, ip):
    # reflect in Unefunge.
    if ip.dimension < 2:
        self.reflect(ip)
        return
    # ...
The fingerprint class can have two special methods: init() and final(). These methods also receive the IP parameter, and are executed right after ( or ).
class USLS(Fingerprint):
    'Some useless fingerprint without any command.'

    API = 'PyFunge v2'; ID = 0x55534c53

    def init(self, ip):
        self.platform.putstr('Hey, you just loaded the useless fingerprint.\n')

    def final(self, ip):
        self.platform.putstr('Hey, you just unloaded the useless fingerprint.\n')
By default these methods register the commands to IP, so you may want to call the original methods in Fingerprint if you override them:
def init(self, ip):
    Fingerprint.init(self, ip)
    self.platform.putstr('Hey, you just loaded the useless fingerprint and '
                         '(possibly) some commands.\n')
If these methods raise an exception, the loading or unloading rolls back and ( or ) reflects. But you still have to roll back your own changes, if any:
def init(self, ip):
    Fingerprint.init(self, ip)
    if self.some_check():
        # check failed: roll back and raise the exception.
        Fingerprint.final(self, ip)  # unregisters already registered commands
        raise RuntimeError('check failed!')
Also note that these methods can be executed out of order, and it is possible that a command callback is called even after the final() method has been called. So the work that can safely be done in final() is in fact quite limited.
Sometimes your fingerprint needs to store some information, like IP flags or a call stack. Since Python is a dynamic language you are free to store it in any context, but you have to know exactly where to store it.
If the information is only stored between the load and unload, you can just store it in the fingerprint class:
def init(self, ip):
    Fingerprint.init(self, ip)
    self.exoticflag = False

@Fingerprint.register('X')
def toggle_exotic(self, ip):
    self.exoticflag = not self.exoticflag
If the information is local to IP (but should be retained after unload), you can store it in the IP object. If the information is global you should store it in the Program object (ip.program). Since they are public objects, you have to use some unique prefix for the name.
def init(self, ip):
    Fingerprint.init(self, ip)
    # initialize default values if none.
    if not hasattr(ip, 'EXOT_exoticflag'):
        ip.EXOT_exoticflag = False
    if not hasattr(ip.program, 'EXOT_globalflag'):
        ip.program.EXOT_globalflag = False

@Fingerprint.register('X')
def toggle_exotic(self, ip):
    if ip.pop():
        ip.program.EXOT_globalflag = not ip.program.EXOT_globalflag
    else:
        ip.EXOT_exoticflag = not ip.EXOT_exoticflag
In any case, do not use global variables other than constants. They won't work correctly.
|
http://packages.python.org/PyFunge/extending.html
|
crawl-003
|
refinedweb
| 960
| 53.17
|
The lambda expression was introduced in C++11 and is a convenient way of writing an anonymous function object inline. The general syntax is:
[ capture_clause ] (parameters) -> return_type { lambda_body }
[] is the empty capture clause, which means that the lambda expression accesses no variables in the enclosing scope.
[&] means all variables that you refer to are captured by reference.
[=] means they are all captured by value.
[a, &b] captures a by value and b by reference.
The return_type (e.g., int) follows the -> and is optional; when omitted, the compiler deduces it from the lambda body.
The capture_clause is also known as the lambda-introducer. Learn more in the official docs.
In this example, we sort a vector descendingly. By default, the
sort() function sorts ascendingly. However, we can pass a lambda expression as the third argument to the
sort() function to have our own custom way of comparing vector elements. The lambda expression has an empty capture clause, takes two parameters (a and b), and returns a boolean.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Helper function to print a vector
void printVector(vector<int> V) {
    for (int i = 0; i < V.size(); i++)
        cout << V[i] << " ";
    cout << endl;
}

int main() {
    vector<int> V {3, 2, 5, 7, 1, 4, 6};

    cout << "Before:\t";
    printVector(V);

    // The third argument to the sort function is a lambda expression
    // that has an empty capture clause, has two parameters a and b,
    // and returns a boolean. The lambda body is just a > b in order
    // to sort the values descendingly.
    sort(V.begin(), V.end(), [](const int& a, const int& b) -> bool {
        return a > b;
    });

    cout << "After:\t";
    printVector(V);
}
|
https://www.educative.io/answers/lambda-expressions-in-cpp
|
CC-MAIN-2022-33
|
refinedweb
| 255
| 63.9
|
I have a python script which at one point is required to run a perl script, wait for it to finish, then continue.
As this case will only occur on a windows machine, I thought I could simply open a new cmd and run the perl script there, but I'm having difficulties to do so.
import os
os.system("start /wait cmd /c {timeout 10}")
should open a new cmd and sleep for 10 seconds, but it closes right away. I don't want to put the perl script in position of the
timeout 10, as it is quite resource intensive. Another idea was to use a
subprocess with
call or
Popen and
wait.
perl_script = subprocess.call(['script.pl', params])
But I'm not sure what would happen to the stdout of the Perl script in such a case.
I know the location and the parameters of the perl script.
How can I run a perl script from my python script, print the output (a lot) and wait for it to finish?
edit:
As suggested by @rchang, I added the
subprocess with
communicate as following and it works just as intended.
import subprocess, sys

perl = "C:\\perl\\bin\\perl.exe"
perl_script = "C:\\scripts\\perl\\flamethrower.pl"
params = " --mount-doom-hot"

pl_script = subprocess.Popen([perl, perl_script, params], stdout=sys.stdout)
pl_script.communicate()
These are my first lines of Perl, just a quick copy/paste script to test this:
print "Hello Perld!\n"; sleep 10; print "Bye Perld!\n";
|
http://www.howtobuildsoftware.com/index.php/how-do/bCIQ/python-perl-subprocess-stdout-run-a-perl-script-from-my-python-script-print-the-output-and-wait-for-it-to-finish
|
CC-MAIN-2018-47
|
refinedweb
| 248
| 75.1
|
Originally posted by Jose Botella: In the exam there will not be questions about the garbage collection eligibility of objects that are either: a) pointed to by local variables that fall out of scope, or b) string objects computed from string literals. Please trust me.
Originally posted by Mellihoney Michael: At what point is the object anObj available for garbage collection?

01: public class Base{
02:
03:     private void test() {
04:
05:         if(true) {
06:             String anObj = "sample"; //local variable
07:             String locObj = anObj;
08:             anObj.trim();
09:             anObj = null;
10:             locObj.trim();
11:         }
12:     }
13:
14:     static public void main(String[] a) {
15:         new Base().test();
16:     }
17:
18: }

Select the most appropriate answer:
a) After line 7
b) After line 8
c) After line 9
d) After line 10
e) After line 11
f) It is hard to say whether after line 11 or line 12

Why is the answer e, not f?
|
http://www.coderanch.com/t/240898/java-programmer-SCJP/certification/local-variable
|
CC-MAIN-2015-11
|
refinedweb
| 175
| 59.98
|
Bio::MAGETAB::Util::Reader - A parser/validator for MAGE-TAB documents.
use Bio::MAGETAB::Util::Reader; my $reader = Bio::MAGETAB::Util::Reader->new({ idf => $idf, relaxed_parser => $is_relaxed, }); my $magetab = $reader->parse();
This is the main parsing and validation class which can be used to read a MAGE-TAB document into a set of Bio::MAGETAB classes for further manipulation.
A filesystem or URI path to the top-level IDF file describing the investigation. This attribute is *required*.
An optional namespace string to be used in object creation.
An optional authority string to be used in object creation.
A boolean value (default FALSE) indicating whether to skip parsing of Data Matrix files.
A string representing the MAGE-TAB version used in the parsed document. This is populated by the parse() method.
Attempts to parse the full MAGE-TAB document, starting with the top-level IDF file, and returns the resulting Bio::MAGETAB container object in scalar context, or the top-level Bio::MAGETAB::Investigation object and container object in list context.
Tim F. Rayner <tfrayner@gmail.com>
This library is released under version 2 of the GNU General Public License (GPL).
|
http://search.cpan.org/~tfrayner/Bio-MAGETAB/lib/Bio/MAGETAB/Util/Reader.pm
|
CC-MAIN-2014-35
|
refinedweb
| 190
| 56.25
|
On Fri, Nov 27, 2020 at 08:32:04AM +1100, Cameron Simpson wrote:
On 27Nov2020 00:25, Steven D'Aprano steve@pearwood.info wrote:
Block scoping allows shadowing within a function.
Just to this: it needn't.
Yes, I'm aware of that, and discussed languages such as Java which prohibit name shadowing within a function.
Shadowing is a double-edged sword. It is sometimes useful, but often a source of hard to find bugs. One might argue that a language should allow *some* shadowing but not too much.
You could forbid shadowing of the _static_ outer scope easily enough at parse/compile time. That would prevent a certain class of easy misuse.
i = 9
{
    new scope here for "i"  ==> parse/compile error, since "i" is in play
}
Yes, that's precisely the sort of thing I discussed, and described as action-at-a-distance between two scopes, where the mere existence of a name in scope A prevents you from using the same name in scope B.
The problem is, if your inner block scope must not reuse variable names in the outer function scope, well, what's the advantage to making them seperate scopes?
Analogy: I think most of us would consider it *really weird* if this code was prohibited:
a = None
def func():
    a = 1  # Local with the same name as the global prohibited.
True, it might save newbies who haven't learned about the global keyword from making a few mistakes, but that's a pretty small, and transient, benefit. (Most coders are newbies for, what, one or two percent of their active coding life?)
One possible advantage, I guess, is that if your language only runs the garbage collector when leaving a scope, adding extra scopes helps to encourage the timely collection of garbage. I don't think that's a big advantage to CPython with its reference counting gc.
That said, there _are_ times I wish I could mark out the lifetime of a variable, akin to C level:
... i does not exist ...
{
    int i;
    ... use i ...
}
... i now unknown, use is an error ...
The nearest Python equivalent is:
i = blah()
... use i ...
del i
which feels fragile - accidental assignment to "i" later is not forbidden.
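A quick interpreter-level illustration of that fragility — del does unbind the name, but nothing stops a later rebinding:

```python
i = 42
del i

try:
    i  # using the name after del is a NameError
except NameError as exc:
    print("after del:", exc)

i = 7  # ...yet rebinding the "dead" name later is perfectly legal
print(i)
```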
Why do you care about the *name* "i" rather than whatever value is bound to that name?
I completely get the idea of caring about the lifetime of an object, e.g. I understand the need to garbage collect the exception object when leaving `except` blocks. (At least by default.)
But I don't get why I might care about the lifetime of a *name*.
try:
    ...
except Exception as e:
    pass
e  # Name is unbound, for good reasons.
e = None  # But why should this be an error?
We don't generally take the position that reuse of a name in the same function is Considered Harmful, let alone *so harmful* that we need the compiler to protect us from doing so.
If I am *accidentally* reusing names, my code has much bigger problems than just the name re-use:
- my names are so generic and undescriptive that that can be re-used for unrelated purposes;
- and the function is so large and/or complicated that I don't notice when I am re-using a name.
Name re-use in the bad sense is a symptom of poor code, not a cause of it, and as such block scopes are covering up the problem.
(That's my opinionated opinion :-)
|
https://mail.python.org/archives/list/python-ideas@python.org/message/X3DCRCNH52KVWCOOPJNVI2RHJRCK3LI2/
|
CC-MAIN-2021-17
|
refinedweb
| 582
| 70.94
|
Episode #21: Python has a new star framework for RESTful APIs
Published Thurs, Apr 13, 2017, recorded Wed, Apr 12, 2017.
This episode has been sponsored by Rollbar. Get a special offer via
#1 Brian: profile and pstats — Performance Analysis
- Doug Hellman is working on the Python 3 MOTW series that was so successful for Python 2.
- Recent edition is profile and pstats, for profiling parts of your code you may have concerns with and finding out where the slow bits are.
#2 Michael: API Star by Tom Christie
- A smart Web API framework, designed for Python 3.
- A few things to try right away:
$ pip3 install apistar
$ apistar new --template minimal
$ apistar run
$ apistar test
- API Star allows you to dynamically inject various information about the incoming request into your views using type annotation.
- e.g.
def show_query_params(query_params: http.QueryParams):
    return {'params': dict(query_params)}
- You can instead set the status code or headers by annotating the view as returning a Response
def create_project() -> Response:
    ...
- Parameters are automatically passed into views from routes (annotations!):
def echo_username(user_id: int):
    return {'message': f'Welcome, user {user_id}!'}
- Performance: Faster than sanic!
#3 Brian: Yes, Python is Slow, and I Don’t Care
- Optimize for your most expensive resource. That’s YOU, not the computer.
- Choose a language/framework/architecture that helps you develop quickly (such as Python). Do not choose technologies simply because they are fast.
- When you do have performance issues: find your bottleneck
- Your bottleneck is most likely not CPU or Python itself.
- If Python is your bottleneck (you’ve already optimized algorithms/etc.), then move the hot-spot to Cython/C
- Go back to enjoying getting things done quickly
#4 Michael: A Quick Introduction: Hashing
- Article by Gerald Nash
- Hashing is a method of determining the equivalence of two chunks of data.
- A cryptographic hash function is an irreversible function that generates a unique string for any set of data.
- Example
import hashlib as hash

sha = hash.sha256()

# Insert the string we want to hash (as bytes in Python 3)
sha.update(b'Hello World!')

# Print the hexadecimal format of the binary hash we just created
print(sha.hexdigest())
# 4d3cf15aa67c88742e63918825f3c80f203f2bd59f399c81be4705a095c9fa0e
- Know when to choose “weak” hashes vs. strong ones
- Straight hashes are not enough for security (e.g. passwords). Use passlib and be done.
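- passlib is the recommendation above; as a stdlib-only sketch of the same idea (salted, iterated hashing instead of a bare digest), one might write:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # slow the hash down to resist brute force


def hash_password(password: str):
    """Return (salt, digest) using salted, iterated PBKDF2 from the stdlib."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison


salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

- The per-password random salt and the constant-time comparison are exactly the details passlib handles for you, which is why "use passlib and be done" is still the better advice in production.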
#5 Brian: Wedding at Scale: How I Used Twilio, Python and Google to Automate My Wedding
- gspread to access a google spreadsheet of guests and phone numbers
- SMS guests with twilio
- replies handled by a flask app
- gathered accept/decline/didn't reply statistics
- reminder texts
- food selections and replies and reminders, all handled by Python
#6 Michael: python-alexa: A Python framework for Alexa Development
- by Neil Stewart
- Ordered an amazon assistant.
- Before it arrived, I had challenged myself to develop something for it
- Project: VoiceOps, interact with an AWS account, such as telling me how many running and stopped instances there are or what RDS databases are in an account
- Wanted a framework that would make Alexa development super easy.
- Decided a new framework was needed: python-alexa
- python-alexa on github
- echo shim for testing without hardware
Our news:
Michael: Just added full text search (including within videos) to Talk Python courses.
Brian: Netflix chaos engineering interview on Test & Code
|
https://pythonbytes.fm/episodes/show/21/python-has-a-new-star-framework-for-restful-apis
|
CC-MAIN-2018-30
|
refinedweb
| 542
| 61.87
|
TensorFlow feed_dict: Use feed_dict To Feed Values To TensorFlow Placeholders
TensorFlow feed_dict example: Use feed_dict to feed values to TensorFlow placeholders so that you don't run into the error that says you must feed a value for placeholder tensors
Transcript:
We import TensorFlow as tf.
import tensorflow as tf
We then print out what TensorFlow version we are using.
print(tf.__version__)
We are using TensorFlow 1.0.1.
To understand how to use feed_dict to feed values to TensorFlow placeholders, we’re going to create an example of adding three TensorFlow placeholders together.
First, we define our first TensorFlow placeholders with the data type being tf.float32.
placeholder_ex_one = tf.placeholder(tf.float32)
We have tf.placeholder(tf.float32).
We assign it to the Python variable, placeholder_ex_one.
Next, we define our second TensorFlow placeholder with the same dtype as the first one and assign it to placeholder_ex_two.
placeholder_ex_two = tf.placeholder(tf.float32)
Lastly, we define our third TensorFlow placeholder to be the same as the other two.
placeholder_ex_tre = tf.placeholder(tf.float32)
We assign it to the Python variable, placeholder_ex_tre.
Let’s try printing the first placeholder to see what we get.
print(placeholder_ex_one)
We see that it is a TensorFlow tensor, the name is Placeholder:0, and the data type is float32.
Next, let’s define the addition using the tf.add_n operation because we’re adding more than two tensors at the same time.
placeholder_summation = tf.add_n([placeholder_ex_one, placeholder_ex_two, placeholder_ex_tre])
So we say tf.add_n.
We use a Python list to pass in our three placeholder variables – placeholder_ex_one, placeholder_ex_two, placeholder_ex_tre.
We assign that to the Python variable placeholder_summation.
Let’s try printing the addition operation to see what we get.
print(placeholder_summation)
So print(placeholder_summation).
We see that it is a tensor.
The name is AddN:0 and the data type is float32.
Now that we have created our TensorFlow placeholders in an addition operation, it’s time to run the computational graph.
We launch the graph in a session.
sess = tf.Session()
Then we initialize all the global variables in the graph.
sess.run(tf.global_variables_initializer())
Now that the variables have been initialized and the TensorFlow session has been created, let’s try printing the result of running our first placeholder variable.
print(sess.run(placeholder_ex_one))
So we do print(sess.run(placeholder_ex_one)).
Yikes!
We get an error that says, "You must feed a value for the placeholder tensor 'Placeholder' with dtype float."
While we’re at it, let’s also try printing our operation to see what happens.
print(sess.run(placeholder_summation))
So print(sess.run(placeholder_summation)).
We again get the same error, "You must feed a value for the placeholder tensor 'Placeholder' with dtype float."
So this is what we set out to do with this video – use feed_dict to feed values to the TensorFlow placeholders.
The way to feed the values into our tensors is to use a feed_dict that defines the values that we want.
To do this, we can write the following.
print(sess.run(placeholder_summation, feed_dict={placeholder_ex_one: 10, placeholder_ex_two: 20, placeholder_ex_tre: 30}))
Again, we’re using the print(sess.run(...)) and we’re going to use the placeholder_summation operation and we’re going to pass into it the feed_dict that says placeholder_ex_one is going to be the value of 10, placeholder_ex_two is going to be the value of 20, placeholder_ex_tre is going to be the value of 30.
When we run that, we see that we have the value of 60.0 because we have float32 numbers.
Finally, we close the TensorFlow session to release the TensorFlow resources we used within the session.
sess.close()
That is how you can use the feed_dict to feed values to TensorFlow placeholders.
|
https://aiworkbox.com/lessons/use-feed_dict-to-feed-values-to-tensorflow-placeholders
|
CC-MAIN-2019-51
|
refinedweb
| 620
| 56.96
|
Each piece of advice is of the form
[ strictfp ] AdviceSpec [ throws TypeList ] : Pointcut { Body }

where AdviceSpec is one of:

- before( Formals )
- after( Formals ) returning [ ( Formal ) ]
- after( Formals ) throwing [ ( Formal ) ]
- after( Formals )
- Type around( Formals )
and where Formal refers to a variable binding like those used for method parameters, of the form Type Variable-Name, and Formals refers to a comma-delimited list of Formal.
Advice defines crosscutting behavior. It is defined in terms of pointcuts. The code of a piece of advice runs at every join point picked out by its pointcut. Exactly how the code runs depends on the kind of advice.
AspectJ supports three kinds of advice. The kind of advice determines how it interacts with the join points it is defined over. Thus AspectJ divides advice into that which runs before its join points, that which runs after its join points, and that which runs in place of (or "around") its join points.
While before advice is relatively unproblematic, there can be three interpretations of after advice: After the execution of a join point completes normally, after it throws an exception, or after it does either one. AspectJ allows after advice for any of these situations.
aspect A {
    pointcut publicCall(): call(public Object *(..));

    after() returning (Object o): publicCall() {
        System.out.println("Returned normally with " + o);
    }

    after() throwing (Exception e): publicCall() {
        System.out.println("Threw an exception: " + e);
    }

    after(): publicCall() {
        System.out.println("Returned or threw an Exception");
    }
}
After returning advice may not care about its returned object, in which case it may be written
after() returning: call(public Object *(..)) { System.out.println("Returned normally"); }
If after returning does expose its returned object, then the type of the parameter is considered to be an instanceof-like constraint on the advice: it will run only when the return value is of the appropriate type.
A value is of the appropriate type if it would be assignable to a variable of that type, in the Java sense. That is, a byte value is assignable to a short parameter but not vice-versa, an int is assignable to a float parameter, boolean values are only assignable to boolean parameters, and reference types work by instanceof.
There are two special cases: If the exposed value is typed to Object, then the advice is not constrained by that type: the actual return value is converted to an object type for the body of the advice: int values are represented as java.lang.Integer objects, etc, and no value (from void methods, for example) is represented as null.
Secondly, the null value is assignable to a parameter T if the join point could return something of type T.
Around advice runs in place of the join point it operates over, rather than before or after it. Because around is allowed to return a value, it must be declared with a return type, like a method.
Thus, a simple use of around advice is to make a particular method constant:
aspect A {
    int around(): call(int C.foo()) {
        return 3;
    }
}
Within the body of around advice, though, the computation of the original join point can be executed with the special syntax
proceed( ... )
The proceed form takes as arguments the context exposed by the around's pointcut, and returns whatever the around is declared to return. So the following around advice will double the second argument to foo whenever it is called, and then halve its result:
aspect A {
    int around(int i): call(int C.foo(Object, int)) && args(i) {
        int newi = proceed(i*2);
        return newi/2;
    }
}
If the return value of around advice is typed to Object, then the result of proceed is converted to an object representation, even if it is originally a primitive value. And when the advice returns an Object value, that value is converted back to whatever representation it was originally. So another way to write the doubling and halving advice is:
aspect A {
    Object around(int i): call(int C.foo(Object, int)) && args(i) {
        Integer newi = (Integer) proceed(i*2);
        return new Integer(newi.intValue() / 2);
    }
}
Any occurrence of proceed(..) within the body of around advice is treated as the special proceed form (even if the aspect defines a method named proceed), unless a target other than the aspect instance is specified as the recipient of the call. For example, in the following program the first call to proceed will be treated as a method call to the ICanProceed instance, whereas the second call to proceed is treated as the special proceed form.
aspect A {
    Object around(ICanProceed canProceed) : execution(* *(..)) && this(canProceed) {
        canProceed.proceed();          // a method call
        return proceed(canProceed);    // the special proceed form
    }

    private Object proceed(ICanProceed canProceed) {
        // this method cannot be called from inside the body of around
        // advice in the aspect
    }
}
In all kinds of advice, the parameters of the advice behave exactly like method parameters. In particular, assigning to any parameter affects only the value of the parameter, not the value that it came from. This means that
aspect A {
    after() returning (int i): call(int C.foo()) {
        i = i * 2;
    }
}
will not double the returned value of the advice. Rather, it will double the local parameter. Changing the values of parameters or return values of join points can be done by using around advice.
The strictfp modifier is the only modifier allowed on advice, and it has the effect of making all floating-point expressions within the advice be FP-strict.
An advice declaration must include a throws clause listing the checked exceptions the body may throw. This list of checked exceptions must be compatible with each target join point of the advice, or an error is signalled by the compiler.
For example, in the following declarations:
import java.io.FileNotFoundException;

class C {
    int i;
    int getI() { return i; }
}

aspect A {
    before(): get(int C.i) {
        throw new FileNotFoundException();
    }
    before() throws FileNotFoundException: get(int C.i) {
        throw new FileNotFoundException();
    }
}
both pieces of advice are illegal: the first because the body throws an undeclared checked exception, and the second because field get join points cannot throw FileNotFoundExceptions.
The exceptions that each kind of join point in AspectJ may throw are:
Multiple pieces of advice may apply to the same join point. In such cases, the resolution order of the advice is based on advice precedence.
There are a number of rules that determine whether a particular piece of advice has precedence over another when they advise the same join point.
If the two pieces of advice are defined in different aspects, then there are three cases:
If the two pieces of advice are defined in the same aspect, then there are two cases:
These rules can lead to circularity, such as
aspect A {
    before(): execution(void main(String[] args)) {}
    after():  execution(void main(String[] args)) {}
    before(): execution(void main(String[] args)) {}
}
such circularities will result in errors signalled by the compiler.
At a particular join point, advice is ordered by precedence.
A piece of around advice controls whether advice of lower precedence will run by calling proceed. The call to proceed will run the advice with next precedence, or the computation under the join point if there is no further advice.
A piece of before advice can prevent advice of lower precedence from running by throwing an exception. If it returns normally, however, then the advice of the next precedence, or the computation under the join point if there is no further advice, will run.
Running after returning advice will run the advice of next precedence, or the computation under the join point if there is no further advice. Then, if that computation returned normally, the body of the advice will run.
Running after throwing advice will run the advice of next precedence, or the computation under the join point if there is no further advice. Then, if that computation threw an exception of an appropriate type, the body of the advice will run.
Running after advice will run the advice of next precedence, or the computation under the join point if there is no further advice. Then the body of the advice will run.
Three special variables are visible within bodies of advice and within if() pointcut expressions: thisJoinPoint, thisJoinPointStaticPart, and thisEnclosingJoinPointStaticPart. Each is bound to an object that encapsulates some of the context of the advice's current or enclosing join point. These variables exist because some pointcuts may pick out very large collections of join points. For example, the pointcut
pointcut publicCall(): call(public * *(..));
picks out calls to many methods. Yet the body of advice over this pointcut may wish to have access to the method name or parameters of a particular join point.
thisJoinPoint is bound to a complete join point object.
thisJoinPointStaticPart is bound to a part of the join point object that includes less information, but for which no memory allocation is required on each execution of the advice. It is equivalent to thisJoinPoint.getStaticPart().
thisEnclosingJoinPointStaticPart is bound to the static part of the join point enclosing the current join point. Only the static part of this enclosing join point is available through this mechanism.
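As an illustrative sketch (the aspect below is invented, not from this guide, and requires the AspectJ compiler ajc rather than plain javac), advice over a broad pointcut can recover per-join-point details through these variables:

```java
aspect CallLogger {
    pointcut publicCall(): call(public * *(..));

    before(): publicCall() {
        // Static part: the signature, with no per-execution allocation
        System.out.println("at " + thisJoinPointStaticPart.getSignature());

        // Full join point object: dynamic context such as argument values
        Object[] args = thisJoinPoint.getArgs();
        System.out.println("with " + args.length + " argument(s)");
    }
}
```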
Standard Java reflection uses objects from the java.lang.reflect hierarchy to build up its reflective objects. Similarly, AspectJ join point objects have types in a type hierarchy. The type of objects bound to thisJoinPoint is org.aspectj.lang.JoinPoint, while thisJoinPointStaticPart is bound to objects of interface type org.aspectj.lang.JoinPoint.StaticPart.
http://www.eclipse.org/aspectj/doc/released/progguide/semantics-advice.html
Keyword substitution: kv
Default branch: MAIN
AMD64 Support: * Add an identifier to SYSINIT structures for debugging purposes. Submitted-by: Jordan Gordeev <jgordeev@dir.bg> Obtained-from: FreeBSD, with modifications
Synchronize some of the machine-independant AMD64 bits. Obtained-from: Jordan Gordeev <jgordeev@dir.bg>
Add an ordering field to the interrupt config hook structure and adjust CAM to place its config last.
Move clock registration from before SMP startup to after. APIC_IO builds need the ioapic interrupt routing information from the mptable scan.
Remove an #ifdef _KERNEL inside an #ifdef _KERNEL.
Rename printf -> kprintf in sys/ and add some defines where necessary (files which are used in userland, too).
Rename functions to avoid conflicts with libc.
Bring in the initial cut of the Cache Coherency Management System module. Add a sysctl kern.ccms_enable for testing. CCMS operations are disabled by default. The comment below describes the whole enchilada. Only basic locking has been implemented in this commit.

CCMS is a dual-purpose cache management layer based around offset ranges.

#1 - Threads on the local machine can obtain shared, exclusive, and modifying range locks. These work kinda like lockf locks and the kernel will use them to enforce UNIX I/O atomicity rules.

#2 - The BUF/BIO/VM system can manage the cache coherency state for offset ranges. That is, Modified/Exclusive/Shared/Invalid (and two more advanced states). These cache states do not represent the state of data we have cached. Instead they represent the best case state of data we are allowed to cache within the range. The cache state for a single machine (i.e. no cluster), for every CCMS data set, would simply be 'Exclusive' or 'Modified' for the entire 64 bit offset range.

The way this works in general is that the locking layer is used to enforce UNIX I/O atomicity rules locally and to generally control access on the local machine. The cache coherency layer would maintain the cache state for the object's entire offset range. The local locking layer would be used to prevent demotion of the underlying cache state, and modifications to the cache state might have the side effect of communicating with other machines in the cluster.

Take a typical write(). The offset range in the file would first be locked, then the underlying cache coherency state would be upgraded to Modified. If the underlying cache state is not compatible with the desired cache state then communication might occur with other nodes in the cluster in order to gain exclusive access to the cache elements in question so they can be upgraded to the desired state. Once upgraded, the range lock prevents downgrading until the operation completes.
This of course can result in a deadlock between machines, and deadlocks would have to be dealt with. Likewise, if a remote machine needs to upgrade its representation of the cache state for a particular file it might have to communicate with us in order to downgrade our cache state. If a remote machine needs an offset range to be Shared then we have to downgrade our cache state for that range to Shared or Invalid. This might have side effects on us such as causing any dirty buffers or VM pages to be flushed to disk. If the remote machine needs to upgrade its cache state to Exclusive then we have to downgrade ours to Invalid, resulting in a flush and discard of the related buffers and VM pages.

Both range locks and range-based cache state are stored using a common structure called a CST, in a red-black tree. All operations are approximately N*LOG(N). CCMS uses a far superior algorithm to the one that the POSIX locking code (lockf) has to use.

It is important to note that layer #2 cache state is fairly persistent while layer #1 locks tend to be ephemeral. To prevent too much fragmentation of the data space the cache state for adjacent elements may have to be actively merged (either upgraded or downgraded to match). The buffer cache and VM page caches are naturally fragmentary, but we really do not want the CCMS representation to be too fragmented. This also gives us the opportunity to predispose our CCMS cache state so I/O operations done on the local machine are not likely to require communication with other hosts in the cluster. The cache state as stored in CCMS is a superset of the actual buffers and VM pages cached on the local machine.
MFC the new random number generator entropy code from the commit on "2006/01/25 11:56:31 PST".
spl->critical section conversion, plus remove some macros which are now unused due to the removal of spls.
basetime should be static, all access done via PKI.
Simplified NTP kernel interface:
- kern.ntp.delta gives the delta to apply (ns)
- kern.ntp.tick_delta is the correction applied in each tick (ns)
- kern.ntp.default_tick_delta is the default correction for each tick (ns)
- kern.ntp.big_delta is the threshold for kern.ntp.delta to use 10x kern.ntp.default_tick_delta, not the normal value
- kern.ntp.adjust can be used to change the current value of kern.ntp.delta relatively
- kern.ntp.next_leap_second specifies the time_t of the next leap second change; kern.ntp.insert_leap_second != 0 means a leap second is inserted, otherwise it is removed

All ntp_* variables are manipulated on CPU #0 with the exception of set_timeofday. It just sets ntp_delta to 0, which is fine for the moment.
Fix order for SI_SUB_TUNABLES.
Add SI_SUB_LOCK as sysinit priority for the initialisation of tokens and lockmgr locks. This priority should not be abused, since it is higher than SI_SUB_VM.
Remove parameter names. Submitted by Chris Pressey <cpressey@catseye.mine.nu>.
Properly spell compatible and compatibility.
Generalise, and remove SI_SUB_VINUM; use SI_SUB_RAID instead.
Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections.
import from FreeBSD RELENG_4 1.63.2.9
http://www.dragonflybsd.org/cvsweb/src/sys/sys/kernel.h?f=h
Controversial extension methods: CastTo<T> and As<T>

Raymond
You've probably had to do this in C#. You get an object, you need to cast it to some type T and then fetch a property that is returned as an object, so you have to cast that to some other type U, so you can read the destination property.

For example, you have a ComboBoxItem, and you put some extra data in the Tag.
void AddComboBoxItem(Thing thing)
{
    var item = new ComboBoxItem { Content = thing.Name, Tag = thing };
    someComboBox.Items.Append(item);
}

void OnSelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var thing = (Thing)((ComboBoxItem)((ComboBox)sender).SelectedItem)?.Tag;
    ...
}
In this case, when the selection changes, we ask the ComboBox for its currently-selected item, cast it to a ComboBoxItem, then get the Tag from it, then cast the Tag to the Thing that we were after in the first place.
In order to parse that expression, your eyes have to bounce back and forth because the casts are on the left, but the method calls and property accesses are on the right.
//           6      4             2         1       3            5
var thing = (Thing)((ComboBoxItem)((ComboBox)sender).SelectedItem)?.Tag;
You also have to pay attention to the parentheses, or what’s more likely to be the case, you simply trust that the parentheses are in the right place.
Enter the controversial extension method CastTo<T>.
namespace ObjectExtensions
{
    static class ExtensionMethods
    {
        public static T CastTo<T>(this object o) => (T)o;
    }
}
With this extension method, you can write your code as a straightforward left-to-right sequence.
void OnSelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var thing = sender.CastTo<ComboBox>().SelectedItem.CastTo<ComboBoxItem>()?.Tag.CastTo<Thing>();
    ...
}
You can break up the long line for readability, and the fact that there are no large spans of parentheses makes the line breaks easier to place.
void OnSelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var thing = sender
        .CastTo<ComboBox>()
        .SelectedItem
        .CastTo<ComboBoxItem>()
        ?.Tag
        .CastTo<Thing>();
    ...
}
Some people use the as operator instead of a cast, not because they actually care about the failure case (in which the result of the as is null), but because it lets them write things left-to-right.
void OnSelectionChanged(object sender, SelectionChangedEventArgs e)
{
    //            1          2            3                4        5       6
    var thing = ((sender as ComboBox).SelectedItem as ComboBoxItem)?.Tag as Thing;
    ...
}
This lets you read from left to right, but you still have to mind your parentheses. It looks a little prettier, but it also makes debugging harder.
You can write a similar extension method for as.
namespace ObjectExtensions
{
    static class ExtensionMethods
    {
        public static T CastTo<T>(this object o) => (T)o;
        public static T As<T>(this object o) where T : class => o as T;
    }
}
This lets you change the above to
void OnSelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var thing = sender
        .As<ComboBox>()
        .SelectedItem
        .As<ComboBoxItem>()
        ?.Tag
        .As<Thing>();
    ...
}
I suspect that like my crazy thread-switching tasks, people are going to think either that this is a really cool trick, or it’s an offense against nature.
Why not just make it multiple lines of code. I really don’t see why so many people are advocating for pre-obfuscating their source code into a single line.
When a line has a single cast or operation it is readable and much less likely to be mis-understood.
You should really be dealing with the Nothing/Null objects that may be returned at each step anyway as they probably have some meaning to be dealt with.
The downside of multiple lines is that you have to make up names for each of the intermediate results, and names suggest that the value may be needed again later in the method.
I’m a big fan of convenience extension methods and I’d love to have something like these as part of the language, but the issue I have with this particular implementation is that it behaves subtly differently from how you might expect, and as far as I can tell there’s no way to make it work “right”. One problem is that the extension method causes an implicit cast to object before doing the cast to the target type, so it bypasses all the compiler’s static type checking to verify that the cast is legal. The other is that this approach ignores any custom overloads of the casting operator that may be applicable.
I don’t think it’s possible to get a “correct” implementation of these extension methods without dedicated language support, unfortunately.
I agree: extension methods can be neat but those two are not a good idea for the reasons you have mentioned.
Also it’s not such a good idea to hide simple things in extension methods since it makes the code harder too read for others, especially when it turns out that it looks like simple casts but then actually may have unexpected behaviour vs. doing it the normal way.
You could solve this problem the same way Microsoft solved it for classes like Convert and BitConverter: Implement each object individually. Something like:
public static T CastTo<T>(this SomeClass o) where T : SomeClass
Of course, this means that anyone who uses CastTo and has it “fail” needs to add new entries. A dirty rotten hassle.
For converting strings to other types, I have this extension method in my code. If the type parameter is a nullable type, it will return null if the conversion is not possible; otherwise it just fails:

public static T ConvertTo<T>(this string value)
{
    var type = typeof(T);
    if (type == typeof(string)) return (T)(object)value;
    var typeConverter = System.ComponentModel.TypeDescriptor.GetConverter(Nullable.GetUnderlyingType(type) ?? type);
    if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
    {
        if (string.IsNullOrWhiteSpace(value)) return default(T);
        if (!typeConverter.IsValid(value)) return default(T);
    }
    return (T)typeConverter.ConvertFromString(value);
}
I have a few variations on this code: versions where you can specify a default value, use on an IEnumerable, etc. I end up calling this extension method a lot.
Count me in the “cool trick” camp, though I’d say it’s more “creating a fluent syntax for casts” than the pejorative “trick”. The left-to-right syntax, avoiding braces nesting, is a big improvement to readability in my opinion.
Seems like a design flaw to need the first two casts. The listener already knows that it is an object with a selection. That’s one flaw. Then that object should know that the selected item is of Item type and that the Item type has a Tag field.
This is a generic event raised by a control. The control doesn’t know what type of objects have been placed inside it, but the client does. Similarly, the item doesn’t know what the client put in the Tag, but the client knows.
Most of WinForms objects predate generics, hence needing to cast something you don't know about (the Thing). For items the authors did know about, polymorphism still caused issues. ComboBox descends from ListBox (ListControl?) which provides most of the base functionality but cannot know what kind of items to store in its non-generic collection, hence Object. And here is the other problem: they were building using the System.Collections namespace objects, which we have mostly abandoned for System.Collections.Generics.
The tag behavior is a carry over from Visual Basic’s ComboBox, also – which worked this same way.
And so with default settings in WinForms, this works from VB.NET 1.0:
Dim tag = ComboBox1.SelectedItem.Tag ' <= can throw an exception
And so you don't need the cast at all – with Option Strict on, it'll catch it – but then you can rewrite to:
Dim tag = ComboBox1.SelectedItem!Tag ' <== use late binding
And so you're covered! 😉
My argument would have been in .NET 2.0, adding generic versions of the WinForms controls would not have been difficult (could just wrap the non-generic), and therefore they could have fixed this. Easy enough to have a ComboBox<T> just like they have List<T> but didn't remove List, ArrayList, etc.
I like Raymond’s approach, but I’d actually – for combo boxes – move that expression into an extension:
var item = ComboBox1.GetSelectedItemAs<ComboBoxItem>();
var tag = ComboBox1.GetSelectedTagAs<Thing>();
No reason to have the expression copied a hundred places in the code; let's make it easy to read if we're defining an extension function anyway?
As I recall, "late binding" is not really what the VB.NET bang operator does. Rather, !Tag translates directly to .Item("Tag") (which is an early-bound call with a string argument).
Might look more at home in a huge chain of linq than a bunch of casts. Tho’ if at the start then maybe no big diff, especially if assigned to a var on the preceeding line.
I've always found it wildly inconsistent that the BCL team decided to add Cast<T>() to LINQ, but having a similar method on T is somehow controversial.
Here's an item I like to use and share. Since an empty string often signals the same condition as a null, but cannot participate in null operators, I usually add this extension:

namespace Extensions
{
    public static class StringExtensions
    {
        public static string NullIfEmpty(this string str) => string.IsNullOrWhiteSpace(str) ? null : str;
    }
}

That allows this:

if (string.IsNullOrWhiteSpace(someString))
{
    return "-No Entry-";
}
else
{
    return someString;
}

to become:

return someString.NullIfEmpty() ?? "-No Entry-";
Here’s a way to write it using LINQ:
var thing = (
from s in new int[] { 0 }
let comboBox = sender as ComboBox
let selectedItem = comboBox.SelectedItem as ComboBoxItem
where selectedItem != null
let tag = selectedItem.Tag as Thing
select tag
).FirstOrDefault();
It’s easy to read, uses temporary variables so you can easily do null checks etc.
The only downside is having to create the initial array to enumerate.
A potential pitfall of the “CastTo” that is avoided by the “As” is that “CastTo” does not have the same semantics as the cast operation. Another comment noted that CastTo does not respect user-defined conversions. But also, because CastTo boxes value types, the unboxing cast is required to be representation-preserving! Casting, say, a double to int would work differently with CastTo and an old-fashioned cast to int. Of course “As” avoids this by not working on any value types.
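To make the representation-preserving point concrete, here is a hedged sketch (invented for illustration, not from the original post) of how CastTo<T> diverges from a plain cast on value types:

```csharp
object boxed = 3.14;               // a double boxed into an object

int a = (int)3.14;                 // fine: the cast converts the double; a == 3
// int b = (int)boxed;             // throws InvalidCastException at run time:
//                                 // unboxing must name the exact boxed type (double)
// int c = boxed.CastTo<int>();    // fails the same way, because CastTo<T> boxes
//                                 // its argument and then performs an unboxing cast

double d = boxed.CastTo<double>(); // OK: the boxed type matches exactly
```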
The larger point here is: the cast operation is yet another unfortunate example of a feature found in C that is frankly not that great that has made it into C#. I have often pointed out that casting has multiple inconsistent meanings in C#; it means both “I know better than the compiler what the real type of this expression is; crash if I am wrong”, and also “the compiler knows the real type of this expression as well as I do, but I want a related value of a different type; figure out how to get it”. By trying to be all things at once, and having an ambiguous and over-parenthesized syntax, the whole thing becomes far more complicated and unwieldy than it needs to be.
https://devblogs.microsoft.com/oldnewthing/20191227-00/?p=103271
Opened 6 months ago
Last modified 6 months ago
#29345 new Bug
Migrations that recreate constraints can fail on PostgreSQL if table is not in public schema
Description
The django.db.backends.postgresql.introspection.get_constraints(...) function contains an SQL expression that assumes that all tables are in the public schema:
The last few lines read:
JOIN pg_class AS cl ON c.conrelid = cl.oid
JOIN pg_namespace AS ns ON cl.relnamespace = ns.oid
WHERE ns.nspname = %s AND cl.relname = %s
""", ["public", table_name])
The result is that it fails to find any constraints for tables that are not in the public schema. This either leaves us with two identical constraints, when it fails to delete the old, or results in an exception when it subsequently tries to recreate a constraint that it should have deleted:
django.db.utils.ProgrammingError: constraint "migration_app_testref_test_id_bce0807a_fk" for relation "migration_app_testref" already exists
A simple fix is to not check for the public schema, but instead check visibility using pg_catalog.pg_table_is_visible(cl.oid):
JOIN pg_class AS cl ON c.conrelid = cl.oid
WHERE cl.relname = %s AND pg_catalog.pg_table_is_visible(cl.oid)
""", [table_name])
This appears to give the correct result, even when there are multiple tables with the same name in the database.
I have attached a migration file and models file for a simple app migration_app that reproduces this problem. To be able to reproduce it, you must use a custom schema search path when connecting to PostgreSQL, either by setting it as the default for the role, or by specifying it in the connection options:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'migration_test',
        'HOST': 'localhost',
        'USER': 'postgres',
        'OPTIONS': {
            'options': '-c search_path=testschema,public',
        },
    }
}
Attachments (2)

- models.py for migration_app
- Database migrations for migration_app

Change History (3)
https://code.djangoproject.com/ticket/29345
Table of Contents
This document describes the shell coding style used for all the SMF script changes integrated into (Open)Solaris.
All new SMF shell code should conform to this coding standard, which is intended to match our existing C coding standard.
When in doubt, think "what would be the C-Style equivalent ?" and "What does the POSIX (shell) standard say ?"
Similar to cstyle, the basic format is that all lines are indented by TABs or eight spaces.
The encoding used for the shell scripts is either ASCII or UTF-8, alternative encodings are only allowed when the application requires this.
The proper interpreter magic for your shell script should be one of these:
#!/bin/sh
    Standard Bourne shell script.
#!/bin/ksh -p
    Standard Korn shell 88 script. You should always write ksh scripts with -p so that ${ENV} (if set by the user) is not sourced into your script by the shell.
#!/bin/ksh93
    Standard Korn shell 93 script (-p is not needed since ${ENV} is only used for interactive shell sessions).
Harden your script against unexpected (user) input, including command line options, filenames with blanks (or other special characters) in the name, or file input.
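A minimal illustrative sketch of that hardening (the filename here is invented): quoting every expansion is what keeps a name with blanks intact.

```shell
# Hypothetical example: a filename containing blanks stays intact only
# when every expansion of it is quoted.
tmpfile="/tmp/file with blanks.$$"
printf 'one line\n' > "$tmpfile"   # quoted: exactly one file is created
lines=$(wc -l < "$tmpfile")        # quoted: the redirection sees one name
rm -f "$tmpfile"
echo "$((lines))"                  # arithmetic expansion trims wc's padding; prints: 1
```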
Use builtin commands if the shell provides them. For example ksh93s+ (ksh93, version 's+') delivered with Solaris (as defined by PSARC 2006/550) supports the following builtins: basename, cat, chgrp, chmod, chown, cmp, comm, cp, cut, date, dirname, expr, fds, fmt, fold, getconf, head, id, join, ln, logname, mkdir, mkfifo, mv, paste, pathchk, rev, rm, rmdir, stty, tail, tee, tty, uname, uniq, wc, sync. Those builtins can be enabled via

    $ builtin name_of_builtin # in shell scripts

(note that ksh93 builtins implement exact POSIX behaviour - some commands in the Solaris /usr/bin/ directory implement pre-POSIX behaviour. Add /usr/xpg6/bin:/usr/xpg4/bin before /usr/bin in ${PATH} to test whether your script works with the XPG6/POSIX versions).
Use blocks and not subshells if possible, e.g. use

    $ { print "foo" ; print "bar" ; }

instead of

    $ ( print "foo" ; print "bar" )

- blocks are faster since they do not require saving the subshell context (ksh93) or spawning a shell child process (Bourne shell, bash, ksh88 etc.).
Use long options for "set"; for example, instead of

    $ set -x

use

    $ set -o xtrace

to make the code more readable.
Use $(...) instead of `...` - `...` is an obsolete construct in ksh+POSIX sh scripts, and $(...) is a cleaner design, requires no escaping rules, allows easy nesting, etc.
ksh93 has support for an alternative version of command substitutions with the syntax ${ ...;} which do not run in a subshell.
Always put the result of $( ... ) or $( ...;) in quotes (e.g. foo="$( ... )" or foo="$( ...;)") unless there is a very good reason for not doing it
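A small sketch of why (the variable names are invented): an unquoted command substitution undergoes field splitting, while a quoted one does not.

```shell
# Hypothetical demonstration of quoting $(...) results.
value="a  b"                        # two spaces between the words
quoted="$(printf '%s' "$value")"    # quotes preserve the embedded whitespace
set -- $(printf '%s' "$value")      # unquoted: split into two fields
echo "$quoted"                      # prints: a  b
echo "$#"                           # prints: 2
```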
Scripts should always set their PATH to make sure they do not use alternative commands by accident (unless the value of PATH is well-known and guaranteed to be set by the caller)
Scripts should make sure that commands in optional packages are really there, e.g. add a "precheck" block in scripts to avoid later failure when doing the main job.
Check how boolean values are used in your application.
For example:
mybool=0 # do something if [ $mybool -eq 1 ] ; then do_something_1 ; fi
could be rewritten like this:
mybool=false # (valid values are "true" or "false", pointing # to the builtin equivalents of /bin/true or /bin/false) # do something if ${mybool} ; then do_something_1 ; fi
or
integer mybool=0 # values are 0 or 1 # do something if (( mybool==1 )) ; then do_something_1 ; fi
Shell scripts operate on characters and not bytes. Some locales use multiple bytes (called "multibyte locales") to represent one character
ksh93 has support for binary variables which explicitly operate on bytes, not characters. This is the only allowed exception.
Think about whether your application has to handle file names or variables in multibyte locales and make sure all commands used in your script can handle such characters (e.g. lots of commands in Solaris's /usr/bin/ are not able to handle such values - either use ksh93 builtin constructs (which are guaranteed to be multibyte-aware) or commands from /usr/xpg4/bin/ and/or /usr/xpg6/bin)
Only use external filters like grep/sed/awk/etc. if a significant amount of data is processed by the filter or if benchmarking shows that the use of builtin commands is significantly slower (otherwise the time and resources needed to start the filter are far greater then the amount of data being processed, creating a performance problem).
For example:
if [ "$(echo "$x" | egrep '.*foo.*')" != "" ] ; then do_something ; fi
can be re-written using ksh93 builtin constructs, saving several fork()+exec() calls:
if [[ "${x}" == ~(E).*foo.* ]] ; then do_something ; fi
If the first operand of a command is a variable, use -- for any command that accepts it as an end-of-arguments marker, to avoid problems if the variable expands to a value starting with -.
At least print, /usr/bin/fgrep, /usr/xpg4/bin/fgrep, /usr/bin/grep, /usr/xpg4/bin/grep, /usr/bin/egrep and /usr/xpg4/bin/egrep support -- as an "end of arguments" terminator.
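A minimal sketch of the problem (plain sh and grep, with a hypothetical variable value): without "--" a value beginning with "-" would be parsed as options, while after "--" it is treated as data:

```shell
pattern="-rf"                      # hypothetical value starting with "-"
# Without "--", grep would try to parse "-rf" as options and fail;
# "--" marks the end of the option list, so the value is used as the pattern:
printf '%s\n' "-rf" "keep" | grep -- "$pattern"
```

Here grep prints only the line containing "-rf"; dropping the "--" makes the same command fail with a usage error.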
Use export FOOBAR=val instead of FOOBAR=val ; export FOOBAR - it is much faster.
Use a subshell (e.g. ( mycmd )) around places which use set -- $(mycmd) and/or shift, unless the variable affected is either a local one or it is guaranteed that this variable will no longer be used (be careful with loadable functions, e.g. ksh/ksh93's autoload!).
Be careful with using TABs in script code; they are not portable between editors or platforms.
If you use ksh93 use $'\t' to include TABs in sources, not the TAB character itself.
If you have multiple points where your application exits with an error message create a central function for this, e.g.
if [ -z "$tmpdir" ] ; then
    print -u2 "mktemp failed to produce output; aborting."
    exit 1
fi
if [ ! -d $tmpdir ] ; then
    print -u2 "mktemp failed to create a directory; aborting."
    exit 1
fi
should be replaced with
function fatal_error
{
    print -u2 "${progname}: $*"
    exit 1
}

# do something (and save ARGV[0] to variable "progname")

if [ -z "$tmpdir" ] ; then
    fatal_error "mktemp failed to produce output; aborting."
fi
if [ ! -d "$tmpdir" ] ; then
    fatal_error "mktemp failed to create a directory; aborting."
fi
Think about using set -o nounset by default (or at least during the script's development phase) to catch errors where variables are used before they are set, e.g.
$ (set -o nounset ; print ${foonotset})
/bin/ksh93: foonotset: parameter not set
Avoid using eval unless absolutely necessary. Subtle things can happen when a string is passed back through the shell parser. You can use name references to avoid uses such as eval $name="$value".
Use += instead of manually adding strings/array elements, e.g.
foo=""
foo="${foo}a"
foo="${foo}b"
foo="${foo}c"
should be replaced with
foo=""
foo+="a"
foo+="b"
foo+="c"
Use source instead of '.' (dot) to include other shell script fragments - the new form is much more readable than the tiny dot and a failure can be caught within the script.
Use $"..." instead of gettext ... "..." for strings that need to be localized for different locales. gettext will require a fork()+exec() and reads the whole catalog each time it's called, creating a huge overhead for localisation (and the $"..." is easier to use, e.g. you only have to put a $ in front of the catalog and the string will be localised).
If you don't expect to expand files, you can do set -f (set -o noglob) as well. This way the need to use "" is greatly reduced.
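For illustration (POSIX shell, hypothetical pattern value): with globbing disabled, an unquoted variable keeps its literal wildcard characters instead of being expanded against the files in the current directory:

```shell
set -f                    # disable pathname expansion (same as set -o noglob)
pattern='*.conf'
printf '%s\n' $pattern    # prints the literal pattern, no expansion
set +f                    # re-enable globbing
```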
Unless you want to do word splitting, put IFS= at the beginning of a command. This way spaces in file names won't be a problem. You can do IFS='delims' read -r line to override IFS just for the read command. However, you can't do this for the set builtin.
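A small sketch of both points in one loop (POSIX shell): IFS= preserves leading blanks for this single read, and -r keeps backslashes literal:

```shell
# Read each line verbatim: IFS= disables field splitting for read,
# -r disables backslash interpretation.
while IFS= read -r line ; do
    printf '<%s>\n' "$line"
done <<'EOF'
  two leading blanks
back\slash
EOF
```

Without IFS= the leading blanks are stripped; without -r the backslash is consumed.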
Set the message locale (LC_MESSAGES) if you process the output of tools which may be localised.
Example 1. Set LC_MESSAGES when testing for specific output of the /usr/bin/file utility:
# set french as default message locale
export LC_MESSAGES=fr_FR.UTF-8
...
# test whether the file "/tmp" has the filetype "directory" or not
# we set LC_MESSAGES to "C" to ensure the returned message is in english
if [[ "$(LC_MESSAGES=C file /tmp)" = *directory ]] ; then
    print "is a directory"
fi
The environment variable LC_ALL always overrides any other LC_* environment variables (and LANG, too), including LC_MESSAGES. If there is a chance that LC_ALL may be set, replace LC_MESSAGES with LC_ALL in the example above.
Clean up after yourself. For example, ksh/ksh93 has an EXIT trap which is very useful for this.
Note that the EXIT trap is executed for a subshell, and each subshell level can run its own EXIT trap, for example
$ (trap "print bam" EXIT ; (trap "print snap" EXIT ; print "foo"))
foo
snap
bam
Explicitly set the exit code of a script, otherwise the exit code from the last command executed will be used which may trigger problems if the value is unexpected.
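A minimal sketch of the pitfall: when the last command "fails" harmlessly, its status would otherwise leak out as the script's exit code, so set it explicitly:

```shell
#!/bin/sh
# grep exits 1 when nothing matches - harmless here, but without the
# explicit "exit 0" it would become the script's exit status.
grep -q needle /dev/null
exit 0
```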
Use shcomp -n scriptname.sh /dev/null to check for common problems (such as insecure, deprecated or ambiguous constructs) in shell scripts.
Use functions to break up your code into smaller, logical blocks.
Do not use function names which are reserved keywords (or function names) in C/C++/Java or the POSIX shell standard (to avoid confusion and/or future changes/updates to the shell language).
It is highly recommended to use ksh style functions (function foo { ... }) instead of Bourne-style functions (foo() { ... }) if possible (and local variables instead of spamming the global namespace).
The difference between old-style Bourne functions and ksh functions is one of the major differences between ksh88 and ksh93 - ksh88 allowed variables to be local for Bourne-style functions while ksh93 conforms to the POSIX standard and will use a function-local scope for variables declared in Bourne-style functions.
Example (note that "integer" is an alias for "typeset -li"):
# new style function with local variable
$ ksh93 -c 'integer x=2 ; function foo { integer x=5 ; } ; print "x=$x" ; foo ; print "x=$x" ;'
x=2
x=2

# old style function with an attempt to create a local variable
$ ksh93 -c 'integer x=2 ; foo() { integer x=5 ; } ; print "x=$x" ; foo ; print "x=$x" ;'
x=2
x=5
usr/src/lib/libshell/common/COMPATIBILITY says about this issue:
Functions, defined with name() with ksh-93 are compatible with the POSIX standard, not with ksh-88. No local variables are permitted, and there is no separate scope. Functions defined with the function name syntax, maintain compatibility. This also affects function traces.
(This issue also affects /usr/xpg4/bin/sh in Solaris 10 because it is based on ksh88; this is a bug.)
Explicitly set the return code of a function - otherwise the exit code from the last command executed will be used which may trigger problems if the value is unexpected.
The only allowed exception is if a function uses the shell's errexit mode to leave a function, subshell or the script if a command returns a non-zero exit code.
Use the ksh FPATH (function path) feature to load functions which are shared between scripts instead of source - this allows such functions to be loaded on demand rather than all at once.
To match cstyle, the shell token equivalent to the C "{" should appear on the same line, separated by a ";", as in:
if [ "$x" = "hello" ] ; then
    echo $x
fi

if [[ "$x" = "hello" ]] ; then
    print $x
fi

for i in 1 2 3; do
    echo $i
done

for ((i=0 ; i < 3 ; i++)); do
    print $i
done

while [ $# -gt 0 ]; do
    echo $1
    shift
done

while (( $# > 0 )); do
    print $1
    shift
done
DO NOT use the test builtin. Sorry, executive decision.
In our Bourne shell, the test built-in is the same as the "[" builtin (if you don't believe me, try "type test" or refer to usr/src/cmd/sh/msg.c).
So please do not write:
if test $# -gt 0 ; then
instead use:
if [ $# -gt 0 ] ; then
Use "[[ expr ]]" instead of "[ expr ]" if possible since it avoids going through the whole pattern expansion/etc. machinery and adds additional operators not available in the Bourne shell, such as short-circuit && and ||.
Use "(( ... ))" instead of "[ expr ]" or "[[ expr ]]" for arithmetic expressions.
Example: Replace
i=5
# do something
if [ $i -gt 5 ] ; then
with
i=5
# do something
if (( i > 5 )) ; then
Use POSIX arithmetic expressions to test for exit/return codes of commands and functions. For example turn
if [ $? -gt 0 ] ; then
into
if (( $? > 0 )) ; then
Make sure that your shell has a "true" builtin (like ksh93) when executing endless loops like while true ; do do_something ; done - otherwise each loop cycle runs a fork()+exec() cycle to execute /bin/true.
It is permissible to use && and || to construct shorthand for an "if" statement in the case where the if statement has a single consequent line:
[ $# -eq 0 ] && exit 0
instead of the longer:
if [ $# -eq 0 ]; then
    exit 0
fi
Names of variables local to the current script which are not exported to the environment should be lowercase while variable names which are exported to the environment should be uppercase.
The only exceptions are global constants (= global readonly variables, e.g. float -r M_PI=3.14159265358979323846, taken from <math.h>), which may be allowed to use uppercase names, too.
Uppercase variable names should be avoided because there is a good chance of naming collisions with special variable names used by the shell (e.g. PWD, SECONDS, etc.).
Do not use variable names which are reserved keywords in C/C++/Java or the POSIX shell standard (to avoid confusion and/or future changes/updates to the shell language).
The Korn Shell and the POSIX shell standard have many more reserved variable names than the original Bourne shell. All these reserved variable names are spelled uppercase.
Always use '{' and '}' around variable names longer than one character, unless a simple variable name is followed by a blank, /, ;, or $ character (to avoid problems with arrays, compound variables or accidental misinterpretation by users/the shell).
print "$foo=info"
should be rewritten to
print "${foo}=info"
Always put variables into quotes when handling filenames or user input, even if the values are hardcoded or appear to be fixed. Otherwise field splitting and pathname expansion (globbing) may alter the values.
As an alternative, a script may set IFS='' ; set -o noglob to turn off the interpretation of field separators and pattern globbing altogether.
For example, the following is very inefficient since it transforms the integer values to strings and back several times:
a=0
b=1
c=2
# more code
if [ $a -lt 5 -o $b -gt $c ] ; then do_something ; fi
This could be rewritten using ksh constructs:
integer a=0
integer b=1
integer c=2
# more code
if (( a < 5 || b > c )) ; then do_something ; fi
Store lists in arrays or associative arrays - this is usually easier to manage.
For example:
x="
/etc/foo
/etc/bar
/etc/baz
"
echo $x
can be replaced with
typeset -a mylist
mylist[0]="/etc/foo"
mylist[1]="/etc/bar"
mylist[2]="/etc/baz"
print "${mylist[@]}"
or (ksh93-style append entries to a normal (non-associative) array)
typeset -a mylist
mylist+=( "/etc/foo" )
mylist+=( "/etc/bar" )
mylist+=( "/etc/baz" )
print "${mylist[@]}"
Arrays may be expanded using two similar subscript operators, @ and *. These subscripts differ only when the variable expansion appears within double quotes. If the variable expansion is between double-quotes, "${mylist[*]}" expands to a single string with the value of each array member separated by the first character of the IFS variable, and "${mylist[@]}" expands each element of name to a separate string.
Example 2. Difference between [@] and [*] when expanding arrays
typeset -a mylist
mylist+=( "/etc/foo" )
mylist+=( "/etc/bar" )
mylist+=( "/etc/baz" )
IFS=","
printf "mylist[*]={ 0=|%s| 1=|%s| 2=|%s| 3=|%s| }\n" "${mylist[*]}"
printf "mylist[@]={ 0=|%s| 1=|%s| 2=|%s| 3=|%s| }\n" "${mylist[@]}"
will print:
mylist[*]={ 0=|/etc/foo,/etc/bar,/etc/baz| 1=|| 2=|| 3=|| }
mylist[@]={ 0=|/etc/foo| 1=|/etc/bar| 2=|/etc/baz| 3=|| }
Use compound variables or associative arrays to group similar variables together.
For example:
box_width=56
box_height=10
box_depth=19
echo "${box_width} ${box_height} ${box_depth}"
could be rewritten to ("associative array"-style)
typeset -A -E box=( [width]=56 [height]=10 [depth]=19 )
print -- "${box[width]} ${box[height]} ${box[depth]}"
or ("compound variable"-style)
box=(
    float width=56
    float height=10
    float depth=19
)
print -- "${box.width} ${box.height} ${box.depth}"
The behaviour of "echo" is not portable (e.g. System V, BSD, UCB and ksh93/bash shell builtin versions all slightly differ in functionality) and should be avoided if possible. POSIX defines the "printf" command as replacement which provides more flexible and portable behaviour.
Korn shell scripts should prefer the "print" builtin which was introduced as replacement for "echo".
Use print -- "${varname}" when there is the slightest chance that the variable "varname" may contain symbols like "-". Or better, use "printf" instead, for example
integer fx
# do something
print $fx
may fail if "fx" contains a negative value. A better way is to use
integer fx
# do something
printf "%d\n" fx
Use redirect and not exec to open files - exec will terminate the current function or script if an error occurs, while redirect just returns a non-zero exit code which can be caught.
Example:
if redirect 5</etc/profile ; then
    print "file open ok"
    head <&5
else
    print "could not open file"
fi
Redirecting each command separately triggers an open(), write(), close() sequence for every command. It is much more efficient (and faster) to group the redirection into a block, e.g. { echo "foo" ; echo "bar" ; echo "baz" ; } >xxx
Avoid the creation of temporary files and store the values in variables instead if possible
Example:
ls -1 >xxx
for i in $(cat xxx) ; do do_something ; done
can be replaced with
x="$(ls -1)"
for i in ${x} ; do do_something ; done
ksh93 supports binary variables (e.g. typeset -b varname) which can hold any value.
If you create more than one temporary file create an unique subdir for these files and make sure the dir is writable. Make sure you cleanup after yourself (unless you are debugging).
When opening a file use {n}<file, where n is an integer variable rather than specifying a fixed descriptor number.
This is highly recommended in functions, so that fixed file descriptor numbers do not interfere with the calling script.
Example 3. Open a network connection and store the file descriptor number in a variable
function cat_http
{
    integer netfd
    ...
    # open TCP channel
    redirect {netfd}<> "/dev/tcp/${host}/${port}"
    # send HTTP request
    ... >&${netfd}
    # collect response and send it to stdout
    cat <&${netfd}
    # close connection
    exec {netfd}<&-
    ...
}
Use inline here documents, for example
command <<< $x
rather than
print -r -- "$x" | command
Use the -r option of read to read a line. You never know when a line will end in \ and without a -r multiple lines can be read.
Print compound variables using print -C varname or print -v varname to make sure that non-printable characters are correctly encoded.
Example 4. Print a compound variable with non-printable characters
compound x=(
    a=5
    b="hello"
    c=(
        d=9
        e="$(printf "1\v3")"
    )
)
print -v x
will print:
(
    a=5
    b=hello
    c=(
        d=9
        e=$'1\0133'
    )
)
Put the command name and arguments before redirections. You can legally write > file date instead of date > file, but don't do it.
Enable the gmacs editor mode before reading user input with the read builtin, to enable the use of cursor, backspace and delete keys in the edit line.
Example 5. Prompt user for a string with gmacs editor mode enabled
set -o gmacs
typeset inputstring="default value"
...
read -v inputstring?"Please enter a string: "
...
printf "The user entered the following string: '%s'\n" "${inputstring}"
...
Use builtin (POSIX shell) arithmetic expressions instead of expr, bc, dc, awk, nawk or perl.
ksh93 supports C99-like floating-point arithmetic including special values such as +Inf, -Inf, +NaN, -NaN.
Use floating-point arithmetic expressions if calculations may trigger a division by zero or other exceptions - floating point arithmetic expressions in ksh93 support special values such as +Inf/-Inf and +NaN/-NaN which can greatly simplify testing for error conditions, e.g. instead of a trap or explicit if ... then... else checks for every sub-expression you can check the results for such special values.
Example:
$ ksh93 -c 'integer i=0 j=5 ; print -- "x=$((j/i))"'
ksh93: line 1: j/i: divide by zero
$ ksh93 -c 'float i=0 j=-5 ; print -- "x=$((j/i))"'
x=-Inf
Use printf "%a" when passing floating-point values between scripts or as output of a function to avoid rounding errors when converting between bases.
Example:
function xxx
{
    float val
    (( val=sin(5.) ))
    printf "%a\n" val
}

float out
(( out=$(xxx) ))
xxx
print -- $out
This will print:
-0.9589242747
-0x1.eaf81f5e09933226af13e5563bc6p-01
Put constant values into readonly variables
For example:
float -r M_PI=3.14159265358979323846
or
float M_PI=3.14159265358979323846
readonly M_PI
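Attempts to reassign a readonly variable then fail, which a small sketch like this (POSIX shell, hypothetical variable name) demonstrates; the subshell is only there so the failed assignment does not abort the example:

```shell
readonly GREETING="hello"
# The assignment in the subshell is rejected and the subshell exits non-zero:
( GREETING="bye" ) 2>/dev/null || printf '%s\n' "assignment rejected"
printf '%s\n' "$GREETING"
```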
Avoid string-to-number and/or number-to-string conversions in arithmetic expressions to avoid performance degradation and rounding errors.
Example 6. (( x=$x*2 )) vs. (( x=x*2 ))
float x
...
(( x=$x*2 ))
will convert the variable "x" (stored in the machine's native long double datatype) to a string value in base-10 format, apply pattern expansion (globbing), then insert this string into the arithmetic expression and parse the value, which converts it into the internal long double datatype format again. This is both slow and generates rounding errors when converting the floating-point value between the internal base-2 and the base-10 representation of the string.
The correct usage would be:
float x
...
(( x=x*2 ))
i.e. omit the '$' because it is redundant within arithmetic expressions.
Example 7. x=$(( y+5.5 )) vs. (( x=y+5.5 ))
float x
float y=7.1
...
x=$(( y+5.5 ))
will calculate the value of y+5.5, convert it to a base-10 string value and assign the value to the floating-point variable x, which converts the string value back to the internal long double datatype format again.
The correct usage would be:
float x
float y=7.1
...
(( x=y+5.5 ))
i.e. this will save the string conversions and avoid any base2->base10->base2 conversions.
Set LC_NUMERIC when using floating-point constants to avoid problems with radix-point representations which differ from the representation used in the script, for example the de_DE.* locale use ',' instead of '.' as default radix point symbol.
For example:
# Make sure all math stuff runs in the "C" locale to avoid problems with alternative
# radix point representations (e.g. ',' instead of '.' in de_DE.*-locales). This
# needs to be set _before_ any floating-point constants are defined in this script.
if [[ "${LC_ALL}" != "" ]] ; then
    export \
        LC_MONETARY="${LC_ALL}" \
        LC_MESSAGES="${LC_ALL}" \
        LC_COLLATE="${LC_ALL}" \
        LC_CTYPE="${LC_ALL}"
    unset LC_ALL
fi
export LC_NUMERIC=C
...
float -r M_PI=3.14159265358979323846
The environment variable LC_ALL always overrides all other LC_* variables, including LC_NUMERIC. The script should always protect itself against custom LC_NUMERIC and LC_ALL values as shown in the example above.
Put [${LINENO}] in your PS4 prompt so that you will get line numbers when you run with -x. If you are looking at performance issues, put $SECONDS in the PS4 prompt as well.
Source: http://web.archive.org/web/20121103080047/http:/hub.opensolaris.org/bin/view/Project+shell/shellstyle
Removing Custom Class Instances¶
The following text was taken from the Panda3D 1.6 Game Engine Beginner's Guide, available from Packt Publishing, with the author's permission. The text refers to a "custom class", which is a Python class that is not part of the Panda3D SDK. Here is an example of a custom class:
class MyClass:
    def __init__(self):
        self.myVar1 = 10
        self.myVar2 = 20

    def myMethod(self):
        return (self.myVar1, self.myVar2)
From Panda3D 1.6 Game Engine Beginner’s Guide:
Python will automatically garbage collect a custom class instance when all the references to that instance are removed. In theory, this makes garbage collection as simple as cleaning up those references, but because there are so many different places and reasons for these references garbage collection can quickly grow complicated. Following these steps will help to ensure that a custom class instance is properly garbage collected.
Call removeNode() on all NodePaths in the scene graph – The first step is to clear out the NodePaths that the custom class has added to the scene graph. If this step isn't accomplished, it won't necessarily prevent the custom class instance from being garbage collected, but it could. Even if the custom class instance is still garbage collected, the scene graph itself will retain references to the NodePaths that haven't been cleared out and they will remain in the scene graph. There is one exception to this rule: when a parent NodePath has removeNode() called on it, that ultimately results in the removal of its child NodePaths, so long as nothing else retains a reference to them. However, relying on this behavior is an easy way to make mistakes, so it's better to manually remove all of the NodePaths a custom class adds to the scene graph.
Call delete() on all Actors – Just calling removeNode() on an Actor isn't enough. Calling delete() will remove ties to animations, exposed joints, and so on to ensure that all the extra components of the Actor are removed from memory as well.
Set all Intervals, Sequences, and Parallels equal to None – It’s very common for Intervals, Sequences, and Parallels to retain references to something in the class and prevent the class instance from being cleaned up. To be safe, it’s best to remove the references to these Intervals so that they get cleaned up themselves and any references they have to the class are removed.
Detach all 3D sounds connected to class NodePaths – 3D sounds won't actually retain references to the custom class, but if the NodePaths they are attached to are removed with removeNode() and the sounds aren't detached, they'll generate an error and crash the program when they try to access the removed NodePaths. Play it safe and detach the sounds.
End all tasks running in the class – The task manager will retain a reference to the class instance so long as the class instance has a task running, so set up all of the tasks in the custom class to end themselves with return task.done. This is the most reliable way to stop them and clear the reference to the custom class in the task manager.
If the custom class inherits from DirectObject, call self.ignoreAll() – Panda3D's message system will also retain a reference to the custom class if it is set up to receive messages. To be on the safe side, every class that inherits from DirectObject and will be deleted during run time should call self.ignoreAll() to tell the message system that the class is no longer listening to messages. That will remove the reference.
Remove all direct references to the custom class instance – Naturally, the custom class instance won’t get cleaned up if something is referencing it directly, either through a circular self reference, or because it was created as a “child” of another class and that other class has a reference to it stored as a variable. All of these references need to be removed. This also includes references to the custom class instance placed in PythonTags.
The __del__ method is a good way to test if a custom class is being garbage collected. The __del__ method is similar to the __init__ method in that we don't call it ourselves; it gets called when something happens. __init__ is called when a new instance of the class is created; __del__ is called when an instance of the class is garbage collected. It's a pretty common thought to want to put some important clean up steps in the __del__ method itself, but this isn't wise. In fact, it's best not to have a __del__ method in any of our classes in the final product, because the __del__ method can actually hinder proper garbage collection. A better usage is to put a simple print statement in the __del__ method that will serve as a notifier that Python has garbage collected the custom class instance. For example:
def __del__(self):
    print("Instance of Custom Class Alpha Removed")
Once we’ve confirmed that our custom class is being garbage collected properly,
we can remove the
__del__ method.
Source: https://docs.panda3d.org/1.10/cpp/programming/garbage-collection/removing-custom-class-instances
* Brian Dessent wrote on Sat, Apr 19, 2008 at 03:59:19PM CEST:
> Ralf Wildenhues wrote:
> > <>
>
> The MULTITARGETS and foo_{TARGETS,SOURCES,COMMAND} syntax that you came
> up with is certainly in line with the Automake way of doing things, but
> the observation that writing a MULTITARGET rule looks nothing like a
> normal rule is valid I think. And ideally you shouldn't have to worry
> about the stamp or lock names at all, they are implementation details.

Agreed.

> I know this will sound a little crazy, but what about simply an
> AUTOMAKE_OPTIONS/AM_INIT_AUTOMAKE keyword that says "whenever I write a
> rule with more than one target, assume I want the 'one program, multiple
> outputs' semantics and not the traditional make semantics." Automake
> would transparently handle coming up with lock and stamp names and
> adding them to a 'clean' target, as well as emitting all the boring lock
> stuff around the recipe. This is a DWIM kind of idea, since I have the
> sense that people do in fact write such rules with the expectation of
> "one program, multiple outputs" semantics. And it would of course
> default to off so that there's no worry of it regressing anything
> existing; and it can be enabled per-file via AUTOMAKE_OPTIONS.

Do you mean that, given that keyword, all rules of the form

    target1 target2 : prereq ...
            command ...

should be rewritten to be a multiple-target rule? Ugh. That would
violate the "input appears in output" principle quite heavily. And it
would be rather inflexible in that you cannot at the same time have, in
the same Makefile.am, a rule that is meant to be a separate one for each
target. In this case I would prefer inventing a new syntax (like ::: as
suggested by Olly in the other thread); at least that avoids ambiguities.

More questions, giving the whole thing only a couple minutes thought:

- does this scale? It's not all that useful if Bob has to write one such
  rule for each of his sets of files: that's exactly what he wanted to
  avoid.
- can automake extract all needed information if, say, the targets are
  not given literally but as either $(macro) or $(substituted_macro) or
  @address@hidden

- are we safe on the stamp namespace? Probably yes, we can just use a
  counter and an automake-reserved prefix or so.

Cheers,
Ralf
Source: http://lists.gnu.org/archive/html/automake/2008-04/msg00053.html
Minitest is inspired by Ruby minispec.
Project Description
This project is inspired by Ruby minispec, but for now it just implements some methods, including:
must_equal, must_true, must_false, must_raise, must_output, only_test.
And some other useful functions:
p, pp, pl, ppl, length, size, inject, flag_test, p_format, pp_format, pl_format, ppl_format, capture_output.
github:
pypi:
How to install
pip install minitest
How to use
For a simple example, suppose you have written some code and want to put the unit tests in the same file, like this: code:
if __name__ == '__main__':
    # import the minitest
    from minitest import *
    import operator

    # declare a variable for test
    tself = get_test_self()
    # you could put all your test variables on tself
    # just like declaring your variables on setup.
    tself.jc = "jc"

    # declare a test
    with test(object.must_equal):
        tself.jc.must_equal('jc')
        None.must_equal(None)

    with test(object.must_true):
        True.must_true()
        False.must_true()

    with test(object.must_false):
        True.must_false()
        False.must_false()

    # using a function to test equal.
    with test("object.must_equal_with_func"):
        (1).must_equal(1, key=operator.eq)
        (1).must_equal(2, key=operator.eq)

    def div_zero():
        1/0

    # test exception
    with test("test must_raise"):
        (lambda : div_zero()).must_raise(ZeroDivisionError)
        (lambda : div_zero()).must_raise(ZeroDivisionError,
            "integer division or modulo by zero")
        (lambda : div_zero()).must_raise(ZeroDivisionError, "in")

    # when an assertion fails, it will show the failure_msg
    with test("with failure_msg"):
        the_number = 10
        (the_number % 2).must_equal(1,
            failure_msg="{0} is the number".format(the_number))
        # it won't show the failure_msg
        (the_number % 2).must_equal(0,
            failure_msg="{0} is the number".format(the_number))
        (True).must_false(
            failure_msg="{0} is the number".format(the_number))
        (lambda : div_zero()).must_raise(ZeroDivisionError, "in",
            failure_msg="{0} is the number".format(the_number))

    def print_msg_twice(msg):
        print msg
        print msg
        return msg

    with test("capture_output"):
        with capture_output() as output:
            result = print_msg_twice("foobar")
        result.must_equal("foobar")
        output.must_equal(["foobar","foobar"])

    with test("must output"):
        (lambda : print_msg_twice("foobar")).must_output(
            ["foobar","foobar"])
        (lambda : print_msg_twice("foobar")).must_output(
            ["foobar","wrong"])
result:
Running tests:

.FFFF.

Finished tests in 0.013165s.

1) Failure:
File "/Users/Colin/work/minitest/test.py", line 29, in <module>:
EXPECTED: True
ACTUAL: False

2) Failure:
File "/Users/Colin/work/minitest/test.py", line 32, in <module>:
EXPECTED: False
ACTUAL: True

3) Failure:
File "/Users/Colin/work/minitest/test.py", line 38, in <module>:
EXPECTED: 2
ACTUAL: 1

4) Failure:
File "/Users/Colin/work/minitest/test.py", line 47, in <module>:
EXPECTED: 'in'
ACTUAL: 'integer division or modulo by zero'

5) Failure:
File "/Users/colin/work/minitest/test.py", line 86, in <module>:
MESSAGE: '10 is the number'
EXPECTED: 1
ACTUAL: 0

6) Failure:
File "/Users/colin/work/minitest/test.py", line 92, in <module>:
MESSAGE: '10 is the number'
EXPECTED: False
ACTUAL: True

7) Failure:
File "/Users/colin/work/minitest/test.py", line 95, in <module>:
MESSAGE: '10 is the number'
EXPECTED: 'in'
ACTUAL: 'integer division or modulo by zero'

8) Failure:
File "/Users/colin/work/minitest/test.py", line 102, in <module>:
EXPECTED: ['foobar', 'wrong']
ACTUAL: ['foobar', 'foobar']

12 tests, 22 assertions, 8 failures, 0 errors.
[Finished in 0.1s]
only_test function
If you just want to run some of the test functions, you can use the only_test function to specify them. Notice, you must put it above the test functions, just like in the example below. code:
def foo():
    return "foo"

def bar():
    return "bar"

if __name__ == '__main__':
    from minitest import *

    only_test("for only run", foo)

    with test("for only run"):
        (1).must_equal(1)
        (2).must_equal(2)
        pass

    with test("other"):
        (1).must_equal(1)
        (2).must_equal(2)
        pass

    with test(foo):
        foo().must_equal("foo")

    with test(bar):
        bar().must_equal("bar")
It will only run test(“for only run”) and test(foo) for you.
Other useful functions
capture_output, p, pp, pl, ppl, length, size, p_format, pp_format, pl_format and ppl_format can be used on any object.
capture_output, captures the standard output. code:

def print_msg_twice(msg):
    print msg
    print msg
    return msg

with capture_output() as output:
    result = print_msg_twice("foobar")

print result
print output
print result:
"foobar"
["foobar","foobar"]
p, print with title. This function will print variable name as the title. code:
value = "Minitest"
value.p()
value.p("It is a value:")
value.p(auto_get_title=False)
print result:
value : 'Minitest'
It is a value: 'Minitest'
'Minitest'
pp, pretty print with title. This function will print variable name as the title in the first line, then pretty print the content of variable below the title. code:
value = "Minitest"
value.pp()
value.pp("It is a value:")
value.pp(auto_get_title=False)
print result:
value :
'Minitest'
It is a value:
'Minitest'
'Minitest'
pl, print with title and code location. This function is just like p, but will print the code location on the first line. Some editors support jumping to the line of that file, such as Sublime 2. code:

value = "Minitest"
value.pl()
value.pl("It is a value:")
value.pl(auto_get_title=False)

ppl, pretty print with title and code location. This function is just like pp, but will print the code location on the first line. Notice: it will print a blank line first. code:
value = "Minitest" value.ppl() value.ppl("It is a value:") value.p_format, get the string just like p function prints. I use it in debugging with log, like: logging.debug(value.p_format()) code:
value = "Minitest" value.p_format()
return result:
value : 'Minitest'
pp_format, get the string that the pp function prints. I use it when debugging with logs, like: logging.debug(value.pp_format()) code:
value = "Minitest" value.pp_format()
return result:
value :\n'Minitest'
pl_format, get the string that the pl function prints. I use it when debugging with logs, like: logging.debug(value.pl_format()) code:
value = "Minitest" value.pl_format()
return result:
line info: File "/Users/Colin/work/minitest/test.py", line 76, in <module>\nvalue : 'Minitest'
ppl_format, get the string that the ppl function prints. I use it when debugging with logs, like: logging.debug(value.ppl_format()) code:
value = "Minitest" value.ppl_format()
return result:
line info: File "/Users/Colin/work/minitest/test.py", line 76, in <module>\nvalue :\n'Minitest'
length and size invoke the len function on the caller's object. code:
[1,2].length() # 2, just like len([1,2])
(1,2).size()   # 2, just like len((1,2))
inject_customized_must_method (or inject) injects a function that you customize. Why did I add this? In many cases I use numpy arrays, and when it comes to comparing two numpy arrays, I have to write:
import numpy
numpy.array([1]).must_equal(numpy.array([1.0]), numpy.allclose)
For convenience, I use inject_customized_must_method or inject like this:
import numpy
inject(numpy.allclose, 'must_close')
numpy.array([1]).must_close(numpy.array([1.0]))
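Attaching methods to arbitrary objects (including numpy arrays) takes interpreter-level tricks, but the fluent must_* style itself can be sketched with a small wrapper. This is illustrative Python 3, not how inject actually works, and math.isclose stands in for numpy.allclose so the sketch needs no third-party packages:

```python
# Illustrative must_* wrapper; not minitest's injection mechanism.
import math

class Must:
    """Wrap a value and expose fluent assertions on it."""
    def __init__(self, value):
        self.value = value

    def must_equal(self, expected, compare=None):
        # An optional comparator plays the role of inject()'s custom function.
        ok = compare(self.value, expected) if compare else self.value == expected
        assert ok, "EXPECTED: %r ACTUAL: %r" % (expected, self.value)
        return True

Must(2).must_equal(2)
Must(1.0).must_equal(1.0 + 1e-12, compare=math.isclose)  # 'must_close' behaviour
```

The comparator parameter mirrors the two-argument form shown above: equality by default, a user-supplied closeness test when floating-point noise makes `==` too strict.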
flag_test prints the message 'There are test codes in this place!' with the code location. code:
flag_test()

# add a title
flag_test("for test")
print result:
File "/Users/colin/work/minitest/test.py", line 97, in <module>: There are test codes in this place! File "/Users/colin/work/minitest/test.py", line 101, in <module>: for test : There are test codes in this place!
https://pypi.org/project/minitest/
Hi guys, I have a very difficult problem on my hands. I'm trying to render part of the ghosts when my player passes close to them. They are 3D objects, and I really need to render just part of them based on the player's distance or the player's collider.
Any clue?
Answer by tanoshimi · Sep 03, 2013 at 04:49 PM
Hi,
I'm having a little trouble matching your description with the diagram (which seems instead to depict a 2D object, only part of which becomes visible when it is contained inside a yellow circle - I can see where fafase's comparison to "Luigi's Mansion" came from... ;)
Anyway, one approach to solve the problem described would be to create a shader that exposes a Float3 property (call it "player_pos", say), and then use the Update() function of a script to set that property to the player's transform.position. Unity will provide this value to the shader in world coordinates, so in the vertex program of the shader you'll need to transform the model into world coordinates too, which you can do using the inbuilt _Object2World matrix:
input.pos_in_world_space = mul(_Object2World, input.vertex);
Then, in the fragment function, calculate the distance between the player and the model (now both in world coordinates) and return the appropriate colour based on how far away they are:
float dist = distance(input.pos_in_world_space, player_pos);
if (dist < VisibleDistance) {
return VisibleColour;
}
else {
return InvisibleColour;
}
(I know that "InvisibleColour" is a bit of an oxymoron, but I wasn't sure if you wanted enemies that were far away to still be partially visible, say, in which case just return a colour with a low alpha value).
This is pseudocode because I'm not near the right computer at the moment - let me know if it doesn't make any sense and I might be able to whip up a proper example later.
--- EDIT ---
Ok, so more detailed explanation follows:
First, create a new shader that exposes the texture, player position, visibility distance, and outline width as properties.
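The per-fragment decision described above - measure the world-space distance from the surface point to the player and compare it to the visibility distance - is plain vector math. Here is a Python sketch of just that logic (the real thing lives in the Cg/HLSL fragment shader; names mirror the snippets above):

```python
# The fragment test in plain Python: within VisibleDistance -> visible.
# This is only the math the shader performs, not shader code.
import math

def fragment_visible(pos_in_world_space, player_pos, visible_distance):
    """True where the shader would return VisibleColour."""
    dist = math.dist(pos_in_world_space, player_pos)
    return dist < visible_distance

player = (0.0, 1.0, 0.0)
print(fragment_visible((0.5, 1.0, 0.0), player, 2.0))  # → True  (revealed)
print(fragment_visible((9.0, 1.0, 0.0), player, 2.0))  # → False (faint or discarded)
```

Because the test runs per fragment, only the part of the mesh inside the reveal radius draws fully; the rest gets the "invisible" colour (or `discard`), which is what produces the partial-ghost effect.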
Now, create a new material and select the newly-created Custom/Proximity shader as its shader. Apply this material onto your "ghost" object (the thing you want to get visible when the player gets close to it), and set appropriate values for the texture, visibility distance, outline width properties.
Now create a new Javascript as follows:
#pragma strict
// Take effect even in edit mode
@script ExecuteInEditMode()
// Get a reference to the player
public var player : Transform;
function Update ()
{
if (player != null) {
// Pass the player location to the shader
renderer.sharedMaterial.SetVector("_PlayerPosition", player.position);
}
}
Add this script onto the ghost object too - it is used to tell the shader where the player is each frame, so that it can decide how to colour itself accordingly. Then drag your player object onto the "Player" property slot of the script.
It should look like this:
Drag the player around and up and down and you should see only that part of the ghost object within the specified _VisibilityDistance appear. The rest should appear very faintly transparent, so you can see what's going on. If you want the rest to be completely invisible, you can replace the contents of the final else() in the fragment shader with a "discard;"
Sorry, your explanation is really great; the problem is me. I don't know much about creating shaders, and I think that's why I can't understand your explanation. Could you make an explanation for dummies? XD I think there are a lot of people who would really like to make this effect.
I've edited the answer to include sample code - please give this a try and see how you get on.
I don't know what to say - it's exactly the effect I wanted. You must be a really good developer; you should be proud of yourself. Thank you, and I wish you a lot of the world's luck (though I'm keeping some for myself). ^^
Answer by SMJMoloney · Oct 20, 2015 at 11:43 PM
I know this is an old thread, but if anyone is getting the "not enough numerical arguments" error that I was, just change the highlighted parts from float2 to float4 on lines 56 and 62 of the full script.
Also, if you don't like Javascript or it's just being difficult with Unity 5, here it is in C#
using UnityEngine;
using System.Collections;
public class ProximityXray : MonoBehaviour {
public Transform player;
Renderer render;
    // Use this for initialization
    void Start () {
        render = gameObject.GetComponent<Renderer>();
    }

    // Update is called once per frame
    void Update () {
render.sharedMaterial.SetVector("_PlayerPosition", player.position);
}
}
Is this method still working in Unity 5.3? I can't get it to work :/
It is, indeed. Just tested it there in 5.3.2f1 and added a small demo project for you.
The reason I used this was to actually do the opposite. If you change the less than symbols to greater than symbols on lines 55 and 58 of the shader, you get the inverse effect, though the outline disappears. Doing this, I used it for a stealth prototype. Excellent shader. (The project)
This is really GOOD stuff! It even works with 3D stuff! Thank you very much. I'm thinking of using it as a fog of war.
I don't know anything about shaders, so that is the reason I come to you once again. Is there a way to retain the original materials and just add the proximity effect, so as to be able to apply this effect to all objects in the scene even though they have different materials? Is it possible to code something that stores the original shader variables and applies only the proximity effect?
thank you for your time <3 :D'
https://answers.unity.com/questions/529814/how-to-have-2-different-objects-at-the-same-place.html
17 May 2012 06:30 [Source: ICIS news]
KUALA LUMPUR (ICIS)--Siam Mitsui is currently running its three purified terephthalic acid (PTA) units in Map Ta Phut, Thailand, at full capacity.
The company's PTA units at the site have a combined nameplate capacity of 1.45m tonnes/year.
“About 20,000 tonnes of PTA is being moved monthly from
This trade flow, which is unusual, came about because of the shutdown of Mitsui Chemicals’ plants at its Iwakuni petrochemical site since April, following an explosion. The company has a 400,000 tonne/year PTA plant at the site.
“It remains unclear when the Iwakuni unit can be restarted,” the source said. Officials from Mitsui Chemicals could not be immediately reached for comments.
The company is a joint-venture company between
http://www.icis.com/Articles/2012/05/17/9560475/apic-12-siam-mitsui-runs-map-ta-phut-pta-units-at-full-capacity.html
Developer forums (C::B DEVELOPMENT STRICTLY!) > Plugins development
Plug-Ins : some suggested "case" conventions
(1/3) > >>
killerbot:
When one creates a plug-in for Code::Blocks (focusing here on contrib plug-ins that should work under Windows and Linux), the plug-in project will contain several files :
1) Source files
2) Project files (*.cbp, Makefile.am's ..)
3) manifest file
4) resource files
This causes the plug-in name (or its ID, or whatever you like to call it) to show up in several places. The majority of those places require that name to be identical, meaning using the same case !! [On Windows this is no issue, but on Linux it is very important.]
Let's have a look at those places where that name shows up.
But first let's decide on a name for our test case plug-in : "TestCase".
--- Quote ---This would be the *suggestion* : start every word in the name with an uppercase : JustLikeThis
--- End quote ---
Allrighty, let's continue our excursion :
1) manifest.xml
--- Code: ---<Plugin name="TestCase">
--- End code ---
--> it states the plug-in name
2) the zip file of the resources : "TestCase.zip"
This zip file is created as a post-build step, so the correct command needs to show up in the following project files :
TestCase.cbp (to build on windows with CB)
TestCase-unix.cbp (to build on linux with CB)
Makefile.am : to build on linux and windows
Some example snippets out of those project files :
--- Code: --- <ExtraCommands>
<Add after="zip -j9 ..\..\..\devel\share\codeblocks\TestCase.zip resources\manifest.xml resources\*.xrc" />
<Mode after="always" />
</ExtraCommands>
--- End code ---
--- Code: --- <ExtraCommands>
<Add after="zip -j9 ../../../devel/share/codeblocks/TestCase.zip resources/manifest.xml resources/*.xrc" />
<Mode after="always" />
</ExtraCommands>
--- End code ---
--- Code: ---EXTRA_DIST = MyFirst.xrc MySecond.xrc manifest.xml
pkgdata_DATA = TestCase.zip
CLEANFILES = $(pkgdata_DATA)
TestCase.zip:
PWD=`pwd` cd $(srcdir) && zip $(PWD)/TestCase.zip manifest.xml *.xrc > /dev/null
--- End code ---
3) The code registering the plug-in and loading the resource :
TestCase.cpp :
--- Code: ---// Register the plugin
namespace
{
PluginRegistrant<TestCase> reg(_T("TestCase"));
};
/* ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- */
TestCase::TestCase()
{
//ctor
if(!Manager::LoadResource(_T("TestCase.zip")))
{
NotifyMissingFile(_T("TestCase.zip"));
}
}// end of constructor
--- End code ---
4) the output "shared" library (*.so or *.dll)
Again some example snippets from the project files
--- Code: ---<Option output="..\..\..\devel\share\CodeBlocks\plugins\TestCase.dll" prefix_auto="0" extension_auto="0" />
--- End code ---
--- Code: ---<Option output="../../../devel/share/codeblocks/plugins/libTestCase.so" />
--- End code ---
--- Code: ---lib_LTLIBRARIES = libTestCase.la
libTestCase_la_LDFLAGS = -module -version-info 0:1:0 -shared -no-undefined -avoid-version
libTestCase_la_LIBADD =
libTestCase_la_SOURCES =
--- End code ---
And for consistency it's also nice if your project names are alike :
--- Code: --- <Project>
<Option title="TestCase" />
--- End code ---
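Since all of these places must agree case-sensitively, a small checker script can flag mismatches before they break a Linux build. This is an illustrative sketch, not part of the Code::Blocks tree; the file contents below are made-up examples:

```python
# Illustrative checker: flag occurrences of the plug-in name that differ
# only in case, which would break the build on Linux.
import re

def case_mismatches(canonical, files):
    """files maps filename -> text; return (filename, bad_spelling) pairs."""
    pattern = re.compile(re.escape(canonical), re.IGNORECASE)
    bad = []
    for fname, text in files.items():
        for m in pattern.finditer(text):
            if m.group(0) != canonical:
                bad.append((fname, m.group(0)))
    return bad

files = {
    "manifest.xml": '<Plugin name="TestCase">',
    "Makefile.am": "pkgdata_DATA = Testcase.zip",  # wrong case on purpose
}
print(case_mismatches("TestCase", files))  # → [('Makefile.am', 'Testcase')]
```

Run over the manifest, project files, Makefile.am, and sources, this catches exactly the Windows-invisible, Linux-fatal mismatches the post describes.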
takeshi miya:
I'd say:
1) CamelCase
2) lowercase
3) CamelCase
4) lowercase
5) CamelCase
In short, lowercase for all filenames, and CamelCase for everything else (project names, classes...).
mandrav:
--- Quote from: Takeshi Miya on October 24, 2006, 03:01:38 pm ---I'd say:
1) CamelCase
2) lowercase
3) CamelCase
4) lowercase
5) CamelCase
In short, lowercase for all filenames, and CamelCase for everything else (project names, classes...).
--- End quote ---
You missed the point completely.
All 5 points must match, case-sensitively (even under windows).
thomas:
--- Quote from: mandrav on October 24, 2006, 03:51:16 pm ---You missed the point completely.
All 5 points must match, case-sensitively (even under windows).
--- End quote ---
How dare you say that! Takeshi knows.
takeshi miya:
--- Quote from: mandrav on October 24, 2006, 03:51:16 pm ---You missed the point completely.
--- End quote ---
Sorry, but yes, I understood Lieven's post as something to discuss and to change...
Not how it is now.
The title "suggested case conventions" doesn't sounds like "enforced case conventions".
If this was meant as a documentation and not to discuss, one would expected it in the Wiki.
--- Quote from: thomas on October 24, 2006, 04:04:01 pm ---How dare you say that! Takeshi knows.
--- End quote ---
Bleh, but thanks for the attitude anyways.
Navigation
[0] Message Index
[#] Next pageGo to full version
https://forums.codeblocks.org/index.php/topic,4290.0/wap2.html?PHPSESSID=53833cb6248503dbfea4891d0a1012aa
shrink cells composing PolyData More...
#include <vtkShrinkPolyData.h>
shrink cells composing PolyData
vtkShrinkPolyData shrinks cells composing a polygonal dataset (e.g., vertices, lines, polygons, and triangle strips) towards their centroid. The centroid of a cell is computed as the average position of the cell points. Shrinking results in disconnecting the cells from one another. The output dataset type of this filter is polygonal data.
During execution the filter passes its input cell data to its output. Point data attributes are copied to the points created during the shrinking process.
Definition at line 154 of file vtkShrinkPolyData.h.
Definition at line 158 of file vtkShrinkPolyData.h.
Set the fraction of shrink for each cell.
Get the fraction of shrink for each cell.
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkPolyDataAlgorithm.
Definition at line 180 of file vtkShrinkPolyData.h.
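The shrink operation itself - average the cell's points to get the centroid, then move each point toward it by the shrink fraction - is easy to sketch outside VTK. This is an illustrative pure-Python version; VTK's implementation is C++:

```python
# Per-cell shrink toward the centroid, as vtkShrinkPolyData describes.
def shrink_cell(points, shrink_factor):
    """points: list of (x, y, z) tuples. A factor of 1.0 leaves the cell
    unchanged; 0.0 collapses it onto its centroid (the average position
    of the cell points)."""
    n = len(points)
    centroid = tuple(sum(p[i] for p in points) / n for i in range(3))
    return [
        tuple(centroid[i] + shrink_factor * (p[i] - centroid[i]) for i in range(3))
        for p in points
    ]

triangle = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)]
print(shrink_cell(triangle, 0.5))
# → [(0.5, 0.5, 0.0), (2.0, 0.5, 0.0), (0.5, 2.0, 0.0)]
```

Because each cell shrinks toward its own centroid, shared points are duplicated per cell in the real filter, which is what visually disconnects neighbouring cells.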
https://vtk.org/doc/nightly/html/classvtkShrinkPolyData.html
Published by Alexander Allis, modified about 1 year ago
1
Social Media and Teaching Tools by Hongmei Chi
2
Social Media Blogging Teaching Tools Piazza Outlines
3
The three most trusted forms of advertising are:
Recommendations from people I know - 90%
Consumer opinions posted online - 70%
Branded websites - 70%
What social media has done is make traditional two-way word-of-mouth marketing accessible and available to everyone with a computer (or phone). The question is, how do we engage the guy with the megaphone? Word-of-Mouth Goes Virtual
4 What is Social Media?
5
Facebook LinkedIn Twitter LiveJournal CafeMom The BIG FIVE
6
def (wikipedia.com) - "A type of website with regular entries of commentary, descriptions of events, or other material such as graphics or video."
The Stats
Approximately 200 million blogs
90% of blogs fail in the first year
The majority of web users interact with blogs every single day
Blogging
7
Why blog?
Constantly updated content
Gives visitors a reason to check out your site again and again
Become an EXPERT in your field
Search Engine Optimization
ENGAGEMENT with customers
Blogging
8
How to blog?
Common blogging platforms: Blogger, Blogspot, WordPress
Minimum of 2x per month
1000 words or less per entry
Blogs are NOT press releases
EACH post should have some interactive/multimedia element:
Polls
Video, photos
Links to other relevant sites
Lists
Enable comments and only filter them if they are explicit or spam
Integrate all of your other social media platforms ("Add to Any"; "Tweetmeme"; "Meebo")
Blogging
9
10
The Stats
400 million users
The average U.S. Internet user spends more time on Facebook than on Google, Yahoo, YouTube, Microsoft, Wikipedia and Amazon combined.
Average age is 38
#2 visited site (after Google)
The New Oxford American Dictionary voted "unfriend" as the 2009 Word of the Year.
Facebook
11
1. Use as a learning management system (LMS). If you do not have access to Blackboard, Moodle, Desire2Learn or another LMS, you can use Facebook to share documents, poll/quiz your students, and conduct group discussions.
2. Reference citations. Facebook has hundreds of applications (apps) that can be used for educational purposes. Worldcat.org's CiteMe is an app that provides formatted citations for books.
3. Announcements. Send out reminders, upcoming events and schedule changes.
FIVE FACEBOOK IDEAS FOR TEACHING AND LEARNING
12
1. Log a teachable moment. Athletic training students can tweet about variations of skills they learn during their clinical experiences, such as modifications to a Lachman's test.
2. Quiz. Send quiz questions to your class and provide bonus points to students who respond within a given timeframe.
3. Track a concept. Present a concept in class and ask students to tweet about the concept when they read about it in the professional literature. (Note: Be sure students remember patient confidentiality and do not tweet about current athletes or their conditions at their site.)
4. Track time. Athletic training students can use Twitter to keep track of their time spent in their clinical settings.
5. Learning diary. Students can keep a journal of the things that they learn during their clinical rotations. At the end of the week, a weekly reflection journal exercise can be submitted.
Five Twitter Ideas for Teaching and Learning
13
4. Post class notes. Post documents with descriptions in any file format on Facebook.
5. Create group discussions. Split your class into smaller study groups for class projects. You can keep track of students' participation, provide guidance, and monitor progress.
FIVE FACEBOOK IDEAS FOR TEACHING AND LEARNING (con't)
http://slideplayer.com/slide/3550469/
My intention is not for this to be a massive how-to on HTTP modules - I think this little DLL is quite useful (I know there are similar libraries out there that do mostly the same thing). Hopefully, posting it on Code Project will get it out to as many people as possible.
The Rewrite class implements the IHttpModule interface. This interface is bound to the ASP.NET app in the httpModules section of the System.Web section in the Web.Config.
Add this line to the <system.web> section of your Web.Config file:
<httpModules>
  <add type="Rewriter.Rewrite, Rewriter" name="Rewriter" />
</httpModules>
The interface is implemented as follows:
public class Rewrite : System.Web.IHttpModule {
The Init function of the interface wires up a BeginRequest event handler. This ensures that as the page is requested, our custom handler intercepts the call, allowing us to replace the path with a new path.
public void Init(System.Web.HttpApplication Appl)
{
    Appl.BeginRequest += new System.EventHandler(Rewrite_BeginRequest);
}
The Dispose function is required by the interface.
public void Dispose()
{
    // Empty
}
The BeginRequest handler receives the HttpApplication context through the sender object.
public void Rewrite_BeginRequest(object sender, System.EventArgs args)
{
    System.Web.HttpApplication Appl = (System.Web.HttpApplication)sender;
    string path = Appl.Request.Path;
    string indicator = "/Content/";
    ... More Code ...
}
This gives the path that we can then process using the GetTab function. This function queries the DotNetNuke TabController system to give us the correct tab ID and subsequent path, given an original path. The indicator I use is a directory string "/Content/" to indicate that this is different from a normal query.
private string GetTab(string path, string indicator) { ... }
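The translation that such a function performs - recognize paths under the /Content/ indicator and map the friendly name to a tab-id URL - can be sketched in Python. The tab table and URL shape below are hypothetical stand-ins; the real code queries DotNetNuke's TabController:

```python
# Hypothetical sketch of the friendly-path -> tab-URL mapping.
# Tab ids and the Default.aspx URL shape are illustrative.
TABS = {"about-us": 57, "products": 58}

def get_tab(path, indicator="/Content/"):
    """Map a friendly /Content/<name> path to a tab URL, or None."""
    if not path.startswith(indicator):
        return None                      # a normal request: leave it alone
    name = path[len(indicator):].strip("/").lower()
    tab_id = TABS.get(name)
    return None if tab_id is None else "/Default.aspx?tabid=%d" % tab_id

print(get_tab("/Content/About-Us/"))  # → /Default.aspx?tabid=57
print(get_tab("/images/logo.gif"))    # → None
```

Returning None for non-matching paths is what lets ordinary requests (images, existing .aspx pages) pass through the module untouched.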
The SendToNewUrl function initiates a rewrite of the path. There are two mechanisms that are possible:
Context.RewritePath retains the URL that it is passed.
Response.Redirect drops the URL and redirects to the appropriate tab.
public void SendToNewUrl(string url, bool spider, System.Web.HttpApplication Appl)
{
    if (spider)
    {
        Appl.Context.RewritePath(url);
    }
    else
    {
        Appl.Response.Redirect(url, true);
    }
}
Fingers crossed, you get the gist of the system. This is not a difficult task, but hopefully it will make your DotNetNuke sites a little easier to navigate or spider.
Review the GetTab function for more in-depth string parsing. I know I should have used regular expressions to get to the guts of it all.
There are plenty of additional ways to improve this functionality.
Finally, a big thank you goes out to the Code Project and DotNetNuke and their respective communities.
http://www.codeproject.com/KB/aspnet/dnn2url_rewrite.aspx
Goetz Lindenmaier from SAP AG found an issue loading JNI libraries where the stack is set to 'execstack', or where this flag is not set at all. After loading such a library, the stack guard pages are lost, and thus stack overflows can no longer be detected.
see comments
PUBLIC COMMENTS
-------- Original Message --------
Subject: Request for review [new bug](S): Stack guard pages are no more
protected after loading a shared library with executable stack.
Date: Wed, 26 Oct 2011 14:07:22 +0200
From: Lindenmaier, Goetz <###@###.###>
Hi,
I am Goetz Lindenmaier from SAP AG, working in the SAP JVM Jit Team.
We found an issue loading JNI libraries where the stack is set to
'execstack', or where this flag is not set at all. After loading such a library, the
stack guard pages are lost, and thus stack overflows can no longer be detected.
We had a lot of libraries in our test systems where the execstack flag was missing.
This webrev contains a small test and a possible fix for the problem:
As I don't have a bug ID yet, I just used an arbitrary number to make the test executable with jtreg. Please open a bug for this issue.
I'll fix the test if I have a proper number.
This problem has existed since 7019808, which adds -z noexecstack to the linker command on Linux.
The fix I propose does the following: it reads the ELF file to detect whether loading the library can change stack properties. If so, it requests a safepoint and loads the library during the safepoint. After loading the library, it tests the guard pages with SafeFetch. If they are no longer protected, it reprotects the guard pages of all Java stacks.
There might be cases where reading the ELF file does not suffice to know that the stack access rights will change.
As I understand it, if a properly compiled JNI library loads another library which requires execstack, the stack changes access rights, too.
This is detected by the SafeFetch, and the stacks are fixed. But in this case the stacks are unprotected for a short time.
To improve on this, one would have to request a safepoint for each library loaded. But anyway, there is no bulletproof solution, as the loaded code could do an mprotect() at some point.
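The ELF inspection step the patch describes can be sketched in Python: look for the PT_GNU_STACK program header and check its execute flag, treating a missing entry as executable (the old-default case Goetz mentions). This sketch parses a synthetic 64-bit little-endian image rather than a real library, and is only an illustration of the check, not the JDK's actual code:

```python
# Sketch: does a 64-bit LE ELF image request an executable stack?
import struct

PT_GNU_STACK = 0x6474E551
PF_X = 0x1  # execute permission bit in p_flags

def stack_is_executable(elf_bytes):
    """True if the image asks for an executable stack. A missing
    PT_GNU_STACK entry also means executable (the loader's old default),
    which is exactly the case the patch has to detect."""
    e_phoff = struct.unpack_from("<Q", elf_bytes, 0x20)[0]
    e_phentsize = struct.unpack_from("<H", elf_bytes, 0x36)[0]
    e_phnum = struct.unpack_from("<H", elf_bytes, 0x38)[0]
    for i in range(e_phnum):
        p_type, p_flags = struct.unpack_from("<II", elf_bytes, e_phoff + i * e_phentsize)
        if p_type == PT_GNU_STACK:
            return bool(p_flags & PF_X)
    return True  # no PT_GNU_STACK: fall back to an executable stack

def fake_elf(gnu_stack_flags):
    """Build a minimal synthetic ELF64 image with one program header."""
    header = bytearray(64)
    struct.pack_into("<Q", header, 0x20, 64)   # e_phoff: right after the header
    struct.pack_into("<H", header, 0x36, 56)   # e_phentsize for ELF64
    struct.pack_into("<H", header, 0x38, 1)    # e_phnum
    phdr = bytearray(56)
    struct.pack_into("<II", phdr, 0, PT_GNU_STACK, gnu_stack_flags)
    return bytes(header) + bytes(phdr)

print(stack_is_executable(fake_elf(0x6)))  # RW  → False (safe library)
print(stack_is_executable(fake_elf(0x7)))  # RWX → True  (the problem case)
```

This static check is cheap enough to run before dlopen, which is why the patch can decide up front whether loading must happen inside a safepoint.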
Best regards,
Goetz.
See also:
One possible fix for all but the main thread is for os::Linux::default_guard_size() to return (StackYellowPages+StackRedPages)*page_size() for java threads. However there is a comment saying guard pages are expensive, though it's not clear why letting pthreads do it is more expensive than letting hotspot do it. They both use mprotect.
The reason this fix doesn't work for the main thread is because the main thread is created by the launcher so it doesn't call os::create_thread().
See also:
David said the following on 08/29/10 09:03:
> Douglas said the following on 08/28/10 23:42:
>> I think the long term solution is to modify the
>> __make_stacks_executable function in libc/nptl/allocatestack.c such
>> that it only adds the PROT_EXEC bit to each thread's stack and leaves
>> the current values for the PROT_READ and PROT_WRITE bits as is.
>
> I find it hard to believe that they would overwrite the permission bits
> rather than augment them in the first place. I can't wait to find out
> what justification they think they had for doing that. :(
<sigh> Looks like it is because the OS gives no way to do otherwise :(
mprotect doesn't allow you to add to the existing permissions only set
them - and there's no API that I can see to query page permissions.
There's some interesting discussion in this LK thread:
David
The current plan of action is:
+ import SAP patch
+ add a warning message whenever an "old" DLL is loaded
+ add diagnostic message in error log in case VM crashes
due to an indirectly-loaded "old" DLL
Need to backport from 8009588 -> HSX24:
URL:
User: amurillo
Date: 2013-03-15 22:31:21 +0000
URL:
User: zgu
Date: 2013-03-29 16:53:44 +0000
Note: also contains the fixes in JDK-8010389.
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7107135